The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes 2018-01-12T01:10:07.056Z
Donating To High-Risk High-Reward Charities 2017-02-14T04:20:31.664Z


Comment by Daniel_Eth on Has anyone found an effective way to scrub indoor CO2? · 2021-06-28T17:57:46.512Z · EA · GW

Also the cost of sound, and possibly outside pollution (though that can be addressed with HEPA filters & ozone filters)

Comment by Daniel_Eth on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-21T04:12:16.213Z · EA · GW

"There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing"

Not only do I somewhat disagree with this conclusion, but I don't think this is the right way to frame it. If we discard the "Very little information" group, then there's basically a three-way tie between "surprisingly successful", "unsurprisingly successful", and "surprisingly unsuccessful". If a similar amount of grants are surprisingly successful and surprisingly unsuccessful, the main takeaway to me is good calibration about how successful funded grants are likely to be.

Comment by Daniel_Eth on Kardashev for Kindness · 2021-06-16T22:57:41.261Z · EA · GW

"I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness"

Right, it wouldn't necessarily be neutral – my point was that your definition of Type III allowed for a neutral world, not that it required one. I think it makes more sense for the highest classification to be reserved specifically for a very positive world, as opposed to something that could be anywhere from neutral to very positive.

Comment by Daniel_Eth on Event-driven mission hedging and the 2020 US election · 2021-06-16T17:05:56.482Z · EA · GW

Good points.

Comment by Daniel_Eth on Event-driven mission hedging and the 2020 US election · 2021-06-15T04:27:55.031Z · EA · GW

If you expect your donation to be ~10x more valuable if one political party is in power, then it probably makes more sense to just hold* your money until they are in power. I suppose the exception here would be if you don't expect the opportunity to come up again (eg., if it's about a specific politician being president, or one party having a supermajority), but I don't see a Biden presidency as presenting such a unique opportunity.


*presumably actually as an investment
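A quick back-of-the-envelope sketch of the hold-vs-donate-now comparison (all figures here – the donation size, investment return, and wait time – are made-up placeholders, not numbers from the comment):

```python
# Compare donating now (when donations are at baseline value) against
# investing the money and donating once the preferred party holds power.
# All parameters are illustrative assumptions.
def donate_now_value(amount, baseline_multiplier=1.0):
    return amount * baseline_multiplier

def hold_and_donate_value(amount, high_multiplier=10.0,
                          annual_return=0.05, years_until_power=4):
    # Money is held as an investment while waiting (per the footnote above).
    grown = amount * (1 + annual_return) ** years_until_power
    return grown * high_multiplier

now = donate_now_value(100_000)
later = hold_and_donate_value(100_000)
# With a ~10x value multiplier, waiting dominates unless the wait is
# extremely long or the opportunity never recurs.
```

Under these assumptions, holding wins by roughly an order of magnitude, which is why the exception in the comment (the opportunity never coming up again) is doing all the work.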

Comment by Daniel_Eth on Kardashev for Kindness · 2021-06-14T02:14:50.146Z · EA · GW

So I like this idea, but I think the exclusively suffering-focused viewpoint is misguided. In particular:
"In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI"

^this would be achieved if we had a "society" entirely of sentient AI that were always at hedonic neutral. Such lives would involve experiencing zero sense of joy, wonder, meaning, friendship, love, etc – just totally apathetic sensing of the outside world and meaningless pursuit of activity. It's hard to imagine this would be the absolute pinnacle of civilizational existence.

Edit: to be clear, I'm not arguing "for" suffering (or that suffering is necessary for joy), just "for" pleasure in addition to the elimination of suffering.

Comment by Daniel_Eth on A Viral License for AI Safety · 2021-06-14T01:49:08.276Z · EA · GW

I'm not sure how well the analogy holds. With GPL, for-profit companies would lose their profits. With the AI Safety analog, they'd be able to keep 100% of their profits, so long as they followed XYZ safety protocols (which would be pushing them towards goals they want anyway – none of the major tech companies wants to cause human extinction).

Comment by Daniel_Eth on Long-Term Future Fund: May 2021 grant recommendations · 2021-05-31T05:36:43.664Z · EA · GW

This is correct.

Comment by Daniel_Eth on Linch's Shortform · 2021-05-24T06:01:07.007Z · EA · GW

So framing this in the inverse way – if you have a windfall of time from "life" getting in the way less, you spend that time mostly on the most important work, instead of things like extra meetings. This seems good. Perhaps it would be good to spend less of your time on things like meetings and more on things like research, but (I'd guess) this is true whether or not "life" is getting in the way more.

Comment by Daniel_Eth on Thoughts on being overqualified for EA positions · 2021-05-03T17:25:00.348Z · EA · GW

It seems like one solution would be to pay people more. I feel like some in EA are against this because they worry high pay will attract people who are just in it for the money – but this is an argument for perhaps paying people ~20% less than they'd get in the private sector, not ~80% less (which seems to be what some EA positions pay relative to the skills they'd want for the hire).

Comment by Daniel_Eth on Case studies of self-governance to reduce technology risk · 2021-04-12T03:53:15.047Z · EA · GW

Thank you for this post, I thought it was valuable. I'd just like to flag that regarding your recommendation, "we could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns" – I think this is good if done in the right way, but can also be bad if done in the wrong way. More specifically, insofar as near-term and long-term concerns are similar (eg., lack of transparency in deep learning means that we can't tell if parole systems today are using proxies we don't want, and plausibly could mean that we won't know the goals of superintelligent systems in the future), it makes sense to highlight these similarities. On the other hand, insofar as the concerns aren't the same, statements that gloss over the differences (eg., claims that we need UBI because automation will lead to superintelligent robots that aren't aligned with human interests) can be harmful for several reasons: people who see that the logic doesn't actually follow will be turned off, and people convinced that long-term concerns are just near-term concerns at a larger scale might ignore problems necessary for long-term success that don't have near-term analogues.

Comment by Daniel_Eth on peterbarnett's Shortform · 2021-03-22T21:08:19.326Z · EA · GW

Humans seem like (plausible) utility monsters compared to ants, and  many religious people have a conception of god that would make Him a utility monster ("maybe you don't like prayer and following all these rules, but you can't even conceive of the - 'joy' doesn't even do it justice - how much grander it is to god if we follow these rules than even the best experiences in our whole lives!"). Anti-utility monster sentiments seem to largely be coming from a place where someone imagines a human that's pretty happy by human standards, and thinks the words "orders of magnitude happier than what any human feels", and then they notice their intuition doesn't track the words "orders of magnitude".

Comment by Daniel_Eth on alexrjl's Shortform · 2021-02-24T07:55:26.430Z · EA · GW

Just flagging that space doesn't solve anything - it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel ~quadratically with time, which won't keep up with either exponential or hyperbolic growth.
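A minimal numerical sketch of this point (all constants are invented placeholders): resources reachable under a light-speed limit grow only polynomially with time, and exponential demand eventually overtakes any polynomial, regardless of the constants chosen.

```python
# Polynomial resource growth vs. exponential demand growth.
def reachable_resources(t, k=1000.0, power=2):
    # ~quadratic growth in usable resources from expansion, per the comment
    return k * t ** power

def demand(t, base=1.0, growth_rate=0.02):
    # 2%/year exponential economic growth (an assumed placeholder rate)
    return base * (1 + growth_rate) ** t

def crossover_year():
    # First year in which exponential demand exceeds polynomial supply.
    # Guaranteed to terminate: the exponential always overtakes.
    t = 1
    while demand(t) <= reachable_resources(t):
        t += 1
    return t
```

Even with resources given a 1000x head start and a modest 2% growth rate, demand crosses supply within a couple of millennia; raising the polynomial's power or coefficient only delays, never prevents, the crossover.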

Comment by Daniel_Eth on MichaelA's Shortform · 2021-02-23T07:02:39.939Z · EA · GW

"Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI"

As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.

Comment by Daniel_Eth on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-16T00:31:30.235Z · EA · GW

I'm honestly not certain - I don't believe we'll solve any of these problems by a degrowth approach, so the only way to get a real solution is via innovation and/or adoption of solutions. More people would help with that, but also would contribute more to the problem in the meantime. I think whether the sign was positive or negative might depend on the specifics (eg, I think if environmentalists have fewer kids because of a fear of overpopulation, that will generally be bad for the environment).

Comment by Daniel_Eth on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-15T18:29:10.025Z · EA · GW

Another point - more humans means more people to find solutions. So we have more people polluting the planet, but also more people working on clean energy solutions that will get us off fossil fuels.

Comment by Daniel_Eth on Do power laws drive politics? · 2021-02-13T21:02:53.907Z · EA · GW

Worth noting that if some political choices have very large negative outcomes, then choosing political paths that avoid those outcomes would have very positive counterfactual impact, even if no one sees it.

Comment by Daniel_Eth on Good v. Optimal Futures · 2020-12-13T20:11:30.455Z · EA · GW

I agree with the general point that: 

E[~optimal future] - E[good future] >> E[good future] - E[meh/no future]

It's not totally clear to me how much can be done to optimize chances of ~optimal future (as opposed to, there's probably a lot more that can be done to decrease X-risk), but I do have an intuition that probably some good work on the issue can be done. This does seem like an under-explored area, and I would personally like to see more research in it.

I'd also like to signal-boost a relevant paper by Bostrom and Shulman, which proposes that an ~optimal compromise (along multiple axes) between human interests and totalist moral stances could be achieved by, for instance, filling 99.99% of the universe with hedonium and leaving the rest to (post-)humans.

Comment by Daniel_Eth on What types of charity will be the most effective for creating a more equal society? · 2020-10-14T08:41:42.375Z · EA · GW

I think it's really bad if people feel like they can't push back against claims they don't agree with (especially regarding cause/intervention prioritization), and I don't think the author of a post saying (effectively) "please don't push back against this claim if you disagree with it" should be able to insulate claims from scrutiny. Note that the author didn't say "if we think claim X is true, what should we do, but please let's stay focused and not argue about claim X here" but instead "I think claim X is true - given that, what should we do?"

Comment by Daniel_Eth on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T13:34:10.474Z · EA · GW

"the root cause of most of the ills of society is inequality, primarily economic inequality - income inequality"

While I think income inequality (or, perhaps even more so, consumption inequality) is a large problem, I don't think it's the root cause of most of the ills of society. I'd imagine that tribalism, selfishness, mental-health problems, and so on are larger causes. In the US, for instance, my sense is that racism is a root of more problems than is income inequality.

More specifically answering the question you asked, I'd imagine political solutions would be the most effective here, as the government plays such a large role in influencing the economic distribution, and the amount of money in politics is incredibly small compared to the effect of political outcomes. I could imagine effective organizations in this area could include think tanks searching for political solutions, firms lobbying for implementing these solutions, or organizations that work to elect politicians/parties that are more likely to appropriately address these concerns.

[I'd also note that, from a global perspective, inequality between countries may typically be larger than inequality within countries, so it would perhaps be better to focus on health and development charities such as AMF, though one could argue that (for instance) social problems in the US spill over into problems for the rest of the world, so focusing on inequality in the US may be more important than a naive calculation would indicate.]

Comment by Daniel_Eth on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T22:52:39.842Z · EA · GW

FWIW, here's a Vox article arguing that gridlock from presidential systems isn't just bad in terms of "normal" policy outcomes, but can also lead to crises of legitimacy if polarization is too high (in which case the executive and legislative branches may both claim to speak for the people while disagreeing, and democratic principles won't necessarily say how to resolve the disagreement), which runs the risk of collapsing the entire political system.

Comment by Daniel_Eth on Open Communication in the Days of Malicious Online Actors · 2020-10-08T19:49:38.812Z · EA · GW

Thanks, I think this is interesting and these sorts of considerations may become increasingly important as EA grows. One other strategy that I think is worth pursuing is preventative measures. IMHO, ideally EA would be the kind of community that selectively repels people likely to be malicious (eg I think it's good if we repel people who are generally fueled by anger, people who are particularly loud and annoying, people who are racist, etc). I think we already do a pretty good job of "smacking down" people who are very brash or insulting to other members, and I think the epistemic norms in the community probably also select somewhat against people who are particularly angry or who have a tendency to engage in ad hominem. Might also be worth considering what other traits we want to select for/against, and what sort of norms we could adopt towards those ends.

Comment by Daniel_Eth on No More Pandemics: a grassroots group? · 2020-10-03T19:41:35.731Z · EA · GW

Seems like it could be a good idea if implemented well. A couple considerations come to mind:

• I think it's possible for something like this to inadvertently cause harm by pushing policies that are good for combatting natural pandemics but also increase the chances of/potential severity of engineered pandemics. Should be avoidable if the leaders of the group are in communication with experts that focus on engineered pandemics.

• I'd strongly recommend engaging with people who do political polling (such as people who work at Data for Progress) when deciding political priorities. Pushing policies that are popular is presumably much more tractable than pushing those that are not, and pollsters could help you determine which policies fit into which category.

Comment by Daniel_Eth on aysu's Shortform · 2020-09-24T16:24:01.998Z · EA · GW

Welcome to the community!

Both of these are generally thought to be good things, though personally I'd expect growing the movement would be easier than spreading EA-style thought (partially because the EA community is small, while the outside world is big, so it's probably much easier to have a substantial relative impact in growing the community than in, for instance, getting the outside world to be more impact-aware, though there are other considerations). One caveat, though, is that rash attempts to grow the movement have the potential to be counterproductive.

Comment by Daniel_Eth on [deleted post] 2020-09-23T19:05:43.389Z

Peter McIntyre (at 80k) has a blogpost where he describes how he makes meals along those lines.

Comment by Daniel_Eth on Objections to Value-Alignment between Effective Altruists · 2020-07-17T13:55:55.346Z · EA · GW

I think there's some interesting points here! A few reactions:

• I don't think advocates of traditional diversity are primarily concerned with cognitive diversity. I think the reasoning is more (if altruistic) to combat discrimination/bigotry or (if self-interested) good PR/a larger pool of applicants to choose from.

• I think in some of the areas that EAs have homogeneity it's bad (eg it's bad that we lack traditional diversity, it's bad that we lack so much geographic diversity, it's bad that we have so much homogeneity of mannerisms, it's bad that certain intellectual traditions like neoliberalism or the Pinkerian progress narrative are overwhelmingly fashionable in EA, etc), but I'd actually push back against the claim that it's bad that we have such a strong consequentialist bent (this just seems to go so hand-in-hand with EA - one doesn't have to be a consequentialist to want to improve the external world as much as possible, but I'd imagine there's a strong tendency for that) or that we lack representation of certain political leanings (eg I wouldn't want people in the alt-right in EA).

• If people don't feel comfortable going against the grain and voicing opposition, I'd agree that's bad because we'd lack ability to self-correct (though fwiw my personal impression is that EA is far better on this metric than almost all other subcultures or movements).

• It's not clear to me that hierarchy/centralization is bad - there are certain times when I think we err too much on this side, but then I think others where we err too much the other way. If we had significantly less centralization, I'd have legitimate concerns about coordination, info-hazards, branding, and evaluating quality of approaches/organizations.

• I agree that some of the discussion about intelligence is somewhat cringe, but it seems to me that we've gotten better on that metric over time, not worse.

• Agree that the fandom culture is... not a good feature of EA

• There probably are some feedback loops here as you mention, but there are other mechanisms going the other direction. It's not clear to me that the situation is getting worse and we're headed for "locking in" unfortunate dynamics, and if anything I think we've actually mostly improved on these factors over time (and, crucially, my inside view is that we've improved our course-correction ability over time).

Comment by Daniel_Eth on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T17:12:58.437Z · EA · GW

My view:

Short answer: it's suffering that's bad, intrinsically (though suffering can be instrumentally good)

Long answer: There are several different reasons suffering may be voluntary. To list a few:

1) suffering for some greater good (eg delayed pleasure, suffering for something that will make more people happy, etc)

2) false belief that your suffering is for a greater good (eg you think suffering will give you karma points that will make you happier in next life)

3) suffering that is "meaningful" (such as mourning)

4) an experience that includes some suffering and some pleasure, and that is on the whole enhanced by the suffering

For 1, the good that the suffering leads to is intrinsically good, the suffering is instrumentally good but intrinsically bad. If you could get the greater good without the suffering, that would be better.

2, 3, and 4 are really just special cases of 1. For all, the suffering component of the experience is intrinsically bad. For 2, you falsely believe the suffering is still instrumentally good. For 3, the "meaningfulness" of the experience is the greater good, and the suffering is instrumental in that. It would be better if you could get the same amount of meaningfulness without suffering. Similarly for 4 - the pleasurable part of the experience is the greater good.

Comment by Daniel_Eth on Thoughts on short timelines · 2018-10-23T19:07:41.674Z · EA · GW

I think this line of reasoning may be misguided, at least if taken in a particular direction. If the AI Safety community loudly talks about there being a significant chance of AGI within 10 years, then this will hurt the AI Safety community's reputation when 10 years later we're not even close. It's important that we don't come off as alarmists. I'd also imagine that the argument "1% is still significant enough to warrant focus" won't resonate with a lot of people. If we really think the chances in the next 10 years are quite small, I think we're better off (at least for PR reasons) talking about how there's a significant chance of AGI in 20-30 years (or whatever we think), and how solving the problem of safety might take that long, so we should start today.

Comment by Daniel_Eth on Thoughts on short timelines · 2018-10-23T19:04:18.505Z · EA · GW

I think you're right about AGI being very unlikely within the next 10 years. I would note, though, that the OpenPhil piece you linked to predicted at least 10% chance within 20 years, not 10 years (and I expect many people predicting "short timelines" would consider 20 years to be "short"). If you grant 1-2% chance to AGI in 10 years, perhaps that translates to 5-10% within 20 years.

Comment by Daniel_Eth on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-22T09:30:56.723Z · EA · GW

Similarly, the word "majority" is used in a couple places where it should have instead said "plurality." (Sorry to be nitpicky)

Comment by Daniel_Eth on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-14T02:19:48.922Z · EA · GW

I think you're understating the importance of taking up the resources. There aren't THAT many super high quality medical researchers who can credibly signal their high quality.

Comment by Daniel_Eth on Are men more likely to attend EA London events? Attendance data, 2016-2018. · 2018-08-11T05:45:02.915Z · EA · GW

Are women more likely to return for a second event if the gender ratio of the first event they attended was more balanced? This could tell you whether the difference is simply a result of the community being mostly male right now, or if it's due to some other reason(s).

Comment by Daniel_Eth on Problems with EA representativeness and how to solve it · 2018-08-03T20:14:46.860Z · EA · GW

One easy way you could get a sample that's both broadly representative and also weights more involved EAs more is to make the survey available to everyone on the forum, but to weight all responses by the square root of the respondent's karma. Karma is obviously an imperfect proxy, but it seems much easier to get than people's donation histories, and it doesn't seem biased in any particular direction. The square root is so that the few people with the absolute highest karma don't completely dominate the survey.
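The square-root weighting is straightforward to implement; here's a sketch (the weighting rule is from the comment, the example data is invented):

```python
import math

# Weight each survey response by the square root of the respondent's
# forum karma, so more-involved members count more without the very
# highest-karma users dominating outright.
def karma_weighted_mean(responses):
    """responses: iterable of (karma, numeric_answer) pairs."""
    weights = [(math.sqrt(k), a) for k, a in responses]
    total = sum(w for w, _ in weights)
    return sum(w * a for w, a in weights) / total

# With sqrt weighting, a respondent with 10,000 karma counts 100x as
# much as one with 1 karma, rather than 10,000x under raw-karma weighting.
```

This illustrates the dampening effect: the top respondent still matters most, but the weight ratio grows as the square root of the karma ratio.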

Comment by Daniel_Eth on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-24T18:42:07.418Z · EA · GW

"I’d compiled a list of 40-odd evidence-based activities and re-thinking exercises, i.e. behavioural and cognitive interventions, that I’d come across during my research"

Have you made this list public anywhere? I'd be interested in seeing the list (and I assume others would be too).

Comment by Daniel_Eth on Against prediction markets · 2018-05-19T16:09:22.541Z · EA · GW

So let's assume that teams of superforecasters with extremized predictions can do significantly better than any other mechanism of prediction that we've thought of, including prediction markets as they've existed so far. If so, then with prediction markets of sufficiently high volume and liquidity (just for the sake of argument, imagine prediction markets on the scale of the NYSE today), we would expect firms to crop up that would identify superforecasters, train them, and optimize exactly how much to extremize their predictions (as well as iterating on this basic formula). These superforecaster firms would come to dominate the prediction markets (we'd eventually wind up with companies that were the equivalent of Goldman Sachs, but for prediction markets), and the prediction markets would be better than any other method of prediction. Of course, we're a LONG way away from having prediction markets like that, but I think this at least shows the theoretical potential of large-scale prediction markets.
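For concreteness, here is one common recipe for "extremizing" an aggregate forecast from the forecasting literature (the comment doesn't specify a method, so this is an illustrative choice, and the extremization factor is a placeholder): average the individual probabilities in log-odds space, then scale by a factor a > 1 to push the aggregate away from 0.5, on the theory that each forecaster holds only part of the total evidence.

```python
import math

def extremized_aggregate(probs, a=2.0):
    """Aggregate probability forecasts with log-odds extremization.

    probs: individual forecasters' probabilities, each in (0, 1).
    a: extremization factor; a > 1 pushes the result away from 0.5.
    """
    mean_log_odds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-a * mean_log_odds))
```

With a = 1 this reduces to pooling by geometric mean of odds; a > 1 makes a group that unanimously says 70% report something more confident than 70%.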

Comment by Daniel_Eth on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T00:23:26.655Z · EA · GW

I thought this piece was good. I agree that MCE work is likely quite high impact - perhaps around the same level as X-risk work - and that it has been generally ignored by EAs. I also agree that it would be good for there to be more MCE work going forward. Here's my 2 cents:

You seem to be saying that AIA is a technical problem and MCE is a social problem. While I think there is something to this, I think there are very important technical and social sides to both of these. Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one. Also, avoiding a technological race for AGI seems important for AIA, and this also is more a social problem than a technical one.

For MCE, the 2 best things I can imagine (that I think are plausible) are both technical in nature. First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to. Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, if it was experiencing joy or suffering? The only way would be if we develop theories of consciousness that we have high confidence in. But right now we're very limited in studying consciousness, because our tools at interfacing with the brain are crude. Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness. Again, developing these technologies would be a technical problem.

Of course, these are just the first ideas that come into my mind, and there very well may be social solutions that could do more than the technical solutions I mentioned, but I don't think we should rule out the potential role of technical solutions, either.

Comment by Daniel_Eth on On funding medical research · 2018-02-17T02:55:32.042Z · EA · GW

As long as we're talking about medical research from an EA perspective, I think we should consider funding therapies for reversing aging itself. In terms of scale, aging is undoubtedly by far the largest problem (100,000 people die from age-related diseases every single day, not to mention the psychological toll that aging causes). Aging is also quite neglected – very few researchers focus on trying to reverse it. Tractability is of course a concern here, but I think this point is a bit nuanced. Achieving a full and total cure for aging would clearly be quite hard. But what about a partial cure? What about a therapy that made 70-year-olds feel and act like they were 50, with an additional 20 years of life expectancy? Such a treatment may be much more tractable. At least a large part of aging seems to be due to several common mechanisms (such as DNA damage, accumulation of senescent cells, etc), and reversing some of these mechanisms (such as by restoring DNA, clearing the body of senescent cells, etc) might allow for such a treatment. Even the journal Nature (one of the 2 most prestigious science journals in the world) had a recent piece saying as much.

If anyone is interested in funding research toward curing aging, the SENS Foundation is arguably your best bet.

Comment by Daniel_Eth on Could I have some more systemic change, please, sir? · 2018-01-25T00:01:01.348Z · EA · GW

"the community members who agree with this reasoning, have moved on to other problem areas"

I've seen this problem come up with other areas as well. For instance, funding research to combat aging (eg the SENS Foundation) gets little support, because basically anyone who will "shut up and multiply" – and thereby conclude that SENS is higher EV than GiveWell charities – will use the same logic to conclude that AI safety is higher EV than GiveWell charities or SENS.

Comment by Daniel_Eth on Could I have some more systemic change, please, sir? · 2018-01-24T09:02:25.934Z · EA · GW

I really like this type of reasoning - I think it allows for easier comparisons than the standard expected value assessments people have occasionally tried to do for systemic changes. A couple points, though.

1) I think very few systemic changes will affect 1B people. Typically I assume a campaign will be focused on a particular country, and likely only a portion of the population of that country would be positively affected by the change – meaning 10M or 100M people is probably much more typical. This shifts the cutoff cost closer to around $1B to $10B, which seems plausibly in the same ballpark as GD.

2) Instead of asking "how much would this campaign cost to definitely succeed", you could ask "how much would it cost to run a campaign that had at least a 50% chance of succeeding" and then divide the HALYs by 2. I'd imagine this is a much easier question to answer, as you'd never be certain that an effort at systemic change would be successful, but you could become confident that the chances were high.
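The two adjustments above can be combined into one small expected-value calculation (every number here is a made-up placeholder for illustration, not an estimate from the post): shrink the affected population from 1B to the 10M–100M range, and halve the benefit for a campaign with a 50% chance of success.

```python
# Expected HALYs from a systemic-change campaign, discounted by its
# probability of success, and the resulting cost-effectiveness.
def expected_halys(people_affected, halys_per_person, p_success):
    return people_affected * halys_per_person * p_success

def halys_per_dollar(campaign_cost, people_affected,
                     halys_per_person=1.0, p_success=0.5):
    return expected_halys(people_affected, halys_per_person,
                          p_success) / campaign_cost

# e.g. a $1B campaign with a 50% shot at one HALY each for 100M people
# yields 100e6 * 1.0 * 0.5 = 50M expected HALYs, i.e. $20 per HALY.
```

Framing it this way makes the comparison with direct-delivery charities a matter of plugging in their cost-per-HALY on the same scale.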

Comment by Daniel_Eth on 69 things that might be pretty effective to fund · 2018-01-23T03:16:35.899Z · EA · GW

It seems like a lot of these are for funding particular researchers. I don't know of a way to do this in a tax-deductible manner. I think it would be good if someone created an organization that got tax exempt status and allowed for people to donate to them and specify specific researchers they wanted the donation to go towards.

Comment by Daniel_Eth on [deleted post] 2018-01-21T03:11:34.005Z

Yeah, I was referring to the accessible universe, though I guess you are right that I can't even be 100% certain that our theories on that won't be overturned at some point.

Comment by Daniel_Eth on [deleted post] 2018-01-19T23:48:01.328Z

Thanks for taking the time to write this post. I have a few comments - some supportive, and some in disagreement with what you wrote.

I find your worries about Peak Oil to be unsupported. In the last several years, the US has found tons of natural gas that it can access – perhaps even 100 years' worth or more. On top of this, renewables are finally starting to really prove their worth, with both wind and solar reaching new heights. Solar in particular has improved drastically – exponential decay in cost over decades (with cost finally reaching parity with fossil fuels in many parts of the world), exponential increase in installations, etc. If fossil fuels really were running out, that would arguably be a good thing – it would increase the price of fossil fuels and make the transition to solar even quicker (and we'd have a better chance of avoiding the worst effects of climate change). Unfortunately, the opposite seems more likely – as ice in the Arctic melts, more fossil fuels (currently under the ice) will become accessible.

I think "The Limits to Growth" is not a particularly useful guide to our situation. This report might have been a reasonable thing to worry about in 1972, but a lot has changed since then that we need to take into account. First off, yes, obviously exponential growth with finite resources will eventually hit a wall, and obviously the universe is finite. But while there are limits, we're not even remotely close to them. There are several specific technological trends that each seem likely to turn LTG-type thinking about near-term limits on its head, including clean energy, AI, nanotechnology, and biotechnology. We are so far from the limits of these technologies – yet even modest improvements will let us surpass the limits of the world today. Regarding the fact that the 1970-2000 data fits with the predictions of LTG – this point is just silly. LTG's prediction can be roughly summarized as "the status quo continues with things going well until around 2020 to 2030, and then stuff starts going terribly." The controversial claim isn't the first part about things continuing to go well for a while, but the second part about things then going terribly. The fact that we've continued to do well (as their model predicted!) doesn't mean that the second half of their prediction will also come true.

I have no idea how plausible a Malthusian disaster in Sub-Saharan Africa is. I know that climate change has the potential to cause massive famines and mass migrations - and I agree these have the potential to strengthen right-wing extremism in Europe (and that this would all be terrible). I don't know what the projected timeframe for that is, though. I also hadn't heard of most of the other problems you listed in this section. Unfortunately, after reading your section on peak oil, which struck me as both unsubstantiated (I mean no offense by this - just being straightforward) and somewhat biased (for instance, I can sense some resentment of "elites" in your writing, among other things), I don't know how much faith to have in your analysis of the Sub-Saharan African situation (which I feel much less qualified to judge than the other section).

I agree it is good for people to be thinking about these sorts of things, and I would encourage more research in this area. Also, I hadn't heard of the Transafrican Water Pipeline Project, and I agree it would make sense for EAs to evaluate whether it would be an effective use of charitable donations.

Comment by Daniel_Eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-18T00:59:33.284Z · EA · GW

Nanotechnology is technology with parts operating in the range of 1 nm to 100 nm, so this technology actually is nanotechnology - as is much of the rest of biotechnology.

You're right that non-biotech-based nanotechnology (what people typically think of as nanotechnology) hasn't been used much - that's largely due to it being a nascent field. I expect that to change over the coming decades as the technology improves. It might not, though, as biotech-based nanotechnology might stay in the lead.

Comment by Daniel_Eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-17T09:26:25.249Z · EA · GW

Broadly speaking, nanoparticles (or nanorobots, depending on how complicated they are) that scan the brain from the inside, in vivo. The sort of capability I'm imagining is the ability to monitor every neuron in large neural circuits simultaneously, each for many different chemical signals (such as particular neurotransmitters). Of course, since this technology doesn't exist yet, the specifics are necessarily uncertain - these probes might include CMOS circuitry, they might be based on DNA origami, or they might be unlike any technology that currently exists. Such probes would allow for building much more accurate maps of brain activity.

Comment by Daniel_Eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-13T02:20:09.489Z · EA · GW

Neuroprosthesis-driven uploading seems vastly harder for several reasons:

• you'd still need to understand in great detail how the brain processes information (if you don't, you'll be left with an upload that, while perhaps intelligent, would not act like the person did - perhaps differing so drastically that it would be better to think of it as a form of NAGI than as WBE)

• integrating the exocortex with the brain would likely still require nanotechnology able to interface with the brain

• ethical/ regulatory hurdles here seem immense

I'd actually expect that in order to understand the brain well enough for neuroprosthesis-driven uploading, we'd still likely need to run experiments with nanoprobes (for the same reasons as in the paper: much of the information processing happens at the sub-cellular level - this doesn't mean we'd have to replicate that information processing in a biologically realistic manner, but we'd likely need to at least understand how the information is processed)

Comment by Daniel_Eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-12T01:21:34.301Z · EA · GW

Also here's a 5 minute talk I gave at EA Global London on the same topic:

Comment by Daniel_Eth on Ideological engineering and social control: A neglected topic in AI safety research? · 2017-09-03T05:56:41.581Z · EA · GW

I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main reasons are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying the solution (as contrasted to AGI Safety, where all vested interests would want the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons etc for the same reason that I'd imagine it would decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focused on profits than on pushing ideologies.

Comment by Daniel_Eth on Nothing Wrong With AI Weapons · 2017-08-29T06:59:02.113Z · EA · GW

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where machines are acting at that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The time from limited deployment of autonomous weapons to a mostly automated military likely would not be long, especially since an arms race could ensue. And I could imagine catastrophes occurring due to errors in machines that are simply in a "peaceful posture," not to mention that such a rule could be very hard to enforce internationally, or even to determine which countries were breaking it. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment by Daniel_Eth on Nothing Wrong With AI Weapons · 2017-08-29T00:51:42.902Z · EA · GW

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them that don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make the decision to kill, or at least approve it) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not carry the same risk of leading to a military flash crash.
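The "works well until it doesn't" failure mode can be shown in a few lines (a deliberately contrived sketch, not a model of any real trading or weapons system): a model fit under one assumption about the world performs fine in-distribution and badly once conditions shift.

```python
# Sketch of distribution shift: a linear model fit on one regime
# extrapolates badly once the underlying conditions change.
import numpy as np

rng = np.random.default_rng(0)

def true_signal(x):
    # Linear below x = 1, then the regime changes (a "surprise" nonlinearity).
    return np.where(x < 1.0, 2.0 * x, 2.0 + 10.0 * (x - 1.0) ** 2)

# Fit on data where the linear assumption holds...
x_train = rng.uniform(0.0, 1.0, 200)
y_train = true_signal(x_train) + rng.normal(0.0, 0.05, 200)
slope, intercept = np.polyfit(x_train, y_train, 1)

def model(x):
    return slope * x + intercept

# ...then evaluate in-distribution vs. after a shift.
err_in = abs(model(0.5) - true_signal(0.5))      # conditions as assumed
err_shift = abs(model(2.0) - true_signal(2.0))   # conditions have changed
print(err_in, err_shift)  # tiny error in-distribution, large error after the shift
```

The in-sample error statistics here look excellent, which is exactly why the out-of-regime failure comes as a surprise - the analogue of the flash crash dynamic described above.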

Comment by Daniel_Eth on Medical research: cancer is hugely overfunded; here's what to choose instead · 2017-08-20T12:46:54.758Z · EA · GW

The vast majority of ailments derive from unfortunate happenings at the subcellular level (i.e. the nanoscale). This includes amyloid buildup in Alzheimer's, DNA mutations in cancer, etc. Right now, medicine is - to a large degree - hoping to get lucky by finding chemicals that happen to combat these processes. But a more thorough ability to actually influence events on this scale could be a boon for medicine. What type of nanotech am I envisioning exactly? That's pretty broad - though in the short/medium term it could be carbon nanotubes targeting cancer cells, DNA origami used to deliver drugs in a targeted way, or something else entirely.