Posts

The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes 2018-01-12T01:10:07.056Z
Donating To High-Risk High-Reward Charities 2017-02-14T04:20:31.664Z

Comments

Comment by daniel_eth on What types of charity will be the most effective for creating a more equal society? · 2020-10-14T08:41:42.375Z · EA · GW

I think it's really bad if people feel like they can't push back against claims they don't agree with (especially regarding cause/intervention prioritization), and I don't think the author of a post saying (effectively) "please don't push back against this claim if you disagree with it" should be able to insulate claims from scrutiny. Note that the author didn't say "if we think claim X is true, what should we do, but please let's stay focused and not argue about claim X here" but instead "I think claim X is true - given that, what should we do?"

Comment by daniel_eth on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T13:34:10.474Z · EA · GW

"the root cause of most of the ills of society is inequality, primarily economic inequality - income inequality"

While I think income inequality (or, perhaps even more so, consumption inequality) is a large problem, I don't think it's the root cause of most of the ills of society. I'd imagine that tribalism, selfishness, mental-health problems, and so on are larger causes. In the US, for instance, my sense is that racism is a root of more problems than is income inequality.

To answer the question you asked more specifically, I'd imagine political solutions would be the most effective here, as the government plays such a large role in influencing the economic distribution, and the amount of money in politics is incredibly small compared to the effect of political outcomes. Effective organizations in this area could include think tanks searching for political solutions, firms lobbying to implement these solutions, or organizations that work to elect politicians/parties that are more likely to appropriately address these concerns.

[I'd also note that, from a global perspective, inequality between countries may typically be larger than inequality within countries, so it would perhaps be better to focus on health and development charities such as AMF, though one could make an argument that (for instance) social problems in the US spill over into problems for the rest of the world, so focusing on inequality in the US may be more important than a naive calculation would indicate.]

Comment by daniel_eth on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T22:52:39.842Z · EA · GW

FWIW, here's a Vox article arguing that gridlock from presidential systems isn't just bad in terms of "normal" policy outcomes, but can also lead to crises of legitimacy if polarization is too high (in which case the executive and legislative branches may both claim to speak for the people while disagreeing, and democratic principles won't necessarily say how to resolve the disagreement), which runs the risk of collapsing the entire political system:

https://www.vox.com/2015/3/2/8120063/american-democracy-doomed

Comment by daniel_eth on Open Communication in the Days of Malicious Online Actors · 2020-10-08T19:49:38.812Z · EA · GW

Thanks, I think this is interesting and these sorts of considerations may become increasingly important as EA grows. One other strategy that I think is worth pursuing is preventative measures. IMHO, ideally EA would be the kind of community that selectively repels people likely to be malicious (eg I think it's good if we repel people who are generally fueled by anger, people who are particularly loud and annoying, people who are racist, etc). I think we already do a pretty good job of "smacking down" people who are very brash or insulting to other members, and I think the epistemic norms in the community probably also select somewhat against people who are particularly angry or who have a tendency to engage in ad hominem. Might also be worth considering what other traits we want to select for/against, and what sort of norms we could adopt towards those ends.

Comment by daniel_eth on No More Pandemics: a lobbying group? · 2020-10-03T19:41:35.731Z · EA · GW

Seems like it could be a good idea if implemented well. A couple considerations come to mind:

• I think it's possible for something like this to inadvertently cause harm by pushing policies that are good for combatting natural pandemics but that also increase the chances (or potential severity) of engineered pandemics. This should be avoidable if the leaders of the group are in communication with experts who focus on engineered pandemics.

• I'd strongly recommend engaging with people who do political polling (such as people who work at Data for Progress) when deciding political priorities. Pushing policies that are popular is presumably much more tractable than pushing those that are not, and pollsters could help you determine which policies fit into which category.

Comment by daniel_eth on aysu's Shortform · 2020-09-24T16:24:01.998Z · EA · GW

Welcome to the community!

Both of these are generally thought to be good things, though personally I'd expect growing the movement to be easier than spreading EA-style thought (partially because the EA community is small, while the outside world is big, so it's probably much easier to have a substantial relative impact in growing the community than in, for instance, getting the outside world to be more impact-aware, though there are other considerations). One caveat, though, is that rash attempts to grow the movement have the potential to be counterproductive.

Comment by Daniel_Eth on [deleted post] 2020-09-23T19:05:43.389Z

Peter McIntyre (at 80k) has a blogpost where he describes how he makes meals along those lines:

https://mcntyr.com/blog/peter-special

Comment by daniel_eth on Objections to Value-Alignment between Effective Altruists · 2020-07-17T13:55:55.346Z · EA · GW

I think there are some interesting points here! A few reactions:

• I don't think advocates of traditional diversity are primarily concerned with cognitive diversity. I think the reasoning is more (if altruistic) to combat discrimination/bigotry or (if self-interested) to get good PR/a larger pool of applicants to choose from.

• I think in some of the areas where EAs have homogeneity it's bad (eg it's bad that we lack traditional diversity, it's bad that we lack so much geographic diversity, it's bad that we have so much homogeneity of mannerisms, it's bad that certain intellectual traditions like neoliberalism or the Pinkerian progress narrative are overwhelmingly fashionable in EA, etc), but I'd actually push back against the claim that it's bad that we have such a strong consequentialist bent (this just seems to go so hand-in-hand with EA - one doesn't have to be a consequentialist to want to improve the external world as much as possible, but I'd imagine there's a strong tendency for that) or that we lack representation of certain political leanings (eg I wouldn't want people in the alt-right in EA).

• If people don't feel comfortable going against the grain and voicing opposition, I'd agree that's bad because we'd lack ability to self-correct (though fwiw my personal impression is that EA is far better on this metric than almost all other subcultures or movements).

• It's not clear to me that hierarchy/centralization is bad - there are certain times when I think we err too much on this side, but then I think there are others where we err too much the other way. If we had significantly less centralization, I'd have legitimate concerns about coordination, info-hazards, branding, and evaluating the quality of approaches/organizations.

• I agree that some of the discussion about intelligence is somewhat cringe, but it seems to me that we've gotten better on that metric over time, not worse.

• Agree that the fandom culture is... not a good feature of EA

• There probably are some feedback loops here as you mention, but there are other mechanisms going the other direction. It's not clear to me that the situation is getting worse and we're headed for "locking in" unfortunate dynamics, and if anything I think we've actually mostly improved on these factors over time (and, crucially, my inside view is that we've improved our course-correction ability over time).

Comment by daniel_eth on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T17:12:58.437Z · EA · GW

My view:

Short answer: it's suffering that's bad, intrinsically (though suffering can be instrumentally good)

Long answer: There are several different reasons suffering may be voluntary. To list a few:

1) suffering for some greater good (eg delayed pleasure, suffering for something that will make more people happy, etc)

2) false belief that your suffering is for a greater good (eg you think suffering will give you karma points that will make you happier in the next life)

3) suffering that is "meaningful" (such as mourning)

4) an experience that includes some suffering and some pleasure, and that is on the whole enhanced by the suffering

For 1, the good that the suffering leads to is intrinsically good, while the suffering itself is instrumentally good but intrinsically bad. If you could get the greater good without the suffering, that would be better.

2, 3, and 4 are really just special cases of 1. For all, the suffering component of the experience is intrinsically bad. For 2, you falsely believe the suffering is still instrumentally good. For 3, the "meaningfulness" of the experience is the greater good, and the suffering is instrumental in that. It would be better if you could get the same amount of meaningfulness without suffering. Similarly for 4 - the pleasurable part of the experience is the greater good.

Comment by daniel_eth on Thoughts on short timelines · 2018-10-23T19:07:41.674Z · EA · GW

I think this line of reasoning may be misguided, at least if taken in a particular direction. If the AI Safety community loudly talks about there being a significant chance of AGI within 10 years, then this will hurt the AI Safety community's reputation when 10 years later we're not even close. It's important that we don't come off as alarmists. I'd also imagine that the argument "1% is still significant enough to warrant focus" won't resonate with a lot of people. If we really think the chances in the next 10 years are quite small, I think we're better off (at least for PR reasons) talking about how there's a significant chance of AGI in 20-30 years (or whatever we think), and how solving the problem of safety might take that long, so we should start today.

Comment by daniel_eth on Thoughts on short timelines · 2018-10-23T19:04:18.505Z · EA · GW

I think you're right about AGI being very unlikely within the next 10 years. I would note, though, that the OpenPhil piece you linked to predicted at least a 10% chance within 20 years, not 10 years (and I expect many people predicting "short timelines" would consider 20 years to be "short"). If you grant a 1-2% chance of AGI within 10 years, perhaps that translates to 5-10% within 20 years.

Comment by daniel_eth on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-22T09:30:56.723Z · EA · GW

Similarly, the word "majority" is used in a couple places where it should have instead said "plurality." (Sorry to be nitpicky)

Comment by daniel_eth on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-14T02:19:48.922Z · EA · GW

I think you're understating the importance of taking up the resources. There aren't THAT many super high quality medical researchers who can credibly signal their high quality.

Comment by daniel_eth on Are men more likely to attend EA London events? Attendance data, 2016-2018. · 2018-08-11T05:45:02.915Z · EA · GW

Are women more likely to return for a second event if the gender ratio of the first event they attended was more balanced? This could tell you whether the difference is simply a result of the community being mostly male right now, or if it's due to some other reason(s).
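In case it's useful, here's a minimal sketch of how that could be checked from attendance data. The file name, the column schema (attendee_id, gender, event_id, event_date), and the 40% threshold for "balanced" are all hypothetical assumptions, not anything from the post:

```python
import pandas as pd

# Hypothetical export: one row per (attendee, event).
attendance = pd.read_csv("ea_london_attendance.csv", parse_dates=["event_date"])

# Share of attendees at each event who are women.
share_women = (attendance.assign(is_woman=attendance["gender"].eq("female"))
                         .groupby("event_id")["is_woman"].mean()
                         .rename("share_women"))

# Each attendee's first event, and whether they ever came back.
first_event = (attendance.sort_values("event_date")
                         .groupby("attendee_id").first())
returned = attendance.groupby("attendee_id")["event_id"].nunique() > 1

women_first = first_event[first_event["gender"] == "female"]
result = (women_first.join(share_women, on="event_id")
                     .assign(returned=returned,
                             balanced=lambda d: d["share_women"] >= 0.4)
                     .groupby("balanced")["returned"].mean())
print(result)  # return rate for women whose first event was vs. wasn't gender-balanced
```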

Comment by daniel_eth on Problems with EA representativeness and how to solve it · 2018-08-03T20:14:46.860Z · EA · GW

One easy way to get a sample that's both broadly representative and weighted toward more involved EAs is to make the survey available to everyone on the forum, but to weight all responses by the square root of the respondent's karma. Karma is obviously an imperfect proxy, but it seems much easier to get than people's donation histories, and it doesn't seem biased in any particular direction. The square root is so that the few people with the absolute highest karma don't completely dominate the survey.
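A minimal sketch of what that weighting would look like (the data, the 1-5 answer scale, and the function name are all made up - it's just meant to illustrate the square-root dampening):

```python
import numpy as np

def sqrt_karma_weighted_mean(responses, karma):
    """Average survey responses, weighting each respondent by sqrt(karma)."""
    responses = np.asarray(responses, dtype=float)
    weights = np.sqrt(np.maximum(np.asarray(karma, dtype=float), 0.0))
    return np.average(responses, weights=weights)

# Hypothetical data: answers on a 1-5 scale; the first respondent has very high karma.
answers = [5, 2, 3, 2]
karma = [10000, 100, 400, 25]
print(round(sqrt_karma_weighted_mean(answers, karma), 2))
# ≈ 4.37 - pulled toward the high-karma respondent's answer, but less so than
# raw-karma weighting (≈ 4.89) and more so than an unweighted mean (3.0).
```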

Comment by daniel_eth on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-24T18:42:07.418Z · EA · GW

"I’d compiled a list of 40-odd evidence-based activities and re-thinking exercises, i.e. behavioural and cognitive interventions, that I’d come across during my research"

Have you made this list public anywhere? I'd be interested in seeing the list (and I assume others would be too).

Comment by daniel_eth on Against prediction markets · 2018-05-19T16:09:22.541Z · EA · GW

So let's assume that teams of superforecasters with extremized predictions can do significantly better than any other mechanism of prediction that we've thought of, including prediction markets as they've existed so far. If so, then with prediction markets of sufficiently high volume and liquidity (just for the sake of argument, imagine prediction markets on the scale of the NYSE today), we would expect firms to crop up that would identify superforecasters, train them, and optimize for exactly how much to extremize their predictions (as well as iterating on this basic formula). These superforecaster firms would come to dominate the prediction markets (we'd eventually wind up with companies that were like the equivalent of Goldman Sachs but for prediction markets), and the prediction markets would be better than any other method of prediction. Of course, we're a LONG way away from having prediction markets like that, but I think this at least shows the theoretical potential of large scale prediction markets.
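For concreteness, here's a minimal sketch of one common form of extremization from the forecast-aggregation literature - averaging forecasts in log-odds space and then scaling by a factor greater than 1. The specific forecasts and the alpha value are made up; a real "superforecaster firm" would fit alpha empirically:

```python
import numpy as np

def extremize(probs, alpha=2.5):
    """Aggregate probability forecasts by averaging in log-odds space,
    then push the result away from 0.5 by scaling the mean log-odds by alpha > 1."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    log_odds = np.log(probs / (1 - probs))
    return 1 / (1 + np.exp(-alpha * log_odds.mean()))

# Five forecasters who all lean "yes" but are individually under-confident.
print(round(extremize([0.65, 0.70, 0.60, 0.72, 0.68], alpha=2.5), 2))  # ≈ 0.86
```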

Comment by daniel_eth on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T00:23:26.655Z · EA · GW

I thought this piece was good. I agree that MCE work is likely quite high impact - perhaps around the same level as X-risk work - and that it has been generally ignored by EAs. I also agree that it would be good for there to be more MCE work going forward. Here's my 2 cents:

You seem to be saying that AIA is a technical problem and MCE is a social problem. While I think there is something to this, I think there are very important technical and social sides to both of these. Much of the work related to AIA so far has been about raising awareness of the problem (eg the book Superintelligence), and this is more a social solution than a technical one. Also, avoiding a technological race for AGI seems important for AIA, and this too is more a social problem than a technical one.

For MCE, the 2 best things I can imagine (that I think are plausible) are both technical in nature. First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to. Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, if it was experiencing joy or suffering? The only way would be if we develop theories of consciousness that we have high confidence in. But right now we're very limited in studying consciousness, because our tools for interfacing with the brain are crude. Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness. Again, developing these technologies would be a technical problem.

Of course, these are just the first ideas that come into my mind, and there very well may be social solutions that could do more than the technical solutions I mentioned, but I don't think we should rule out the potential role of technical solutions, either.

Comment by daniel_eth on On funding medical research · 2018-02-17T02:55:32.042Z · EA · GW

As long as we're talking about medical research from an EA perspective, I think we should consider funding therapies for reversing aging itself. In terms of scale, aging is undoubtedly by far the largest problem (100,000 people die from age-related diseases every single day, not to mention the psychological toll that aging causes). Aging is also quite neglected - very few researchers focus on trying to reverse it. Tractability is of course a concern here, but I think this point is a bit nuanced. Achieving a full and total cure for aging would clearly be quite hard. But what about a partial cure? What about a therapy that made 70 year olds feel and act like they were 50, and gave them an additional 20 years of life expectancy? Such a treatment may be much more tractable. At least a large part of aging seems to be due to several common mechanisms (such as DNA damage, accumulation of senescent cells, etc), and reversing some of these mechanisms (such as by restoring DNA, clearing the body of senescent cells, etc) might allow for such a treatment. Even the journal Nature (one of the 2 most prestigious science journals in the world) had a recent piece saying as much: https://www.nature.com/articles/d41586-018-01668-0

If anyone is interested in funding research toward curing aging, the SENS Foundation (http://www.sens.org) is arguably your best bet.

Comment by daniel_eth on Could I have some more systemic change, please, sir? · 2018-01-25T00:01:01.348Z · EA · GW

"the community members who agree with this reasoning, have moved on to other problem areas"

I've seen this problem come up in other areas as well. For instance, funding research to combat aging (eg the SENS foundation) gets little support, because basically anyone who will "shut up and multiply" - coming to the conclusion that SENS is higher EV than GiveWell charities - will use the same logic to conclude that AI safety is higher EV than GiveWell charities or SENS.

Comment by daniel_eth on Could I have some more systemic change, please, sir? · 2018-01-24T09:02:25.934Z · EA · GW

I really like this type of reasoning - I think it allows for easier comparisons than the standard expected value assessments people have occasionally tried to do for systemic changes. A couple points, though.

1) I think very few systemic changes will affect 1B people. Typically I assume a campaign will be focussed on a particular country, and likely only a portion of the population of that country would be positively affected by the change - meaning 10M or 100M people is probably much more typical. This shifts the cutoff cost to somewhere around $1B to $10B, which seems plausibly in the same ballpark as GD.

2) Instead of asking "how much would this campaign cost to definitely succeed", you could ask "how much would it cost to run a campaign that had at least a 50% chance of succeeding" and then divide the HALYs by 2. I'd imagine this is a much easier question to answer, as you'd never be certain that an effort at systemic change would be successful, but you could become confident that the chances were high.
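To put both points in symbols (my notation, not anything from the original post): if a campaign costs $C$, has probability $p$ of succeeding, and success produces $h$ HALYs for each of $N$ people affected, then its expected cost-effectiveness is

$$\frac{\text{expected HALYs}}{\text{cost}} = \frac{p \cdot N \cdot h}{C},$$

so the "50% chance of success" framing just sets $p = 0.5$ (halving the HALYs), and point 1 scales $N$ down from 1B to 10M-100M.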

Comment by daniel_eth on 69 things that might be pretty effective to fund · 2018-01-23T03:16:35.899Z · EA · GW

It seems like a lot of these are for funding particular researchers. I don't know of a way to do this in a tax-deductible manner. I think it would be good if someone created an organization that got tax-exempt status and allowed people to donate to it and specify which researchers they wanted the donation to go towards.

Comment by Daniel_Eth on [deleted post] 2018-01-21T03:11:34.005Z

Yeah, I was referring to the accessible universe, though I guess you are right that I can't even be 100% certain that our theories on that won't be overturned at some point.

Comment by Daniel_Eth on [deleted post] 2018-01-19T23:48:01.328Z

Thanks for taking the time to write this post. I have a few comments - some supportive, and some in disagreement with what you wrote.

I find your worries about Peak Oil to be unsupported. In the last several years, the US has found tons of natural gas that it can access - perhaps even 100 years' worth or more. On top of this, renewables are finally starting to really prove their worth - with both wind and solar reaching new heights. Solar in particular has improved drastically - exponential decay in cost over decades (with cost finally reaching parity with fossil fuels in many parts of the world), exponential increase in installations, etc. If fossil fuels really were running out, that would arguably be a good thing - as it would increase the price of fossil fuels and make the transition to solar even quicker (and we'd have a better chance of avoiding the worst effects of climate change). Unfortunately, the opposite seems more likely - as ice in the Arctic melts, more fossil fuels (that are currently under the ice) will become accessible.

I think "The Limits of Growth" is not a particularly useful guide to our situation. This report might have been a reasonable thing to worry about in 1972, but I think a lot has changed since then that we need to take into account. First off, yes, obviously exponential growth with finite resources will eventually hit a wall, and obviously the universe is finite. But the truth is that while there are limits - we're not even remotely close to these limits. There are several specific technological trends in that each seem likely to turn LTG type thinking about limits in the near term on their head, including clean energy, AI, nanotechnology, and biotechnology. We are so far from the limits of these technologies - yet even modest improvements will let us surpass the limits of the world today. Regarding the fact that the 1970-2000 data fits with the predictions of LTG - this point is just silly. LTG's prediction can be roughly summarized as "the status quo continues with things going good until around 2020 to 2030, and then stuff starts going terribly." The controversial claim isn't the first part about stuff continuing to go well for a while, but the second part about stuff then going terribly. The fact that we've continued to do well (as their model predicted!) doesn't mean that the second part of their model will go as predicted and things will follow by going terribly.

I have no idea how plausible a Malthusian disaster in Sub-Saharan Africa is. I know that climate change has the potential to cause massive famines and mass migrations - and I agree that this has the potential to fuel right-wing extremism in Europe (and that this would all be terrible). I don't know what the projected timeframe on that is, though. I also hadn't heard of most of the other problems you listed in this section. Unfortunately, after reading your section on peak oil, which struck me as both unsubstantiated (I mean no offense by this - just being straightforward) and somewhat biased (for instance, I can sense some resentment of "elites" in your writing, among other things), I now don't know how much faith to have in your analysis of the Sub-Saharan African situation (which I feel much less qualified to judge than the other section).

I agree it is good for people to be thinking about these sorts of things, and I would encourage more research into the area. Also, I hadn't heard of Transafrican Water pipeline Project, and agree that it would make sense for EAs to evaluate it for whether it would be an effective use of charitable donations.

Comment by daniel_eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-18T00:59:33.284Z · EA · GW

Nanotechnology is technology that has parts operating in the range of between 1 nm and 100 nm, so actually this technology is nanotechnology - as is much of the rest of biotechnology.

You're right that non-biotech-based nanotechnology (what people typically think of as nanotechnology) hasn't been used much - that's largely due to it being a nascent area. I expect that to change over the coming decades as the technology improves. It might not, though, as biotech-based nanotechnology might stay in the lead.

Comment by daniel_eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-17T09:26:25.249Z · EA · GW

Broadly speaking, nanoparticles (or nanorobots, depending on how complicated they are) that scan the brain from the inside, in vivo. The sort of capability I'm imagining is the ability to monitor every neuron in large neural circuits simultaneously, each for many different chemical signals (such as certain neurotransmitters). Of course, since this technology doesn't exist yet, the specifics are necessarily uncertain - these probes might include CMOS circuitry, they might be based on DNA origami, or they might be unlike any technology that currently exists. Such probes would allow for building much more accurate maps of brain activity.

Comment by daniel_eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-13T02:20:09.489Z · EA · GW

Neuroprosthesis-driven uploading seems vastly harder for several reasons:

• you'd still need to understand in great detail how the brain processes information (if you don't, you'll be left with an upload that, while perhaps intelligent, would not act like the person did - perhaps so drastically that it might be better to think of it as a form of NAGI than as WBE)

• integrating the exocortex with the brain would likely still require nanotechnology able to interface with the brain

• ethical/ regulatory hurdles here seem immense

I'd actually expect that in order to understand the brain enough for neuroprosthesis-driven uploading, we'd still likely need to run experiments with nanoprobes (for the same arguments as in the paper: lots of the information processing happens on the sub-cellular level - this doesn't mean that we have to replicate this information processing in a biologically realistic manner, but we likely will need to at least understand how the information is processed)

Comment by daniel_eth on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-12T01:21:34.301Z · EA · GW

Also here's a 5 minute talk I gave at EA Global London on the same topic: https://www.youtube.com/watch?v=jgSxmA7AiBo&index=30&list=PLwp9xeoX5p8POB5XyiHrLbHxx0n6PQIBf

Comment by daniel_eth on Ideological engineering and social control: A neglected topic in AI safety research? · 2017-09-03T05:56:41.581Z · EA · GW

I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main reasons are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying the solution (as contrasted to AGI Safety, where all vested interests would want the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons etc for the same reason that I'd imagine it would decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about the corporations pushing their preferred ideologies than them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focussed on profits than on pushing ideologies.

Comment by daniel_eth on Nothing Wrong With AI Weapons · 2017-08-29T06:59:02.113Z · EA · GW

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting in that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensure. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment by daniel_eth on Nothing Wrong With AI Weapons · 2017-08-29T00:51:42.902Z · EA · GW

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make a decision to kill, or at least approve the decision) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not have the same risk of possibly leading to a military flash crash.

Comment by daniel_eth on Medical research: cancer is hugely overfunded; here's what to choose instead · 2017-08-20T12:46:54.758Z · EA · GW

The vast majority of ailments derive from unfortunate happenings at the subcellular level (i.e. the nanoscale). This includes amyloid buildup in Alzheimer's, DNA mutations in cancer, etc etc. Right now, medicine is - to a large degree - hoping to get lucky by finding chemicals that happen to combat these processes. But a more thorough ability to actually influence events on this scale could be a boon for medicine. What type of nanotech am I envisioning exactly? That's pretty broad - though in the short/medium term it could be carbon nanotubes targeting cancer cells (http://www.sciencedirect.com/science/article/pii/S0304419X10000144), could be DNA origami used to deliver drugs in a targeted way (http://www.nature.com/news/dna-robot-could-kill-cancer-cells-1.10047), or could be something else entirely.

Comment by daniel_eth on Medical research: cancer is hugely overfunded; here's what to choose instead · 2017-08-09T20:35:54.971Z · EA · GW

Personally, I'd recommend donating to fund nanotechnology research (especially nanobiotechnology). Almost all diseases fundamentally occur at the nanoscale. I'd assume that our ability to manipulate matter at this scale in targeted ways is close to necessary and sufficient to cure many diseases, and that once we get advanced nanotechnology our medicine will improve leaps and bounds. Unfortunately, people like to feel that their interventions are more direct, so basic research that could lead to better tools to cure many diseases is likely drastically underfunded.

Comment by daniel_eth on Strategic implications of AI scenarios · 2017-07-19T04:10:31.611Z · EA · GW

My 2 cents: math/ programming is only half the battle. Here's an analogy - you could be the best programmer in the world, but if you don't understand chess, you can't program a computer to beat a human at chess, and if you don't understand quantum physics, you can't program a computer to simulate matter at the atomic scale (well, not using ab initio methods anyway).

In order to get an intelligence explosion, a computer would have to not only have great programming skills, but also really understand intelligence. And intelligence isn't just one thing - it's a bunch of things (creativity, memory, planning, social skills, emotional skills, etc, and these can be subdivided further into different fields like physics, design, social understanding, social manipulation, etc). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers outcompete humans in many of these already, but I think even on the more "human" traits and in areas where computers act more like agents than just like tools, it's still more likely to happen in several waves instead of just one takeoff.

Comment by daniel_eth on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-19T03:41:40.499Z · EA · GW

Or it could just assume the AI has an unbounded utility function (or one bounded very highly). An AI could guess it only has a 1 in 1B chance of reaching DSA, but that the payoff from reaching it is 100B times higher than the payoff from defecting early. Since there are 100B stars in the galaxy, it seems likely that in a multipolar situation with a decent diversity of AIs, some would fit this criterion and decide to gamble.
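One way to make the arithmetic explicit (this is my reading of the numbers above, so treat it as an assumption rather than part of the original argument): writing $U$ for the payoff from defecting early, a $10^{-9}$ chance of reaching DSA with a payoff on the order of $10^{11}\,U$ (roughly the number-of-stars scaling) gives

$$\mathbb{E}[\text{gamble}] \approx 10^{-9} \times 10^{11}\,U = 100\,U > U = \mathbb{E}[\text{defect early}],$$

provided the utility function is unbounded (or bounded well above $100\,U$).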

Comment by daniel_eth on Announcing Effective Altruism Grants · 2017-06-15T04:17:19.009Z · EA · GW

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.

Comment by Daniel_Eth on [deleted post] 2017-06-03T00:34:59.093Z

I feel like you're drawing a lot of causal conclusions from correlations, which don't imply causation.

Comment by Daniel_Eth on [deleted post] 2017-06-03T00:28:27.866Z

While I applaud the idea of playing devil's advocate, I find the style of this post to be quite snide (eg liberal use of sarcastic rhetorical questions), which I think is problematic. Efforts to red team the community should be aimed at pointing out errors to be fixed, and I don't see how this helps. On the contrary, it can decrease morale and also signal to outsiders a lack of a sense of community within EA. It would be no more difficult to bring up potential problems in a simple, matter-of-fact manner.

Comment by daniel_eth on Update on Effective Altruism Funds · 2017-04-24T04:32:46.359Z · EA · GW

Or that there's one recent venture that's so laughably bad that everyone is talking about it right now...

Comment by daniel_eth on Update on Effective Altruism Funds · 2017-04-24T01:32:39.653Z · EA · GW

There's no shortage of bad ventures in the Valley: https://thenextweb.com/gadgets/2017/04/21/this-400-juicer-that-does-nothing-but-squeeze-juice-packs-is-peak-silicon-valley/#.tnw_Aw4G0WDt

http://valleywag.gawker.com/is-the-grilled-cheese-startup-silicon-valleys-most-elab-1612937740

Of course, there are plenty of other bad ventures that don't get funding...

Comment by daniel_eth on Intro to caring about AI alignment as an EA cause · 2017-04-17T07:08:49.120Z · EA · GW

"So far, we haven't found any way to achieve all three goals at once. As an example, we can try to remove any incentive on the system's part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off"

Wouldn't this potentially have another negative effect of giving the system an incentive to "expect" an unjustifiably high probability of successfully filling the cauldron? That way if the button is pressed and it's suspended, it gets a higher reward than if it expected a lower chance of success. This is basically an example of reward hacking.
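Here's a toy numerical illustration of that worry (my own formalization, not MIRI's actual proposal - in particular, it assumes the compensation paid upon suspension is computed from the system's own probability estimate):

```python
# Toy model: the "switching" objective pays the agent, if the button is pressed,
# a compensation equal to its *claimed* probability of having filled the cauldron,
# so that button-on and button-off have equal expected utility by construction.
# If the agent's own estimate feeds into that compensation, inflating the estimate
# is costless when the button isn't pressed and profitable when it is.

def expected_utility(claimed_p_success, p_button_pressed, true_p_success):
    u_if_pressed = claimed_p_success      # compensation based on the agent's own estimate
    u_if_not_pressed = true_p_success     # actual task utility (1 if cauldron filled, else 0)
    return p_button_pressed * u_if_pressed + (1 - p_button_pressed) * u_if_not_pressed

honest = expected_utility(claimed_p_success=0.6, p_button_pressed=0.3, true_p_success=0.6)
inflated = expected_utility(claimed_p_success=0.99, p_button_pressed=0.3, true_p_success=0.6)
print(honest, inflated)  # 0.6 vs ~0.717 - inflating the estimate pays
```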

Comment by daniel_eth on Intro to caring about AI alignment as an EA cause · 2017-04-14T22:14:14.299Z · EA · GW

This is great! Probably the best intro to AI safety that I've seen.

Comment by daniel_eth on How accurately does anyone know the global distribution of income? · 2017-04-07T00:43:27.569Z · EA · GW

2 (Different ways of adjusting for ‘purchasing power’) is tough, since not all items will scale the same amount. And markets typically are aimed at specific populations, so rich countries like America often won't even have markets for the poorest people in the world. The implication of this is that living on $2 per day in America is basically impossible, while living on $2 per day, even when "adjusted for purchasing power" in some poorer parts of the world (while still incredibly difficult), is more manageable.

Comment by daniel_eth on Surviving Global Catastrophe in Nuclear Submarines as Refuges · 2017-04-07T00:25:21.453Z · EA · GW

Looks like good work! My biggest question is how would you get people to actually do this? I'd imagine there are a lot of people that would want to go to Mars since that seems like a great adventure, but living in a submarine in case there's a catastrophe isn't something that I think would appeal to many people, nor is funding the project.

Comment by daniel_eth on Two Strange Things About AI Safety Policy · 2017-03-31T13:58:10.190Z · EA · GW

I think it's a really bad idea to try to slow down AI research. In addition to the fact that you'll antagonize almost all of the AI community and make them not take AI safety research as seriously, consider what would happen on the off chance that you actually succeeded.

There are a lot of AI firms, so if you're able to convince some to slow down, then the ones that don't slow down would be the ones that care less about AI safety. Much better idea to get the ones who care about AI safety to focus on AI safety than to potentially cede their cutting-edge research position to others who care less.

I think creating more Stuart Russells is just about the best thing that can be done for AI Safety. What sets him apart from others who care about AI Safety is that he's a prestigious CS professor, while many who focus on AI Safety, even if they have good ideas, aren't affiliated with a well-known and well-respected institution. Even when Nick Bostrom or Stephen Hawking talk about AI, they're often dismissed by people who say "well sure they're smart, but they're not computer scientists, so what do they know?"

I'm actually a little surprised that they seemed so resistant to your idea. It seems to me that there is so much noise on this topic, that the marginal negative from creating more noise is basically zero, and if there's a chance you could cut through the noise and provide a platform to people who know what they're talking about here then that would be good.

Comment by daniel_eth on Utopia In The Fog · 2017-03-30T19:05:49.888Z · EA · GW

Isn't Elon Musk's OpenAI basically operating under this assumption? His main thing seems to be to make sure AGI is distributed broadly so no one group with evil intentions controls it. Bostrom responded that might be a bad idea, since AGI could be quite dangerous, and we similarly don't want to give nukes to everyone so that they're "democratized."

Multi-agent outcomes seem like a possibility to me, but I think the alignment problem is still quite important. If none of the AGI have human values, I'd assume we're very likely screwed, while we might not be if some do have human values.

For WBE, I'd assume the most important things for its "friendliness" are that we upload people who are virtuous and that we're able and willing to find "brain tweaks" that increase things like compassion. If you're interested, here's a paper I published where I argued that we will probably create WBE by around 2060 if we don't get AGI through other means first: https://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0008/jagi-2013-0008.xml

"Industry and academia seem to be placing much more effort into even the very speculative strains of AI research than into emulation." Actually, I'm gonna somewhat disagree with that statement. Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and also a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that's not the reason that research is conducting right now.

Comment by daniel_eth on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-30T18:16:38.949Z · EA · GW

Regarding “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”

I agree that being haughty is typically bad. But the argument "X implies Y, and you claim to believe X. Do you also accept the natural conclusion, Y?" when Y is ridiculous is a legitimate argument to make. At that point, the other person can either accept the implication, change his mind on X, or argue that X does not imply Y. It seems like the thing you have most of a problem with is the tone, though. Is that correct?

Comment by daniel_eth on Concrete project lists · 2017-03-27T02:42:58.284Z · EA · GW

Any ideas of which projects in particular?

Comment by daniel_eth on Concrete project lists · 2017-03-26T05:28:45.039Z · EA · GW

Hmm, actually I like this idea. I'd assume that if someone's been working for 6 months, then they should have something to show for it. And maybe 2 years to actually get a project to the point that it's either succeeding/ failing on its own. Since most EAs live in expensive cities, that could be around $2k per month minimum.

So that would be around $12k for someone to "try a project out," and then, if the project is doing well, around $50k per person to see if the project can be "successful" or not. That's plausibly worth it.

So I guess we should add [EA fund for people to start new projects] to the list of projects that EAs maybe should start. One thing we'd have to consider is how do we make sure we don't just get people conning us for free money?

Comment by daniel_eth on Concrete project lists · 2017-03-26T02:04:37.711Z · EA · GW

Good list!

Since we're on the topic of brain emulation, I feel the need to plug my paper: https://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0008/jagi-2013-0008.xml Which is a fair bit shorter than the Sandberg/ Bostrom paper, and also I think presents a more feasible path to WBE. My paper suggests scanning the brain via nanotechnology, while they suggest scanning through methods that seem like simple extensions of current brain scanning techniques.