Posts

How do you decide between upvoting and strong upvoting? 2019-08-25T18:19:11.107Z
Explaining the Open Philanthropy Project's $17.5m bet on Sherlock Biosciences’ Innovations in Viral Diagnostics 2019-06-11T17:23:37.349Z
The case for taking AI seriously as a threat to humanity 2018-12-23T01:00:08.314Z
Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) 2018-11-21T15:58:31.856Z

Comments

Comment by anonymous_ea on Is it still hard to get a job in EA? Insights from CEA’s recruitment data · 2022-07-20T23:02:37.177Z · EA · GW

Sorry, I'm not sure I understand what your point is. Are you saying that my point 1 is misleading because having any relevant experience at all can be a big boost to an applicant's chances of getting hired by CEA, and having any relevant experience isn't a high bar?

Comment by anonymous_ea on Is it still hard to get a job in EA? Insights from CEA’s recruitment data · 2022-07-19T20:43:21.446Z · EA · GW

It sounds like there are two separate things going on:

  1. Jobs at CEA are very hard to get, even for candidates with impressive resumes overall.
  2. CEA finds it hard to get applicants who have particular desirable qualities, like previous experience in the same role.

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-08T22:47:49.254Z · EA · GW

(Of course, this is bound to be a judgment call; e.g. Eliezer didn’t state how many 9’s of confidence he has. It’s not like there’s a universal convention for how many 9’s are enough 9’s to state something as a fact without hedging, or how many 9’s are enough 9’s to mock the people who disagree with you.)

Yes, agreed. 

Let me lay out my thinking in more detail. I mean this as an explanation of my views, not as an attempt to persuade.

Paul's account of Aaronson's view says that Eliezer shouldn't be as confident in MWI as he is, which in words sounds exactly like my point, and similar to Aaronson's Stack Exchange answer. But it still leaves open the question of how overconfident he was, and what, if anything, should be taken away from this. It's possible that there's a version of my point which is true but is also uninteresting or trivial (who cares if Yudkowsky was 10% too confident about MWI 15 years ago?).

And it's worth reiterating that a lot of people give Eliezer credit for his writing on QM, including for being forceful in his views. I have no desire to argue against this. I had hoped to sidestep discussing this entirely since I consider it to be a separate point, but perhaps this was unfair and led to miscommunication. If someone wants to write a detailed comment/post explaining why Yudkowsky deserves a lot of credit for his QM writing, including credit for how forceful he was at times, I would be happy to read it and would likely upvote/strong upvote it depending on quality. 

However, here my intention was to focus on the overconfidence aspect. 

I'll explain what I see as the epistemic mistakes Eliezer likely made to end up in an overconfident state. Why do I think Eliezer was overconfident on MWI? 

(Some of the following may be wrong.)  

  • He didn't understand non-MWI-extremist views, which should have rationally limited his confidence
    • I don't have sources for this, but I think something like this is true.
    • This was an avoidable mistake
    • Worth noting that Eliezer has updated towards the competence of elites in science since some of his early writing, according to Rob's comment elsewhere in this thread
  • It's possible that his technical understanding was uneven. This should also have limited his confidence.
    • Aaronson praised him for "actually get[ting] most of the technical stuff right", which of course implies that not everything technical was correct.
    • He also suggested a specific, technical flaw in Yudkowsky's understanding.
    • One big problem with having extreme conclusions based on uneven technical understanding is that you don't know what you don't know. And in fact Aaronson suggests a mistake Yudkowsky seems unaware of as a reason why Yudkowsky's central argument is overstated/why Yudkowsky is overconfident about MWI.
    • However, it's unclear how true/important a point this really is
  • At least 4 points limit confidence in P(MWI) to some degree:
    • Lack of experimental evidence
    • The possibility of QM getting overturned
    • The possibility of a new and better interpretation in the future
    • Unknown unknowns
    • I believe most or all of these are valid, commonly brought up points that together limit how confident anyone can be in P(MWI). Reasonable people may disagree with their weighting of course.
    • I am skeptical that Eliezer correctly accounted for these factors

Note that these are all points about the epistemic position Eliezer was in, not about the correctness of MWI. The first two are particular to him, and the last one applies to everyone. 

Now, Rob points out that maybe the heliocentrism example is lacking context in some way (I find it a very compelling example of a super overconfident mistake if it's not). Personally I think there are at least a couple[1] [2] of places in the sequences where Yudkowsky clearly says something that I think indicates ridiculous overconfidence tied to epistemic mistakes, but to be honest I'm not excited to argue about whether some of his language 15 years ago was or wasn't overzealous. 

The reason I brought this up despite it being a pretty minor point is because I think it's part of a general pattern of Eliezer being overconfident in his views and overstating them. I am curious how much people actually disagree with this. 

Of course, whether Eliezer has a tendency to be overconfident and overstate his views is only one small data point among very many others in evaluating p(doom), the value of listening to Eliezer's views, etc. 

  1. ^

    "Many-worlds is an obvious fact, if you have all your marbles lined up correctly (understand very basic quantum physics, know the formal probability theory of Occam’s Razor, understand Special Relativity, etc.)"

  2. ^

    "The only question now is how long it will take for the people of this world to update." Both quotes from https://www.lesswrong.com/s/Kqs6GR7F5xziuSyGZ/p/S8ysHqeRGuySPttrS

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-06T20:44:04.948Z · EA · GW

I'm trying to make sense of why you're bringing up "overconfidence" here. The only thing I can think of is that you think that maybe there is simply not enough information to figure out whether MWI is right or wrong (not even for an ideal reasoner with a brain the size of Jupiter and a billion years to ponder the topic), and therefore saying "MWI is unambiguously correct" is "overconfident"?

Here's my point: There is a rational limit to the amount of confidence one can have in MWI (or any belief). I don't know where exactly this limit is for MWI-extremism, but Yudkowsky clearly exceeded it sometimes. To use made-up numbers, suppose:

  • MWI is objectively correct
  • Eliezer says P(MWI is correct) = 0.9999999
  • But rationally one can only reach P(MWI) = 0.999
    • Because there are remaining uncertainties that cannot be eliminated through superior thinking and careful consideration, such as lack of experimental evidence, the possibility of QM getting overturned, the possibility of a new and better interpretation in the future, and unknown unknowns.
    • These factors add up to at least P(Not MWI) = 0.001.

Then even though Eliezer is correct about MWI being correct, he is still significantly overconfident in his belief about it. 
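
To put these made-up numbers in perspective, here is a minimal sketch (in Python, using only the hypothetical figures above, not anything Eliezer actually stated) of how large the gap is once the probabilities are converted into odds:

    # Illustrative only: the made-up numbers from the example above, expressed as odds.
    stated_p = 0.9999999    # hypothetical stated P(MWI)
    justified_p = 0.999     # hypothetical rational ceiling on P(MWI)

    def odds(p):
        # Convert a probability into odds in favor, p : (1 - p).
        return p / (1 - p)

    print(odds(stated_p))                      # ~1e7 : 1
    print(odds(justified_p))                   # ~1e3 : 1
    print(odds(stated_p) / odds(justified_p))  # ~1e4

On these numbers the stated confidence is roughly 10,000 times more extreme, in odds terms, than the assumed rational ceiling, which is why "correct but overconfident" can still be a substantial epistemic error.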

Consider Paul's example of Eliezer saying MWI is comparable to heliocentrism:

If we are deeply wrong about physics, then I [Paul Christiano] think this could go either way. And it still seems quite plausible that we are deeply wrong about physics in one way or another (even if not in any particular way). So I think it's wrong to compare many-worlds to heliocentrism (as Eliezer has done). Heliocentrism is extraordinarily likely even if we are completely wrong about physics---direct observation of the solar system really is a much stronger form of evidence than a priori reasoning about the existence of other worlds. 

I agree with Paul here. Heliocentrism is vastly more likely than any particular interpretation of quantum mechanics, and Eliezer was wrong to have made this comparison. 

This may sound like I'm nitpicking, but I think it fits into a pattern of Eliezer making dramatic and overconfident pronouncements, and it's relevant information for people to consider e.g. when evaluating Eliezer's belief that p(doom) = ~1 and the AI safety situation is so hopeless that the only thing left is to die with slightly more dignity. 

Of course, it's far from the only relevant data point. 

Regarding (2), I think we're on the same page haha. 

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-05T22:38:12.706Z · EA · GW

When I said it was relevant to his track record as a public intellectual, I was referring to his tendency to make dramatic and overconfident pronouncements (which Ben mentioned in the parent comment). I wasn't intending to imply that the debate around QM had been settled or that new information had come out. I do think that even at the time Eliezer's positions on both MWI and why people disagreed with him on it were overconfident though. 

I think you're right that my comment gave too little credit to Eliezer, and possibly misleadingly implied that Eliezer is the only one who holds some kind of extreme MWI or anti-collapse view, or that such views are not or cannot be reasonable (especially anti-collapse). I said that MWI is a leading candidate, but that's still probably underselling how common strongly pro-MWI positions are. I expanded on this in another comment.

Your story of Eliezer comparing MWI to heliocentrism is a central example of what I'm talking about. It is not that his underlying position is wrong or even unlikely, but that he is significantly overconfident. 

I think this is relevant information for people trying to understand Eliezer's recent writings. 

To be clear, I don't think it's a particularly important example, and there is a lot of other more important information than whether Eliezer overestimated the case for MWI to some degree while also displaying impressive understanding of physics and possibly/probably being right about MWI. 

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-05T21:19:10.945Z · EA · GW

I agree that: Yudkowsky has an impressive understanding of physics for a layman, in some situations his understanding is on par with or exceeds that of some experts, and he has written explanations of technical topics that even some experts like and find impressive. This includes not just you, but also e.g. Scott Aaronson, who praised his series on QM in the same answer I excerpted above, calling it entertaining and enjoyable and saying it gets the technical stuff mostly right. He also praised it for its conceptual goals. I don't believe this is faint praise, especially given stereotypes of amateurs writing about physics. This is a positive part of Yudkowsky's track record. I think my comment sounds more negative about Yudkowsky's QM sequence than it deserves, so thanks for pushing back on that.

I'm not sure what you mean when you call yourself a pro-MWI extremist, but in any case AFAIK there are physicists, including one or more prominent ones, who think MWI is really the only explanation that makes sense, although there are obviously degrees in how fervently one can hold this position, and Yudkowsky seems to be at the extreme end of the scale in some of his writings. And he is far from the only one who thinks Copenhagen is ridiculous. These two parts of Yudkowsky's position on MWI are not without parallel among professional physicists, and the point about Copenhagen being ridiculous is probably a point in his favor from most views (e.g. Nobel laureate Murray Gell-Mann said that Niels Bohr brainwashed people into Copenhagen), let alone in this community. Perhaps I should have clarified this in my comment, although I did say that MWI is a leading interpretation and may well be correct.

The negative aspects I said in my comment were:

  1. Yudkowsky's confidence in MWI is disproportionate
  2. Yudkowsky's conviction that people who disagree with him are making elementary mistakes is disproportionate
  3. These may come partly from a lack of knowledge or expertise

Maybe (3) is a little unfair, or sounds harsher than I meant it. It's a bit unclear to me how seriously to take Aaronson's quote. It seems like plenty of physicists have looked through the sequences to find glaring flaws, and basically found none (physics stackexchange). This is a nontrivial achievement in context. At the same time I expect most of the scrutiny has been to a relatively shallow level, partly because Yudkowsky is a polarizing writer. Aaronson is probably one of fairly few people who have deep technical expertise and have read the sequences with both enjoyment and a critical eye. Aaronson suggested a specific, technical flaw that may be partly responsible for Yudkowsky holding an extreme position with overconfidence and misunderstanding what people who disagree with him think. Probably this is a flaw Yudkowsky would not have made if he had worked with a professional physicist or something. But maybe Aaronson was just casually speculating and maybe this doesn't matter too much. I don't know. Possibly you are right to push back on the mixed states explanation. 

I think (1) and (2) are well worth considering though. The argument here is not that his position is necessarily wrong or impossible, but that it is overconfident. I am not courageous enough to argue for this position to a physicist who holds some kind of extreme pro-MWI view, but I think this is a reasonable view and there's a good chance (1) and (2) are correct. It also fits in Ben's point 4 in the comment above: "Yudkowsky’s track record suggests a substantial bias toward dramatic and overconfident predictions." 

Comment by anonymous_ea on The Future Might Not Be So Great · 2022-07-05T16:48:09.065Z · EA · GW

For convenience, this is CEA's statement from three years ago:

We approached Jacy about our concerns about his behavior after receiving reports from several parties about concerns over several time periods, and we discussed this public statement with him. We have not been able to discuss details of most of these concerns in order to protect the confidentiality of the people who raised them, but we find the reports credible and concerning. It’s very important to CEA that EA be a community where people are treated with fairness and respect. If you’ve experienced problems in the EA community, we want to help. Julia Wise serves as a contact person for the community, and you can always bring concerns to her confidentially.

By my reading, the information about the reports contained in this is:

  • CEA received reports from several parties about concerns over Jacy's behavior over several time periods
  • CEA found the reports 'credible and concerning'
  • CEA cannot discuss details of most of these concerns because the people who raised them want to protect their confidentiality
  • It also implies that Jacy did not treat people with fairness and respect in the reported incidents
    • 'It’s very important to CEA that EA be a community where people are treated with fairness and respect' - why say this unless it's applicable to this case?

Julia also said in a comment at the time that the reports were from members of the animal advocacy and EA communities, and CEA decided to approach Jacy primarily because of these rather than the Brown case:

The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities. 

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-04T01:13:02.134Z · EA · GW

Edit: I think this came off more negatively than I intended it to, particularly about Yudkowsky's understanding of physics. The main point I was trying to make is that Yudkowsky was overconfident, not that his underlying position was wrong. See the replies for more clarification. 

I think there's another relevant (and negative) data point when discussing Yudkowsky's track record: his argument and belief that the Many-Worlds Interpretation is the only viable interpretation of quantum mechanics, and that anyone who doesn't agree is essentially a moron. Here's one 2008 link from the Sequences where he expresses this position[1]; there are probably many other places where he's said similar things. (To be clear, I don't know if he still holds this belief, and if he doesn't anymore, when and why he updated away from it.)

Many Worlds is definitely a viable and even leading interpretation, and may well be correct. But Yudkowsky's confidence in Many Worlds, as well as his conviction that people who disagree with him are making elementary mistakes, is more than a little disproportionate, and may come partly from a lack of knowledge and expertise. 

The above is a paraphrase of Scott Aaronson, a credible authority on quantum mechanics who is sympathetic to both Yudkowsky and Many Worlds (bold added): 

I think Yudkowsky's central argument---basically, that anyone who rejects [Many Worlds] needs to have their head examined---is, to put it mildly, a bit overstated. :) I'll resist the temptation to elaborate, since this is really a discussion for another thread.

In several posts, Yudkowsky gives indications that he doesn't really understand the concept of mixed states. (For example, he writes about the No-Communication Theorem as something complicated and mysterious, which it's not from a density-matrix perspective.) As I see it, this might be part of the reason why Yudkowsky sees anything besides Many-Worlds as insanity, and can't understand what (besides sheep-like conformity) would drive any knowledgeable physicist to any other point of view. If I didn't know that in real life, people pretty much never encounter pure states, but only more general objects that (to paraphrase Jaynes) scramble together "subjective" probabilities and "objective" amplitudes into a single omelette, the view that quantum states are "states of knowledge" that "live in the mind, not in the world" would probably also strike me as meaningless nonsense.

While this isn't directly related to AI risk, I think it's relevant to Yudkowsky's track record as a public intellectual. 

  1. ^

    He expresses this in the last six paragraphs of the post. I'm excerpting some of it (bold added, italics were present in the original): 

     

    Many-worlds is an obvious fact, if you have all your marbles lined up correctly (understand very basic quantum physics, know the formal probability theory of Occam’s Razor, understand Special Relativity, etc.) It is in fact considerably more obvious to me than the proposition that spinning black holes should obey conservation of angular momentum.

    ...

    So let me state then, very clearly, on behalf of any and all physicists out there who dare not say it themselves: Many-worlds wins outright given our current state of evidence. There is no more reason to postulate a single Earth, than there is to postulate that two colliding top quarks would decay in a way that violates Conservation of Energy. It takes more than an unknown fundamental law; it takes magic.

    The debate should already be over. It should have been over fifty years ago. The state of evidence is too lopsided to justify further argument. There is no balance in this issue. There is no rational controversy to teach. The laws of probability theory are laws, not suggestions; there is no flexibility in the best guess given this evidence. Our children will look back at the fact that we were still arguing about this in the early twenty-first century, and correctly deduce that we were nuts.

    We have embarrassed our Earth long enough by failing to see the obvious. So for the honor of my Earth, I write as if the existence of many-worlds were an established fact, because it is. The only question now is how long it will take for the people of this world to update.

Comment by anonymous_ea on The Future Might Not Be So Great · 2022-07-02T21:12:49.203Z · EA · GW

From Jacy: 

this was only on my website for a few weeks at most... I believe I also casually used the term elsewhere, and it was sometimes used by people in my bio description when introducing me as a speaker.

Comment by anonymous_ea on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-23T01:43:04.847Z · EA · GW

I don't necessarily disagree with the assessment of a temporary ban for "unnecessary rudeness or offensiveness", or "other behaviour that interferes with good discourse", but I disagree that Charles' comment quality is "uniformly" low or that a ban might be merited primarily because of high comment volume and too-low quality. There are some real insights and contributions sprinkled in, in my opinion.

For me, the unnecessary rudeness or offensiveness and other behavior interfering with discourse comes from things like comments that are technically replies to a particular person but seem mostly intended to win the argument in front of unknown readers, and that contain rudeness, paranoia, and condescension towards the person they're replying to. I think the doxing accusation, which if I remember correctly actually doxed the victim much more than Gwern's comment did, is part of a similar pattern of engaging poorly with a particular person, partly through an incorrect assessment that the benefits to bystanders will outweigh the costs. I think this sort of behavior stifles conversation and good will.

I'm not sure a ban is a great solution though. There might be other, less blunt ways of tackling this situation. 

What I would really like to see is a (much) higher lower limit on comment quality from Charles, i.e. moving the bar for tolerating rudeness and bad behavior in a comment much higher, even when it could potentially be justified in terms of benefits to bystanders or readers.

Comment by anonymous_ea on Some potential lessons from Carrick’s Congressional bid · 2022-05-19T03:29:31.952Z · EA · GW

The Reddit comments I've seen have been largely the same as well: people being tired of apparently incessant campaign ads along with some suspicion of large out of state money and crypto.

A couple of random threads: 

https://www.reddit.com/r/Portland/comments/u2c69t/carrick_flynn_cryptobacked_candidate_in_new/

https://www.reddit.com/r/SALEM/comments/u9lmir/carrick_flynns_bizarre_response_to_where_he_voted/

Comment by anonymous_ea on EA and the current funding situation · 2022-05-17T15:24:41.659Z · EA · GW

Glad to have been helpful :)

Comment by anonymous_ea on EA and the current funding situation · 2022-05-14T14:31:55.661Z · EA · GW

My read on your comment is that you misread Anthony's allusion to $1b as about potentially spending $1b at some stage (whether right now or later), rather than about the expected impact of his idea. I could be wrong, but that's the only way your comment makes sense to me ("if you spent $1b of EA money" - what could this refer to besides spending $1b of money?). 

Anthony is asking for a connection to someone who is skilled at running a particular kind of simulation, to see if his idea has potential. He believes that the value of checking his idea might be $1b, because of potentially trillions of dollars' worth of gains. Crucially, it would not take $1b to check his idea - that figure is an estimate of the potential value of checking the idea, not of the cost of checking it. The cost of checking is probably something like the social capital to connect him with a relevant person and the costs involved in running the simulation (if it progresses to that stage).

I don't think this was a bad mistake on your end, just a quick, incorrect assumption that you made while trying to help someone. It only led to a fractious response because so many other EAs have also misread and misunderstood Anthony, and he is naturally tired and upset by this. In my opinion, the fault here lies mostly with social dynamics rather than any one person acting particularly badly. 

I appreciate your attempts to engage productively (including deciding not to engage if that seems better to you), take responsibility for any mistakes you may have made, and without assigning blame to other parties. That is a clear positive to me. 

Hope you have a good day as well :)

Comment by anonymous_ea on EA and the current funding situation · 2022-05-14T04:43:53.556Z · EA · GW

Upvoted for the last three sentences, but I believe your first sentence is incorrect. The second paragraph of your initial comment does not make sense to me in the absence of you believing that Anthony was looking for funding. 

Comment by anonymous_ea on EA and the current funding situation · 2022-05-14T01:09:01.608Z · EA · GW

Thanks, this is a good followup. I'm glad my comment contained useful feedback for you. 

I think your attempt to help Anthony went awry when he asked you why his tone was a bigger issue than whether he had been misrepresented, and in your reply you did not even seem to consider that he could be right. Perhaps he is right? Perhaps not? But it's important to at least genuinely consider that he could be.

Comment by anonymous_ea on EA and the current funding situation · 2022-05-14T00:21:50.675Z · EA · GW

Strong downvote for extreme and inappropriate condescension in the guise of helping someone. There is no adequate reason for you to assume that Anthony is living in a world where everyone is intrinsically against him, and that he cannot even imagine living in a different world. This is an extremely strong statement to make about someone you know only through a few online comments. Why do you think you're right?

Even if you were right, helping him would not take the form of trying to point this out publicly in such a tactless way. 

Comment by anonymous_ea on An update in favor of trying to make tens of billions of dollars · 2022-05-11T12:38:42.216Z · EA · GW

Sorry, I don't have the capacity to engage further here. 

Comment by anonymous_ea on An update in favor of trying to make tens of billions of dollars · 2022-05-10T02:38:51.209Z · EA · GW

I strongly disagree that the situational opportunity is anywhere near as broad as "mostly being an American alive in the 21st century". I'm not sure what you have in mind regarding "the type of person who is capable of starting the next FTX", but I think that is a fairly narrow class, not a very wide one. 

Comment by anonymous_ea on An update in favor of trying to make tens of billions of dollars · 2022-05-10T01:54:52.176Z · EA · GW

Charles is not saying that having an elite background is the only thing that matters. He is saying that high success involves both high capability and high situational opportunity.

Comment by anonymous_ea on Effective altruism’s odd attitude to mental health · 2022-04-30T00:35:33.110Z · EA · GW

Like IanDavidMoss says, I think the more interesting phenomenon you mention is the sudden and unnoticed switch to skeptical mode:

I then told them about my work at the Happier Lives Institute comparing the cost-effectiveness of cash transfers to treating depression and how we'd found the latter was about 10x better (original analysis, updated analysis). They suddenly switched to sceptical mode, saying they didn't believe you could really measure people's mental health or feelings and that, even if you could, targeting poverty must still be better. 

After a couple of minutes of this, I suddenly clocked how weirdly disconnected the first and second parts of the conversation were. I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs. They looked puzzled - the tension seemed never to have occurred to them - and, to their credit, they replied "oh, yeah, hmm, that is weird".

Comment by anonymous_ea on What is a neutral life like? · 2022-04-17T01:55:35.390Z · EA · GW

I don't buy that what a neutral life is like is an important question. 

I listened to a few minutes of the timestamp you linked, but unless I missed something, Will is talking about his interest in finding out what proportion of people have lives above and below zero, not about what a neutral life is like.

Consider a life which, on bringing it into existence, neither increases nor decreases social welfare because it is perfectly neutral in quality. What is such a life like?

This is a hugely important question. For Effective Altruists, the answer has implications for the value of the far future and the relative importance of saving lives versus reducing suffering. 

I don't see any tight connection between learning more about what a neutral life is like and implications for existential risk reduction or other longtermist efforts. It's more closely related to the important question of saving lives vs reducing suffering, but I don't see any clear implications here either. If you spell out what connections you see, I might be more convinced.

Even those not looking to improve the world should be interested - the answer also has implications for the ethics of having children and even whether or not you were wronged by being brought into existence. Despite this, the question has been given surprisingly little attention in both EA and non-EA circles.

It seems to me that the ethics of having children and the question of antinatalism are swamped by many considerations besides what a neutral life is like. Again, if you spell out the connections you see here I might be more interested. 

I hope this is useful feedback!

Comment by anonymous_ea on Movement Collapse Scenarios · 2022-01-11T00:35:38.563Z · EA · GW

This is a great post in both content and writing quality. I'm a little sad that despite winning a forum prize, there was relatively little followup. 

Comment by anonymous_ea on [Linkpost] Eric Schwitzgebel: Against Longtermism · 2022-01-07T00:11:41.289Z · EA · GW

Thanks for sharing this!

Quoting from the article (underline added):

First, it's unlikely that we live in a uniquely dangerous time for humanity, from a longterm perspective. Ord and other longtermists suggest, as I mentioned, that if we can survive the next few centuries, we will enter a permanently "secure" period in which we no longer face serious existential threats. Ord's thought appears to be that our wisdom will catch up with our power; we will be able to foresee and wisely avoid even tiny existential risks, in perpetuity or at least for millions of years. But why should we expect so much existential risk avoidance from our descendants? Ord and others offer little by way of argument.

[...]

You might suppose that, as resources improve, people will grow more cooperative and more inclined toward longterm thinking. Maybe. But even if so, cooperation carries risks. For example, if we become cooperative enough, everyone's existence and/or reproduction might come to depend on the survival of the society as a whole. The benefits of cooperation, specialization, and codependency might be substantial enough that more independent-minded survivalists are outcompeted. If genetic manipulation is seen as dangerous, decisions about reproduction might be centralized. We might become efficient, "superior" organisms that reproduce by a complex process different from traditional pregnancy, requiring a stable web of technological resources. We might even merge into a single planet-sized superorganism, gaining huge benefits and efficiencies from doing so. However, once a species becomes a single organism the same size as its environment, a single death becomes the extinction of the species. Whether we become a supercooperative superorganism or a host of cooperative but technologically dependent individual organisms, one terrible miscalculation or one highly unlikely event could potentially bring down the whole structure, ending us all.

A more mundane concern is this: Cooperative entities can be taken advantage of. As long as people have differential degrees of reproductive success, there will be evolutionary pressure for cheaters to free-ride on others' cooperativeness at the expense of the whole. There will always be benefits for individuals or groups who let others be the ones who think longterm, making the sacrifices necessary to reduce existential risks. If the selfish groups are permitted to thrive, they could employ for their benefit technology with, say, a 1/1000 or 1/1000000 annual risk of destroying humanity, flourishing for a long time until the odds finally catch up. If, instead, such groups are aggressively quashed, that might require warlike force, with the risks that war entails, or it might involve complex webs of deception and counterdeception in which the longtermists might not always come out on top.

The point about cooperation carrying risks is interesting and not something I've seen elsewhere. 

Comment by anonymous_ea on A huge opportunity for impact: movement building at top universities · 2021-12-26T15:56:34.438Z · EA · GW

Our Community Building Grants were a step up from an all-volunteer force of organizers, but we think they had some problems:

  • We also didn’t spend enough time listening to and learning from organizers.

I would be really curious to hear more about this. For example:

  • What factors led to this happening?
  • How did you realize you were making this mistake?
  • What were the consequences of this mistake?
  • How do you plan to rectify this in the future?

Comment by anonymous_ea on Aiming for the minimum of self-care is dangerous · 2021-12-11T20:06:27.319Z · EA · GW

On a similar note, I actually parsed the title as the opposite of the intended meaning. That is, I thought the article was going to say that aiming for the minimum [amount of impact, or something else related like career capital] is dangerous, rather than that aiming for the minimum amount of self-care is dangerous.

Comment by anonymous_ea on A Primer on the Symmetry Theory of Valence · 2021-09-10T02:15:42.079Z · EA · GW

Greg, I want to bring two comments that have been posted since your comment above to your attention:

  1. Abby said the following to Mike:

Your responses here are much more satisfying and comprehensible than your previous statements, it's a bit of a shame we can't reset the conversation.

  2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby's line of questioning regarding EEGs ultimately resulted in a response that she found satisfactory and didn't have the expertise to evaluate further:

if they had given the response that they gave in one of the final comments in the discussion, right at the beginning (assuming Abby would have responded similarly) the response to their exchange might have been very different i.e. I think people would have concluded that they gave a sensible response and were talking about things that Abby didn't have expertise to comment on:

_______


Abby Hoskin: If your answer relies on something about how modularism/functionalism is bad: why is source localization critical for your main neuroimaging analysis of interest? If source localization is not necessary: why can't you use EEG to measure synchrony of neural oscillations?

Mike Johnson: The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be there are alternative ways to test STV without modeling brain eigenmodes, and that EEG could give us. I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.

Abby Hoskin: Ok, I appreciate this concrete response. I don't know enough about calculating eigenmodes with EEG data to predict how tractable it is.

Comment by anonymous_ea on AI Timelines: Where the Arguments, and the "Experts," Stand · 2021-09-08T13:26:34.579Z · EA · GW

I appreciate you posting this picture, which I had not seen before. I just want to add that this was compiled in 2014, and some of the people in the picture have likely shifted in their views since then. 

Comment by anonymous_ea on Towards a Weaker Longtermism · 2021-08-09T19:46:54.828Z · EA · GW

Phil Trammell's point in Which World Gets Saved is also relevant:

It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.

...

Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far "upstream", e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?

Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, "The utilitarian imperative 'Maximize expected aggregate utility!'" might not really, as Bostrom (2002) puts it, "be simplified to the maxim 'Minimize existential risk'".

Comment by anonymous_ea on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T20:59:04.492Z · EA · GW

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up with $55m over 5 years ($11m/year).

Comment by anonymous_ea on Linch's Shortform · 2021-07-15T23:36:41.580Z · EA · GW

Thanks. Going back to your original impact estimate, I think the bigger difficulty I have in swallowing it and the claims related to it (e.g. "the ultimate weight of small decisions you make is measured not in dollars or relative status, but in stars") is not the probabilities of AI or space expansion, but what seems to me to be a pretty big jump from the potential stakes of a cause area, or the value possible in a future without any existential catastrophes, to the impact that researchers working on that cause area might have.

Comment by anonymous_ea on Linch's Shortform · 2021-07-05T21:09:47.563Z · EA · GW

So is the basic idea that transformative AI not ending in an existential catastrophe is the major bottleneck on a vastly positive future for humanity? 

Comment by anonymous_ea on Linch's Shortform · 2021-07-05T16:50:03.495Z · EA · GW

Conditioning upon us buying the importance of work at MIRI (and if you don't buy it, replace what I said with CEA or Open Phil or CHAI or FHI or your favorite organization of choice), I think the work of someone sweeping the floors of MIRI is just phenomenally, astronomically important, in ways that is hard to comprehend intuitively. 

(Some point estimates with made-up numbers: Suppose EA work in the next few decades can reduce existential risk from AI by 1%.  Assume that MIRI is 1% of the solution, and that there are less than 100 employees of MIRI. Suppose variance in how good a job someone can do in cleanliness of MIRI affects research output by 10^-4 as much as an average researcher.* Then we're already at 10^-2 x 10^ -2 x 10^-2 x 10^-4 = 10^-10 the impact of the far future. Meanwhile there are 5 x 10^22 stars in the visible universe)

Can you spell out the impact estimation you are doing in more detail? It seems to me that you first estimate how much a janitor at an org might impact the research productivity of that org, and then there's some multiplication related to the (entire?) value of the far future. Are you assuming that AI will essentially solve all issues and lead to positive space colonization, or something along those lines? 

Comment by anonymous_ea on Looking for more 'PlayPumps' like examples · 2021-05-28T16:57:08.808Z · EA · GW

I'm not sure Make a Wish is a good example given the existence of this study. Quoting Dylan Matthews from Future Perfect on it (emphasis added):

The average wish costs $10,130 to fulfill. Given that Malaria Consortium can save the life of a child under 5 for roughly $2,000 (getting a precise figure is, of course, tough, but it’s around that), you could probably save four or five children’s lives in sub-Saharan Africa for the cost of providing a nice experience for a single child in the US. For the cost of the heartwarming Batkid stunt — $105,000 — you could save the lives of some 50-odd kids.

So that’s why I’ve been hard on Make-A-Wish in the past, and why effective altruists like Peter Singer have criticized the group as well.

But now I’m reconsidering. A new study in the journal Pediatric Research, comparing 496 patients at the Nationwide Children’s Hospital in Columbus, Ohio, who got their wishes granted to 496 “control” patients with similar ages, gender, and diseases, found that the patients who got their wishes granted went to the emergency room less, and were less likely to be readmitted to the hospital (outside of planned readmissions).

In a number of cases, this reduction in hospital admissions and emergency room visits resulted in a cost savings in excess of $10,130, the cost of the average wish. In other words, Make-A-Wish helped, and helped in a cost-effective way.

Comment by anonymous_ea on Draft report on existential risk from power-seeking AI · 2021-05-08T21:02:21.863Z · EA · GW

your other comment

This links to A Sketch of Good Communication, not whichever comment you were intending to link :)

Comment by anonymous_ea on Concerns with ACE's Recent Behavior · 2021-04-18T17:59:00.991Z · EA · GW

.

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-23T01:02:16.341Z · EA · GW

.

Comment by anonymous_ea on Open and Welcome Thread: March 2021 · 2021-03-22T16:50:54.335Z · EA · GW

Welcome Sive!

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-22T13:55:01.636Z · EA · GW

Thanks for explaining. I don't wish to engage further here [feel free to reply though of course], but FWIW I don't agree that there are any reasoning errors in Jacob's post or any anomalies to explain. I think you are strongly focused on a part of the conversation that is of particular importance to you (something along the lines of whether people who are not motivated or skilled at expressing sympathy will be welcome here), while Jacob is mostly focused on other aspects. 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T22:33:12.693Z · EA · GW

what appears to me to be a series of anomalies that is otherwise hard to explain

What do you believe needs explaining? 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T17:15:18.516Z · EA · GW

This might be a minor point, but personally I think it's better to avoid making generalizations of how an entire community must be feeling. Some members of the Asian community are unaware of recent events, while others may not be particularly affected by them. Perhaps something more along the lines of "I understand many people in the Asian community are feeling hurt right now" would be generally better. 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T03:14:39.570Z · EA · GW

I'm curious how xccf's comment elsewhere on this thread fits in with your position as expressed here. 

Comment by anonymous_ea on [deleted post] 2021-03-20T15:44:10.817Z

Ben Hoffman's GiveWell and the problem of Partial Funding was also posted here on the forum, with replies from OpenPhil and GiveWell staff. 

Comment by anonymous_ea on [deleted post] 2021-03-19T19:09:45.330Z

I don't have any advice to offer, but as a datapoint for you: I applaud your goal and am even sympathetic to many of your points, but even I found this post actively annoying (unlike your previous ones in this series). It feels like you're writing a series of posts for your own benefit without actually engaging with your audience or interlocutors. I think this is fine for a personal blog, but does not fit on this forum.

Comment by anonymous_ea on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-30T16:08:26.958Z · EA · GW

There's a Buddhists in Effective Altruism group as well. 

Comment by anonymous_ea on CEA update: Q4 2020 · 2021-01-15T22:54:59.758Z · EA · GW

Thanks for writing this!

Comment by anonymous_ea on Strong Longtermism, Irrefutability, and Moral Progress · 2021-01-08T21:43:19.459Z · EA · GW

.

Comment by anonymous_ea on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T02:53:51.389Z · EA · GW

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to scihub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things, but haven't even been remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die having at least a few millions in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

 

I'm really surprised by this; I think things like the Future of Life Award are good, but if I got $1B I would definitely not think about spending potentially $100m on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?

Comment by anonymous_ea on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T23:29:57.533Z · EA · GW

Regardless of whatever happens, I've benefited greatly from all the effort you've put into your public writing on the fund, Oliver.

Comment by anonymous_ea on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-05T23:58:39.024Z · EA · GW

.

Comment by anonymous_ea on KevinO's Shortform · 2019-12-06T19:09:58.021Z · EA · GW

I just voted for the GFI, AMF, and GD videos because of your comment!