Posts

Explaining the Open Philanthropy Project's $17.5m bet on Sherlock Biosciences’ Innovations in Viral Diagnostics 2019-06-11T17:23:37.349Z · score: 25 (9 votes)
The case for taking AI seriously as a threat to humanity 2018-12-23T01:00:08.314Z · score: 18 (9 votes)
Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) 2018-11-21T15:58:31.856Z · score: 22 (10 votes)

Comments

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:09:30.550Z · score: 6 (6 votes) · EA · GW

I don't understand why this question is downvoted with so many votes. It seems like a reasonable, if underspecified, question to me.

Edit: When I commented, this comment was at -2 with perhaps 15 votes.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:05:26.842Z · score: 28 (13 votes) · EA · GW

I'd be super interested in hearing you elaborate more on most of the points! Especially the first two.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-16T17:44:24.776Z · score: 7 (9 votes) · EA · GW

Also, what can normal EAs do about it?

Comment by anonymous_ea on Impact Report for Effective Altruism Coaching · 2019-08-15T18:07:36.750Z · score: 2 (2 votes) · EA · GW

I'm surprised that business expenses are 40% of revenue. I thought it would be a lot lower than that. Are you comfortable sharing what the biggest expenses are?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:03:42.280Z · score: 10 (8 votes) · EA · GW

How do you decide your own cause prioritization? Relatedly, how do you decide where to donate?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:03:06.756Z · score: 20 (10 votes) · EA · GW

What do you think are the things or ideas that most casual EAs don't know much about or appreciate enough, but are (deservedly or undeservedly) very influential in EA hubs or organizations like CEA, 80K, GPI, etc? Some candidates I have in mind for this are things like cluelessness, longtermism, the possibility of short AI timelines, etc.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:00:28.477Z · score: 20 (10 votes) · EA · GW

If you had the option of making a small change to EA by pressing a button, would you do it? If so, what would it be? What about a big change?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T17:59:21.384Z · score: 13 (9 votes) · EA · GW

What do you see as the best longterm path for EA? Should we try to stay small and weird, or try to get buy-in from the masses? How important is academic influence for the long term success of EA?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T17:57:56.080Z · score: 18 (14 votes) · EA · GW

Is there a question you want to answer that hasn't been asked yet? What's your answer to it?

Comment by anonymous_ea on Is running Folding@home / Rosetta@home beneficial? · 2019-08-01T17:00:10.099Z · score: 6 (3 votes) · EA · GW

I agree that the impact of this decision is likely to be very small, but trying to analyze a complicated phenomenon can be personally beneficial for improving your skills at analyzing the impact of other phenomena. In general, it seems good for EAs to practice analyzing the impact of various interventions, as long as they keep in mind that both the impact of the intervention and the direct value of the analysis might be small.

Comment by anonymous_ea on The EA Forum is a News Feed · 2019-08-01T16:56:13.129Z · score: 1 (1 votes) · EA · GW

As a data point, I would commit to tagging old posts for at least 1 hour if other people were also doing it or expressed interest in it happening.

Comment by anonymous_ea on Is running Folding@home / Rosetta@home beneficial? · 2019-07-31T15:24:40.068Z · score: 6 (5 votes) · EA · GW

Since this post has gotten very little traction, I wanted to let you (orenmn) know that at least I found it valuable and interesting!

Comment by anonymous_ea on Editing available for EA Forum drafts · 2019-07-31T15:11:36.677Z · score: 1 (1 votes) · EA · GW

Thank you! I'll send you a link if/when I get around to working on the draft :)

Comment by anonymous_ea on Editing available for EA Forum drafts · 2019-07-30T16:50:31.823Z · score: 1 (1 votes) · EA · GW

Thanks for offering this service! I'd like to share a draft/idea with you (the second example in my comment here) but I don't want to use my personal email since I want to keep my account anonymous. Is there a way I could get feedback from you without creating an anonymous email?

Comment by anonymous_ea on Four practices where EAs ought to course-correct · 2019-07-30T16:45:36.742Z · score: 3 (2 votes) · EA · GW

There's an incorrect link in this sentence:

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

The link goes to Noah Smith's blog post advocating the two paper rule.

Comment by anonymous_ea on EA Forum Prize: Winners for June 2019 · 2019-07-26T16:22:30.677Z · score: 4 (3 votes) · EA · GW

Is there any data on how prize winners generally feel about winning? Does the prize help motivate them to either write the material or post it here?

Comment by anonymous_ea on EA Forum Prize: Winners for June 2019 · 2019-07-26T16:21:36.895Z · score: 15 (7 votes) · EA · GW

I like the idea of having separate categories for professional and amateur work (or some similar categorization). I'd still like to encourage professional work to be posted here, but encouraging non-professional work is also important.

Comment by anonymous_ea on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-25T16:26:51.192Z · score: 6 (4 votes) · EA · GW

Your link to Anna Salamon's comment goes to the Wikipedia page for sealioning :)

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T15:19:47.828Z · score: 4 (3 votes) · EA · GW

I just skimmed some of the recent posts on your website and liked them! What makes you think that they're not good enough to be posted here? They definitely seem less comprehensive than some of your (very comprehensive) posts here, but still more than good enough to post here.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T15:05:31.098Z · score: 1 (1 votes) · EA · GW

Thank you! Do you happen to have any advice or feedback on how I'm planning to write it? I'm tempted to make it fairly short and open it up for other people to comment on with their own experiences, but I'm worried that a short, feelings-focused post won't get a lot of engagement. Trying to make it more comprehensive by e.g. compiling some of the ways the EA community ends up signaling that it really wants highly talented people (and almost never signals the opposite) might make it more engaging, but would also decrease the likelihood that I'll publish it anytime soon.

I could also pose it slightly differently as a question post on how people feel about their place in the community.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T14:58:04.655Z · score: 5 (3 votes) · EA · GW

Interesting. I hadn't heard of sealioning before. You're right about the thing I'm pointing to being somewhat different. I think EAs want to encourage good criticisms of EA and want to be the kind of people and movement where criticisms are received positively. I think this often leads to EAs being overly generous with criticism posts on an object level, although I don't know whether this is positive or negative on aggregate.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-24T21:40:05.486Z · score: 14 (7 votes) · EA · GW

I have two drafts saved with only a few links or a couple of paragraphs written:

1. How do we respond to criticism of EA on the forum?

Several commentators on the forum have recently casually expressed theories of how effective altruists respond to criticism of EA on the forum. Some have expressed skepticism of the idea that EAs can respond positively to criticism of EA. I aim to look at several notable comments and posts on the forum over at least the past several months to see how criticism is practically received on the forum.

My tentative theory, without having properly researched this, is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough. Criticism of this sort, while often praised, is often not deeply engaged with. On the rare occasions that criticism seems threatening enough to EA, there's deeper engagement with the actual arguments, rather than responses that mostly try to signal-boost the criticism. There's also one instance of a threatening criticism on a particularly political topic that attracted significantly lower-quality comments, in my opinion.

The posts I've casually collected so far are:

Benjamin Hoffman's Drowning Children are Rare

Jeff Kaufman's There's Lots More To Do

beth's Three Biases That Made Me Believe in AI Risk

Fods12's Effective Altruism is an Ideology, not (just) a Question

EAs for Inclusion's Making discussions in EA groups inclusive

Jessica Taylor's The AI Timelines Scam (maybe?)

Jessica Taylor's The Act of Charity

Benjamin Hoffman's Effective Altruism is Self-recommending

Alexander Guzey's William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better"

Chris Smith's The Optimizer's Curse & Wrong-Way Reductions

Milan Griffes' Cash prizes for the best arguments against psychedelics being an EA cause area (maybe?)

2. Will I be accepted in EA if I'm not prodigiously successful professionally?

The EA community contains a tremendous number of extremely talented and accomplished people. I worry that unless I also achieve a lot of professional success, other EAs won't particularly respect me, like me, or particularly want to interact with me. While some of this is definitely related to my own issues around social acceptance, I think there's a decent chance that many other people also feel this way. My aim is to explore my feelings and what about EA makes me feel this way, and to encourage others to express how they feel about their place in the community as well. At a meta level, I hope to at least explore how a different, more feelings-focused article might fit on this forum. I don't want to give any specific solutions, imply that this is a problem of any particular magnitude, or even imply that this is necessarily a problem on net for EA.

Comment by anonymous_ea on There's Lots More To Do · 2019-07-18T19:21:09.707Z · score: 8 (6 votes) · EA · GW

I don't feel inclined to get into this, but FWIW I have read a reasonable amount of Ben's writings on both EA and non-EA topics, and I do not find it obvious that his main, subconscious motivation is epistemic health rather than a need to reject EA.

Comment by anonymous_ea on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T06:07:46.113Z · score: 4 (3 votes) · EA · GW

What did you find compelling about the comment that you found to be the best argument?

Comment by anonymous_ea on Advice for an Undergrad · 2019-07-03T16:59:22.576Z · score: 3 (2 votes) · EA · GW
[I] figure two years is enough time to major in pretty much anything outside of the sciences or engineering. My current plan is to double major in Philosophy and Math (with an applied bent)

Have you taken any Math classes before? Starting and finishing a Math major in 2 years sounds unrealistic to me.

Comment by anonymous_ea on Aid Scepticism and Effective Altruism · 2019-07-03T16:55:43.666Z · score: 8 (7 votes) · EA · GW
This is the first post in a short series where I share some academic articles on effective altruism I've written over the last couple of years. Hopefully, this is also the first in a longer series of posts over the summer where I try to share some of my thinking over the last year - for these, I'm aiming to lower my quality threshold, in order to ease the transmission of ideas and discussion from the research side of EA to the broader community, and to get some feedback.

I'm excited to hear this and look forward to reading more of your posts!

Comment by anonymous_ea on Should we talk about altruism or talk about justice? · 2019-07-03T16:49:44.965Z · score: 5 (4 votes) · EA · GW

I'm curious to hear from someone who downvoted this post about why they did so

Comment by anonymous_ea on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T20:02:45.050Z · score: 29 (16 votes) · EA · GW

This is a good article and a valuable discussion to have. I have a couple of nitpicks on the discussion of theoretical frameworks that tend to be ignored by EA. You mentioned the following examples:

Sociological theory: potentially relevant to understanding causes of global poverty, how group dynamics operates and how social change occurs.
Ethnography: potentially highly useful in understanding causes of poverty, efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.
Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.
Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but mostly this method is ignored as a potential source of information about social movements, improving society, and assessing the risk of catastrophic risks.
Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.
  • I have never seen any sociological analysis in EA and agree that it's been (almost?) completely ignored.
  • Ethnographies have been almost completely absent from EA throughout its history, with the exception of a recent small increase in interest. A recent upvoted, positively received, and discussed post on this forum expressed interest in an ethnography of EA. A comment on that post mentioned a couple of ethnographies of EAs, including this one of London EA.
  • Phenomenology and existentialism: I'm not sure what this means. EA has spent a fair amount of time thinking about the value of different types of biological and synthetic lifeforms (e.g. wildlife suffering, suffering in AI). The second example seems a bit underdefined to me. I'm not familiar with phenomenology and existentialism and might be misunderstanding this section.
  • For historical case studies, I think "mostly ignored" is misleading. A more accurate description might be that they're underutilized relative to their full potential and relative to other frameworks, but they're taken seriously when they come up.

As you mention, historical case studies have been used in x-risk analysis. The Open Philanthropy Project has also commissioned and written about several historical case studies, going back to its GiveWell Labs days. Their page on the History of Philanthropy says:

We’ve found surprisingly little existing literature on the history of philanthropy. In particular, we’ve found few in-depth case studies examining questions like what role philanthropists, compared with other actors, played in bringing important changes to pass. To help fill that gap, we are commissioning case studies on past philanthropic success stories, with a focus on cases that seem — at first glance — to be strong examples of philanthropy having a major impact on society.

They go on to list 10+ case studies (only one focusing on x-risks), including some from a $165k grant to the Urban Institute specifically for the purpose of producing more case studies. Of course $165k is a small amount for OpenPhil, but it seems to me, for a few reasons, that they take this work seriously.

The Sentience Institute has published 6 reports, 3 of which are historical case studies. Historical case studies relating to nuclear war, like the Petrov and Arkhipov incidents, have been widely discussed in EA as well. The Future of Life Institute has published some material relating to this.

  • Regression analysis: I find this example puzzling. Regressions are widely used in development economics, which has heavily influenced EA thinking on global health. EAs who are professional economists or otherwise have reason to use regressions do so when appropriate (e.g. Rachel Glennerster and J-PAL, some of Eva Vivalt's work, etc). GiveWell's recommendation of deworming charities is largely dependent on the regression estimates of a couple of studies.

More generally, regressions are a subset of statistical analysis techniques. I'm not sure if EA can be credibly accused of ignoring statistical analysis. I also don't think the other examples you gave of uses of regressions (political reform, AI and nuclear policy) are a great fit for regression analysis or statistical analysis in general.


Comment by anonymous_ea on Ways Frugality Increases Productivity · 2019-06-26T18:21:27.951Z · score: 17 (8 votes) · EA · GW
I’m a little hesitant to publish this because I don’t think most people should prioritize frugality.

Can you expand on why you think most people shouldn't prioritize frugality? Do you mean most of the general population, most EAs, or some other group?

Comment by anonymous_ea on What new EA project or org would you like to see created in the next 3 years? · 2019-06-18T23:40:07.303Z · score: 1 (1 votes) · EA · GW

I didn't see this comment earlier. Having read it, this seems like one of the best ideas here and certainly worth trying. I would also be curious to see if there are strong arguments against this idea.

Comment by anonymous_ea on What new EA project or org would you like to see created in the next 3 years? · 2019-06-18T23:33:20.030Z · score: 7 (4 votes) · EA · GW

If done well this could be good, but I worry that a concerted effort will most likely come across as fake or insincere and turn out to be a negative.

Comment by anonymous_ea on There's Lots More To Do · 2019-06-14T17:09:22.316Z · score: 3 (3 votes) · EA · GW

I don't think the two reasons for Ben's actions you suggested are mutually inconsistent. He may want to emotionally reject EA-style giving arguments, think of arguments that could justify this, and then get frustrated by what he sees as poor arguments for EA or against his own arguments. This outcome (frustration and worry about the EA community's epistemic health) seems likely to me for someone who starts off emotionally wanting to reject certain arguments. He could also have identified genuine flaws in EA that both make him reject EA and make him frustrated by its epistemic health.

Comment by anonymous_ea on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-12T20:27:28.996Z · score: 5 (4 votes) · EA · GW

Harry Potter and the Methods of Rationality can be good for inspiring an EA-like mood, as well as for introducing ways of thinking that can be helpful for EAs (although some of the ways of thinking it effectively promotes are anti-EA to varying degrees).

Comment by anonymous_ea on Crowdfunding for Effective Climate Policy · 2019-06-11T18:27:30.044Z · score: 4 (3 votes) · EA · GW

Can you expand on this claim? Do you mean that all research has non-zero bias (but some could be very close to 0 bias), that all research has significant bias towards the hypothesis or framework it's working in, or something else?

Comment by anonymous_ea on [Link] Book Review: The Secret Of Our Success | Slate Star Codex · 2019-06-07T16:42:56.332Z · score: 4 (3 votes) · EA · GW

What do you think is the point of the book that SSC missed?

Comment by anonymous_ea on [Link] Act of Charity · 2019-06-02T05:16:59.309Z · score: 17 (7 votes) · EA · GW

Notably, Jessica says in the Less Wrong comments that "GiveWell is a scam (as reasonable priors in this area would suggest), although I don't want this to be treated as a public accusation or anything; it's not like they're more of a scam than most other things in this general area."

I do not find her evidence very convincing. Some of it relates to private information which she privately messaged to Jeff Kaufman. The first part of this private information, a rumor relating to GiveWell's treatment of an ex-employee, was disconfirmed by the person in question according to Jeff. The rest of this private information is advice to talk to specific people and links to public blog posts.

The rest of the evidence seems to center around arguments that international charities like AMF create dependency and apathy, sourced from a YouTube philosophy video creator and apparent worker in international development who cites personal anecdotes and Dambisa Moyo's book Dead Aid. This person alleges that AMF and other organizations have put local bed net makers out of business and says that he has personally seen many families that only bring out their bed net when the AMF inspector comes around. Jessica emphasizes further that the strongest section of the video is where he says that (quoting Jessica) "the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people's motivation to solve their community's problems)."

Arguments about dependency and about building sustainable institutions instead have been discussed plenty in EA circles over the years, and I won't rehash them further here. I just want to note that Moyo herself says that her critique should not be applied to private NGOs, and that even aid critics accept that health interventions, like those of most GiveWell top charities, can have a positive impact.

I also do not think that, even if the evidence were rock solid, this would mean that GiveWell is a scam; people can be wrong or disagree without it meaning that they're scamming you or deluding themselves.

Edit: Cleaned up a couple of sentences

Comment by anonymous_ea on Is preventing child abuse a plausible Cause X? · 2019-06-01T21:37:26.979Z · score: 7 (2 votes) · EA · GW

Please do expand this onto a top level post if you are able to!

Comment by anonymous_ea on Drowning children are rare · 2019-05-31T18:39:21.185Z · score: 10 (6 votes) · EA · GW

Another post by an ex-GiveWell employee criticizing GiveWell and the EA community was recently highly upvoted. See also Ben's old post Effective Altruism is Self-Recommending, which is currently at +30 (a solid amount given that it was posted on the old forum, where karma totals were much lower).

I think the reason this post is at near 0 karma is because it is objectively wrong in multiple ways, and is of negative value. I would say this is clear if you engage with the comments here, on Ben's blog, and Jeff Kaufman's reply.

I actually interpret the voting on this post to be too positive. I think it is because EAs tend to be wary of downvoting criticisms that might be good. Ben's previous reputation for worthwhile criticism seems to be protecting him to a certain extent.

Comment by anonymous_ea on Drowning children are rare · 2019-05-31T18:23:01.123Z · score: 2 (4 votes) · EA · GW

I think people use upvotes both to signal agreement and to highlight thoughtful, effortful, or detailed comments. I think it's fairly clear that Kbog's comment was upvoted because people agreed with it, not because people thought it was a particularly insightful comment. That doesn't preclude people upvoting posts for being high quality.

If your point is more that people don't generally upvote quality posts that they disagree with, then I would probably agree with that.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-31T18:07:55.874Z · score: 3 (2 votes) · EA · GW

Also, I do want to say that I appreciate you trying hard to engage with skeptical people and to independently figure out promising new areas! That's valuable work for the community, even if this particular intervention doesn't pan out.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-31T18:06:58.076Z · score: 3 (2 votes) · EA · GW

Thanks for the clarification. I also share your model of mental health disorders being on the far end of a continuous spectrum of unendorsed behavior patterns. The crux for me here is more what the effect of psychedelics is on people not at the far end of the spectrum. I agree that it might be positive, it might even be likely to be positive, but I'm not aware of any compelling empirical evidence or other reason to think that it is strong.

I have essentially a mathematical objection, in that I think the math is unlikely to work out, but I don't have a problem with the idea in principle (putting aside PR risks).

Thanks for linking your thread with Kit in your other reply. I think my objection is very similar to Kit's. Consider:

Total benefit = effect from boosting efficacy of current long-termist labor (1) + effect from increasing the amount of long-termist labor (2) + effect from short-termist benefits (3)

I expect (1) to be extremely not worth it given the costs of making any substantial improvement in the availability of psychedelics, and (2) to be speculative and to almost certainly not be worth it. By (3), do you mean the mental health benefits for people in general?

Comment by anonymous_ea on Drowning children are rare · 2019-05-30T16:35:53.997Z · score: 2 (2 votes) · EA · GW

My (small) update is also this, except confined to posts criticizing EA.

Comment by anonymous_ea on Drowning children are rare · 2019-05-28T22:59:30.198Z · score: 14 (8 votes) · EA · GW

Whether you think it's a rationalization or not, the claim in the OP is misleading at best. It sounds like you're paraphrasing them as saying that they don't recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn't backed up by anything in the OP.

Comment by anonymous_ea on Drowning children are rare · 2019-05-28T18:29:03.970Z · score: 17 (11 votes) · EA · GW

I found this post interesting overall. I have a few thoughts on the argument as a whole, but want to focus on one thing in particular:

[GiveWell] recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

I don't see this as an accurate summary of the reasons GiveWell outlined in the linked blogpost. The stated reason is that in the long-term, fully funding every strong giving opportunity they see would be counterproductive because their behavior might influence other donors' behavior:

We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.

Despite this, that year they recommended that Good Ventures fully fund the highest-value opportunities:

For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.

The post itself goes into much greater detail about these considerations.


Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T22:23:34.371Z · score: 10 (4 votes) · EA · GW

Argument in OP:

Interventions that increase the set of well-intentioned + capable people also seem quite robust to cluelessness, because they allow for more error correction at each timestep on the way to the far future.
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one's metaphysical assumptions).

I view this as a weak argument. I think one could make this sort of argument for a large number of interventions: reading great literature, yoga, a huge number of productivity systems, participating in healthy communities, quantified self, volunteering for local charities like working at a soup kitchen, etc. Some of these interventions focus more on the increasing-capability aspect (e.g. productivity systems, quantified self) and some focus more on improving intentions (participating in healthy communities, volunteering). Some focus on both to some degree.

The reason it seems like a weak argument to me is because:

(a) the average effects of psychedelics on increasing capability seem unlikely to be strong. They may be high for a small percentage of people, but I'm not aware of any particularly strong reason to think that the average effects are large.

They may be large for people with mental health issues, but then it's not really an intervention for increasing capability in general, it's a mental health intervention. These are distinct, and as I said above, psychedelics could plausibly be a top intervention for mental health.

(b) The improving intentions aspect looks to be on even shakier grounds. What is the evidence that taking psychedelics is an effective treatment for improving intentions in a manner relevant to working on the long term? I've never heard of any psychedelic or spiritual community being focused on long termism in an EA relevant manner. Some people report ego dissolution, but I'm not even aware of any anecdotal reports that ego dissolution led to non-EAs thinking and working on long term things. It sounds like you know some cases where it may have been helpful, but I'm skeptical that a high quality study would report something amazing.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-24T17:56:42.541Z · score: 3 (2 votes) · EA · GW

I don't have much to contribute beyond the many things that have already been said, but I suspect my overall opinion may be shared by many others: I think psychedelics could plausibly (but not >50%) be a very effective mental health intervention. One could perhaps call them a promising EA intervention, although the evidence base is quite thin at the moment. However, psychedelics don't seem likely to be a particularly effective long-term intervention at the moment. They might be once they are legalized and there is more evidence behind this, but that seems quite a long way away. Trying to legalize psychedelics or improve research for the long-term impacts seems quite implausible as an effective intervention.

Comment by anonymous_ea on How does one live/do community as an Effective Altruist? · 2019-05-17T03:56:17.252Z · score: 5 (3 votes) · EA · GW

Regarding EA weddings, check out the forum thread Suggestions for EA Weddings Vows? from just a couple of months ago.

Comment by anonymous_ea on Which scientific discovery was most ahead of its time? · 2019-05-16T18:56:34.702Z · score: 8 (8 votes) · EA · GW

While I am certainly not an expert on this topic, the claim that general relativity wouldn't have been discovered until the 1970s without Einstein seems false to me. David Hilbert was doing similar work at the same time, and as far as I'm aware there was something of a race between Einstein and Hilbert to finish the work first, with Einstein winning narrowly (on the order of days). More information can be found on the Wikipedia pages History of General Relativity and Relativity Priority Dispute.

Comment by anonymous_ea on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-16T18:39:08.494Z · score: 4 (3 votes) · EA · GW

Thanks for the additional research. I can add a few more things:

'Carl Shulman' commented on the GiveWell blog on December 31, 2007, seemingly familiar with GiveWell and having a positive impression of it at the time. This is presumably Carl Shulman (EA forum user Carl_Shulman), longtime EA and member of the rationality community.

Robert Wiblin's earliest post on Overcoming Bias dates back to June 22, 2012.

The earliest post of LessWrong user 'jkaufman' (presumably longtime EA Jeff Kaufman) dates back to 25th September 2011.

There's some discussion of the history of EA as connected with different communities on this LessWrong comment thread. User 'thebestwecan' (addressed as 'Jacy' by another comment, so presumably Jacy Reese) stated that the term 'Effective Altruism' was used for several years in the Felicifia community before CEA adopted it, but jkaufman's Google search could only find the term going back to 2012. This comment is also interesting:

'lukeprog (Luke Muehlhauser) objects to CEA's claim that EA grew primarily out of Giving What We Can at http://www.effectivealtruism.org/#comments :

This was a pretty surprising sentence. Weren’t LessWrong & GiveWell growing large, important parts of the community before GWWC existed? It wasn’t called “effective altruism” at the time, but it was largely the same ideas and people.'


So apparently Luke Muehlhauser, an important and well connected member of the rationality community, believed that important parts of the EA community came from LW and GW before GWWC existed. This seems to exclude the idea that EA grew primarily out of LW.

Overall it seems to me that my earlier summary of EA growing out of the connected communities of GiveWell, Oxford (GWWC, people like Toby Ord and Will MacAskill etc), and LessWrong is probably correct.

Comment by anonymous_ea on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-15T16:39:12.310Z · score: 6 (4 votes) · EA · GW

A quick note on 'EA branched off from [LessWrong] to form a closely related subculture': this is a little inaccurate to my knowledge. In my understanding, EA initially came together from 3 main connected but separate sources: GiveWell, Oxford philosophers like Toby Ord and Will MacAskill and other associated people like Rob Wiblin, Ben Todd etc, and LessWrong. I think these 3 sources all interacted with each other quite early on (pre-2010), but I don't think it's accurate to say that EA branched off from LessWrong.