Comment by anonymous_ea on Ways Frugality Increases Productivity · 2019-06-26T18:21:27.951Z · score: 7 (4 votes) · EA · GW
I’m a little hesitant to publish this because I don’t think most people should prioritize frugality.

Can you expand on why you think most people shouldn't prioritize frugality? Do you mean most of the general population, most EAs, or some other group?

Comment by anonymous_ea on What new EA project or org would you like to see created in the next 3 years? · 2019-06-18T23:40:07.303Z · score: 1 (1 votes) · EA · GW

I didn't see this comment earlier. Having read it, this seems like one of the best ideas here and certainly worth trying. I would also be curious to see if there are strong arguments against this idea.

Comment by anonymous_ea on What new EA project or org would you like to see created in the next 3 years? · 2019-06-18T23:33:20.030Z · score: 6 (3 votes) · EA · GW

If done well this could be good, but I worry that a concerted effort will most likely come across as fake or insincere and turn out to be a negative.

Comment by anonymous_ea on There's Lots More To Do · 2019-06-14T17:09:22.316Z · score: 1 (1 votes) · EA · GW

I don't think the two reasons for Ben's actions you suggested are mutually inconsistent. He may want to emotionally reject EA-style giving arguments, think of arguments that could justify this, and then get frustrated by what he sees as poor arguments for EA or against his arguments. This outcome (frustration and worry about the EA community's epistemic health) seems likely to me for someone who starts off emotionally wanting to reject certain arguments. He could also have identified genuine flaws in EA that both make him reject EA and make him frustrated by the epistemic health of EA.

Comment by anonymous_ea on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-12T20:27:28.996Z · score: 5 (4 votes) · EA · GW

Harry Potter and the Methods of Rationality can be good for inspiring an EA-like mood, as well as for introducing ways of thinking that can be helpful for EAs (although some of the ways of thinking it effectively promotes are anti-EA to varying degrees).

Comment by anonymous_ea on Crowdfunding for Effective Climate Policy · 2019-06-11T18:27:30.044Z · score: 4 (3 votes) · EA · GW

Can you expand on this claim? Do you mean that all research has non-zero bias (but some could be very close to 0 bias), that all research has significant bias towards the hypothesis or framework it's working in, or something else?

Explaining the Open Philanthropy Project's $17.5m bet on Sherlock Biosciences’ Innovations in Viral Diagnostics

2019-06-11T17:23:37.349Z · score: 25 (9 votes)
Comment by anonymous_ea on [Link] Book Review: The Secret Of Our Success | Slate Star Codex · 2019-06-07T16:42:56.332Z · score: 4 (3 votes) · EA · GW

What do you think is the point of the book that SSC missed?

Comment by anonymous_ea on [Link] Act of Charity · 2019-06-02T05:16:59.309Z · score: 17 (7 votes) · EA · GW

Notably, Jessica says in the Less Wrong comments that "GiveWell is a scam (as reasonable priors in this area would suggest), although I don't want this to be treated as a public accusation or anything; it's not like they're more of a scam than most other things in this general area."

I do not find her evidence very convincing. Some of it relates to private information which she privately messaged to Jeff Kaufman. The first part of this private information, a rumor relating to GiveWell's treatment of an ex-employee, was disconfirmed by the person in question according to Jeff. The rest of this private information is advice to talk to specific people and links to public blog posts.

The rest of the evidence seems to center around arguments that international charities like AMF create dependency and apathy, sourced from a YouTube philosophy video creator and apparent worker in international development who cites personal anecdotes and Dambisa Moyo's book Dead Aid. This person alleges that AMF and other organizations have put local bed net makers out of business, and says that he has personally seen many families that only bring out their bed net when the AMF inspector comes around. Jessica emphasizes further that the strongest section of the video is where he says that (quoting Jessica) "the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people's motivation to solve their community's problems)."

Arguments about dependency and about building sustainable institutions instead have been discussed plenty in EA circles over the years, and I won't rehash them here. I'll just note that Moyo herself says her critique should not be applied to private NGOs, and that even aid critics accept that health interventions, like those of most GiveWell top charities, can have a positive impact.

I also do not think that, even if the evidence was rock solid, this would mean that GiveWell is a scam; people can be wrong or disagree without it meaning that they're scamming you or that they're deluding themselves.

Edit: Cleaned up a couple of sentences

Comment by anonymous_ea on Is preventing child abuse a plausible Cause X? · 2019-06-01T21:37:26.979Z · score: 7 (2 votes) · EA · GW

Please do expand this onto a top level post if you are able to!

Comment by anonymous_ea on Drowning children are rare · 2019-05-31T18:39:21.185Z · score: 10 (6 votes) · EA · GW

Another ex-GiveWell employee's post criticizing GiveWell and the EA community was recently highly upvoted. See also Ben's old post Effective Altruism is Self-Recommending, which is currently at +30 (a solid amount given that it was posted on the old forum, where karma totals were much lower).

I think the reason this post is at near-zero karma is that it is objectively wrong in multiple ways, and is of negative value. I would say this is clear if you engage with the comments here, the comments on Ben's blog, and Jeff Kaufman's reply.

I actually think the voting on this post is too positive. I suspect this is because EAs tend to be wary of downvoting criticism that might be valuable. Ben's previous reputation for worthwhile criticism seems to be protecting him to a certain extent.

Comment by anonymous_ea on Drowning children are rare · 2019-05-31T18:23:01.123Z · score: 2 (4 votes) · EA · GW

I think people use upvotes both to signal agreement and to highlight thoughtful, effortful, or detailed comments. I think it's fairly clear that Kbog's comment was upvoted because people agreed with it, not because people thought it was a particularly insightful comment. That doesn't preclude people upvoting posts for being high quality.

If your point is more that people don't generally upvote quality posts that they disagree with, then I would probably agree with that.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-31T18:07:55.874Z · score: 3 (2 votes) · EA · GW

Also, I do want to say that I appreciate you trying hard to engage with skeptical people and to independently figure out promising new areas! That's valuable work for the community, even if this particular intervention doesn't pan out.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-31T18:06:58.076Z · score: 3 (2 votes) · EA · GW

Thanks for the clarification. I also share your model of mental health disorders being on the far end of a continuous spectrum of unendorsed behavior patterns. The crux for me here is more what the effect of psychedelics is on people not at the far end of the spectrum. I agree that it might be positive, and it might even be likely to be positive, but I'm not aware of any compelling empirical evidence or other reason to think that the effect is strong.

I have essentially a mathematical objection, in that I think the math is unlikely to work out, but I don't have a problem with the idea in principle (putting aside PR risks).

Thanks for linking your thread with Kit in your other reply. I think my objection is very similar to Kit's. Consider:

Total benefit = effect from boosting efficacy of current long-termist labor (1) + effect from increasing the amount of long-termist labor (2) + effect from short-termist benefits (3)

I expect (1) to be extremely not worth it given the costs of making any substantial improvement in the availability of psychedelics, and (2) to be speculative and to almost certainly not be worth it. By (3), do you mean the mental health benefits for people in general?

Comment by anonymous_ea on Drowning children are rare · 2019-05-30T16:35:53.997Z · score: 2 (2 votes) · EA · GW

My (small) update is also this, except confined to posts criticizing EA.

Comment by anonymous_ea on Drowning children are rare · 2019-05-28T22:59:30.198Z · score: 14 (8 votes) · EA · GW

Whether you think it's a rationalization or not, the claim in the OP is misleading at best. It sounds like you're paraphrasing them as saying that they don't recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn't backed up by anything in the OP.

Comment by anonymous_ea on Drowning children are rare · 2019-05-28T18:29:03.970Z · score: 17 (11 votes) · EA · GW

I found this post interesting overall. I have a few thoughts on the argument as a whole, but want to focus on one thing in particular:

[GiveWell] recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

I don't see this as an accurate summary of the reasons GiveWell outlined in the linked blogpost. The stated reason is that in the long-term, fully funding every strong giving opportunity they see would be counterproductive because their behavior might influence other donors' behavior:

We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.

Despite this, that year they recommended that Good Ventures fully fund the highest-value opportunities:

For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.

The post itself goes into much greater detail about these considerations.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T22:23:34.371Z · score: 10 (4 votes) · EA · GW

Argument in OP:

Interventions that increase the set of well-intentioned + capable people also seem quite robust to cluelessness, because they allow for more error correction at each timestep on the way to the far future.
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one's metaphysical assumptions).

I view this as a weak argument. I think one could make this sort of argument for a large number of interventions: reading great literature, yoga, any of a huge number of productivity systems, participating in healthy communities, quantified self, volunteering for local charities like soup kitchens, etc. Some of these interventions focus more on increasing capability (productivity systems, quantified self) and some focus more on improving intentions (participating in healthy communities, volunteering). Some focus on both to some degree.

The reason it seems like a weak argument to me is because:

(a) the average effects of psychedelics on increasing capability seem unlikely to be strong. They may be high for a small percentage of people, but I'm not aware of any particularly strong reason to think that the average effects are large.

They may be large for people with mental health issues, but then it's not really an intervention for increasing capability in general, it's a mental health intervention. These are distinct, and as I said above, psychedelics could plausibly be a top intervention for mental health.

(b) The improving-intentions aspect looks to be on even shakier ground. What is the evidence that taking psychedelics is an effective way of improving intentions in a manner relevant to working on the long term? I've never heard of any psychedelic or spiritual community being focused on longtermism in an EA-relevant manner. Some people report ego dissolution, but I'm not aware of even anecdotal reports that ego dissolution led to non-EAs thinking about and working on long-term things. It sounds like you know some cases where it may have been helpful, but I'm skeptical that a high-quality study would report anything amazing.

Comment by anonymous_ea on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-24T17:56:42.541Z · score: 3 (2 votes) · EA · GW

I don't have much to contribute beyond the many things that have already been said, but I suspect my overall opinion may be shared by many others: I think psychedelics could plausibly (but not >50% likely) be a very effective mental health intervention. One could perhaps call them a promising EA intervention, although the evidence base is quite thin at the moment. However, psychedelics don't currently seem likely to be a particularly effective long-term intervention. They might become one once they are legalized and there is more evidence behind them, but that seems quite a long way away. Trying to legalize psychedelics, or to improve research on them for the sake of long-term impacts, seems quite implausible as an effective intervention.

Comment by anonymous_ea on How does one live/do community as an Effective Altruist? · 2019-05-17T03:56:17.252Z · score: 5 (3 votes) · EA · GW

Regarding EA weddings, check out the forum thread Suggestions for EA Weddings Vows? from just a couple of months ago.

Comment by anonymous_ea on Which scientific discovery was most ahead of its time? · 2019-05-16T18:56:34.702Z · score: 8 (8 votes) · EA · GW

While I am certainly not an expert on this topic, the claim that general relativity wouldn't have been discovered until the 1970s without Einstein seems false to me. David Hilbert was doing similar work at the same time, and as far as I'm aware there was something of a race between Einstein and Hilbert to finish the work first, with Einstein winning narrowly (by a matter of days). More information can be found on the Wikipedia pages History of General Relativity and Relativity Priority Dispute.

Comment by anonymous_ea on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-16T18:39:08.494Z · score: 4 (3 votes) · EA · GW

Thanks for the additional research. I can add a few more things:

'Carl Shulman' commented on the GiveWell blog on December 31, 2007, seemingly familiar with GiveWell and having a positive impression of it at the time. This is presumably Carl Shulman (EA forum user Carl_Shulman), longtime EA and member of the rationality community.

Robert Wiblin's earliest post on Overcoming Bias dates back to June 22, 2012.

The earliest post of LessWrong user 'jkaufman' (presumably longtime EA Jeff Kaufman) dates back to 25th September 2011.

There's some discussion of the history of EA as connected with different communities on this LessWrong comment thread. User 'thebestwecan' (addressed as 'Jacy' by another comment, so presumably Jacy Reese) stated that the term 'Effective Altruism' was used for several years in the Felicifia community before CEA adopted the term, but jkaufman's Google search could only find the term going back to 2012. This comment is also interesting:

'lukeprog (Luke Muehlhauser) objects to CEA's claim that EA grew primarily out of Giving What We Can:

This was a pretty surprising sentence. Weren’t LessWrong & GiveWell growing large, important parts of the community before GWWC existed? It wasn’t called “effective altruism” at the time, but it was largely the same ideas and people.'

So apparently Luke Muehlhauser, an important and well-connected member of the rationality community, believed that important parts of the EA community came from LW and GiveWell before GWWC existed. This seems inconsistent with the idea that EA grew primarily out of LW alone.

Overall it seems to me that my earlier summary of EA growing out of the connected communities of GiveWell, Oxford (GWWC, people like Toby Ord and Will MacAskill etc), and LessWrong is probably correct.

Comment by anonymous_ea on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-15T16:39:12.310Z · score: 6 (4 votes) · EA · GW

A quick note on 'EA branched off from [LessWrong] to form a closely related subculture': this is a little inaccurate to my knowledge. In my understanding, EA initially came together from three main connected but separate sources: GiveWell; Oxford philosophers like Toby Ord and Will MacAskill, along with associated people like Rob Wiblin and Ben Todd; and LessWrong. I think these three sources all interacted with each other quite early on (pre-2010), but I don't think it's accurate to say that EA branched off from LessWrong.

Comment by anonymous_ea on Why we should be less productive. · 2019-05-10T21:47:35.779Z · score: 9 (5 votes) · EA · GW

In my experience (which could be different from yours), meal replacements are less about productivity than things like whether you like eating food, enjoy cooking food, have time to cook food, don't want to eat food you don't like, etc. In other words, it's more about valuing food or the process of cooking it less, rather than necessarily valuing productivity more.

Comment by anonymous_ea on Is EA unscalable central planning? · 2019-05-08T16:19:18.516Z · score: 9 (3 votes) · EA · GW

EA activities have historically changed over time. EA growth itself is much less prioritized now than it was a few years ago. The importance of money, funding, and earning to give has changed over time. There have been several posts about this over the years - "funding constrained" might be a good keyword to search for.

I think the core mission of doing the most good has always stayed the same and probably always will. The cause areas EA most focuses on have changed to some extent over the years. Most importantly, longtermism and far-future concerns have become more prominent over time among EA orgs and prominent EAs, but much less so among more casual EAs.

80000 Hours is perhaps the most prominent example of an organization whose activities, thinking, and priorities have changed over time. Some of this should be visible from reading some of their older content.

Comment by anonymous_ea on How do we check for flaws in Effective Altruism? · 2019-05-08T16:11:58.434Z · score: 6 (4 votes) · EA · GW

I also agree that some infrastructure would be good. In the meantime, I suggest reading criticisms of EA from both non-EAs and from EAs, and how EAs respond to the criticism (or how one could successfully respond to it). That's probably the closest you can get to external audits and checking for flaws in EA.

Unfortunately there's no central repository of EA criticism that I know of (this seems quite valuable to me!). Carl Shulman said on Facebook recently on a post by Julia Galef that he keeps a personal bookmarks folder of criticisms of groups that he has some affiliation with or interest in. If you're interested, you could try contacting him to see if it's shareable.

You can also check the mistakes pages of EA orgs, like GiveWell and 80000 Hours (and their evaluations page). That's only a partial solution since there could be many mistakes by EA orgs that they themselves don't recognize, but it's one step forward of many.

Comment by anonymous_ea on Scrupulosity: my EAGxBoston 2019 lightning talk · 2019-05-02T23:18:08.489Z · score: 2 (2 votes) · EA · GW

Thanks for posting this here! Scrupulosity is a relatively neglected topic in EA so it's good to see some more attention and care about it.

Comment by anonymous_ea on Legal psychedelic retreats launching in Jamaica · 2019-04-22T01:12:28.175Z · score: 2 (2 votes) · EA · GW

Thanks! Sorry for the slightly combative tone in my earlier comment.

Comment by anonymous_ea on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-21T21:52:16.788Z · score: 1 (1 votes) · EA · GW

To be clear, I don't know whether they specifically target elite college graduates. I was speaking slightly loosely and don't have any inside information on 80k. It just seems to me that they use elite colleges as a proxy for ambitious graduates.

Comment by anonymous_ea on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:43:29.752Z · score: 12 (7 votes) · EA · GW

I definitely read the post as suggesting implicitly that EAs should consider going on the retreat. What would be the point of the post otherwise? There is some discussion of psychedelics in general, but that doesn't seem to be the primary purpose.

Comment by anonymous_ea on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T18:12:21.898Z · score: 8 (4 votes) · EA · GW
I'm concerned you're attacking a straw man - did anyone ever claim 80k's list was true for every single possible person? I don't think so, and such a claim would be implausible.

As an anecdote, I've always read their list and recommendations as applying to their target audience of talented graduates of elite Western colleges.

Comment by anonymous_ea on Legal psychedelic retreats launching in Jamaica · 2019-04-17T20:16:53.661Z · score: 2 (3 votes) · EA · GW

OP neglected to mention that the retreat costs $1700 according to the website. Nor does there seem to be any kind of financial aid plan or discount for EAs, like CFAR offers.

Comment by anonymous_ea on Who is working on finding "Cause X"? · 2019-04-15T17:25:54.831Z · score: 1 (1 votes) · EA · GW

The link didn't work properly for me. Did you mean the following comment?

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think will help unlock important new interventions.
Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.
Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.
Comment by anonymous_ea on Who is working on finding "Cause X"? · 2019-04-12T17:27:29.420Z · score: 14 (7 votes) · EA · GW

Can you expand on this answer? E.g. how much this is a focus for you, how long you've been doing this, how long you expect to continue doing this, etc.

Comment by anonymous_ea on Apology · 2019-03-25T16:10:37.622Z · score: 15 (14 votes) · EA · GW

As an extreme example, the Young Adult fiction community has recently seen multiple authors cancel their completed and to-be-published books based on allegations that would not be taken very seriously in EA or most communities. One example is detailed in Slate, where Amelie Zhao's anticipated book, Blood Heir, was essentially retracted by the author after completion but before publication because of social media pressure stemming from flimsy-seeming accusations of racial insensitivity and plagiarism.

To be clear, I do not think it is plausible that Jacy is wholly innocent. Persistent accusations going back to his expulsion from college seem quite likely to be rooted in some level of harmful behavior. But I don't think Jacy apologizing and stepping back from public life is strong evidence of anything - it seems to me that he would likely do that even if he thought he had only committed minor misdemeanors. CEA's response seems like stronger evidence of harmful behavior to me.

Comment by anonymous_ea on Apology · 2019-03-25T15:36:34.411Z · score: 29 (15 votes) · EA · GW

Julia Wise clarified this in her reply elsewhere in this comment section:

The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.
Comment by anonymous_ea on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-15T17:45:06.584Z · score: 1 (3 votes) · EA · GW

I don't know much about investing, but a couple of quick comments might be helpful:

I understand many people knowledgeable about investing have thought they could beat the market and were wrong, but how many people were knowledgeable about both investing and rationality and were still wrong? Given how few rationalists there are, I doubt there have been many.

Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?

If we assign a 1/3 chance of the strategy beating the market by 3% and otherwise matches the market

1/3 chance seems possibly orders of magnitude too high to me.

Comment by anonymous_ea on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T22:26:20.322Z · score: 14 (9 votes) · EA · GW

I also perceived the personalized email as indicating a reasonable (30-50%+) chance of getting hired if I applied. I certainly didn't perceive it as indicating the 10% or perhaps even lower chance it seems to be after reading this thread. It was only after a couple of my friends also got similar emails that I realized that Open Phil was probably sending personalized emails to dozens, if not hundreds, of applicants.

Something that may be hard for Open Phil to know is that it felt really flattering for me to get a personalized email from one of the most prestigious EA orgs asking me to apply. It's sort of like if Harvard sent me an email saying that they'd seen my resume and thought I would be a good fit for them because of X, Y, and Z (all of which happened to be factors I personally also thought I was a good fit for Harvard). That may have caused me to overestimate my chances, and also would probably have led to me being more disappointed than otherwise if I had been rejected.

Comment by anonymous_ea on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:13:44.990Z · score: 34 (20 votes) · EA · GW

A meta point: A lot of the discussion here has focused on reducing the time spent applying. I think a more fundamental and important problem, based on the replies here and my own experiences, is that many, many EAs feel that either they're working at a top EA org or they're not contributing much. Since only a fraction of EAs can currently work at a top EA org due to supply vastly exceeding demand, even if the time spent applying goes down a lot, many EAs will end up feeling negatively about themselves and/or EA when they get rejected. See e.g. this post by Scott Alexander on the message he feels he gets from the community. A couple of excerpts below:

It just really sucks to constantly have one lobe of my brain thinking “You have to do [direct work/research], everybody is so desperate for your help and you’ll be letting them down if you don’t”, and the other lobe thinking “If you try to do the thing, you’ll be in an uphill competition against 2,000 other people who want to do it, which ends either in time wasted for no reason, or in you having an immense obligation to perform at 110% all the time to justify why you were chosen over a thousand almost-equally-good candidates”.
So instead I earn-to-give, and am constantly hit with messages (see above caveat! messages may not be real!) of “Why are you doing this? Nobody’s funding-constrained! Money isn’t real! Only talent constraints matter!” while knowing that if I tried to help with talent constraints, I would get “Sorry, we have 2,000 applicants per position, you’re imposing a huge cost on us by even making us evaluate you”.
Comment by anonymous_ea on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:04:31.824Z · score: 11 (4 votes) · EA · GW

To add another anecdote, my story is broadly similar to your story as well: top college, focused on EA, particularly well informed on longtermist topics, did plenty of EA projects, got good feedback from EAs, and now have fairly increased anxiety and depression about my ability to contribute to longtermism that I didn't have before. I haven't applied to many EA jobs, but a similar thing would probably happen to me as well if I did.

Comment by anonymous_ea on [blog cross-post] The remembering self needs to get real about the experiencing self. · 2019-02-10T06:37:52.444Z · score: 2 (2 votes) · EA · GW

Thanks! I don't disagree. Btw the link to the Remembering self is dead.

Comment by anonymous_ea on [blog cross-post] The remembering self needs to get real about the experiencing self. · 2019-02-09T04:04:58.207Z · score: 1 (1 votes) · EA · GW

I'm glad I read this piece. It makes a good point!

Can you expand on the connection to EA? I'm not sure I quite see it.

Comment by anonymous_ea on EA Boston 2018 Year in Review · 2019-02-05T22:19:29.446Z · score: 3 (3 votes) · EA · GW

Thanks for this great review. It helps outsiders understand how different EA groups and social scenes work.

Do you have estimates for how many people are involved in different groups and overall in Boston? Potentially for different levels of involvement? E.g. 30 hardcore/dedicated (whatever word seems best) EAs, 100 casual EAs.

Comment by anonymous_ea on Open Thread #43 · 2019-02-05T22:05:12.219Z · score: 4 (4 votes) · EA · GW

Future Perfect put out an article on this recently.

Comment by anonymous_ea on Is intellectual work better construed as exploration or performance? · 2019-01-28T22:51:25.979Z · score: 4 (4 votes) · EA · GW
Clearly both metaphors do work – I'm wondering which is better to cultivate on the margin.
My intuition is that it's better to lean on the image of intellectual work as exploration; curious what folks here think.

I'm a bit unclear on the question exactly. You ask which metaphor is better to cultivate on the margin, but I'm not sure for whom or for what purpose. Both metaphors seem clearly true to some extent to me, and which one fits better depends a lot on the individual and the field, IMO.

Comment by anonymous_ea on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T22:45:59.191Z · score: 1 (1 votes) · EA · GW
Dylan Matthews' claim that nuclear war would cause "much or all of humankind" to suddenly vanish is unsubstantiated. The idea that billions of people worldwide will die from nuclear war is not supported by models with realistic numbers of nuclear warheads. "Much" is a very vague term, but speculation that every (or nearly every) human will die is a false alarm. Now that is easy to forgive, as it's a common belief within EA anyway and probably someone will try to argue with me about it.

Could you expand on this or give sources? I do hear EAs talking about nuclear war and nuclear winter being existential threats.

Comment by anonymous_ea on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T22:38:17.212Z · score: 9 (4 votes) · EA · GW

I think Vox, Ezra Klein, Dylan Matthews etc would disagree about point 2. Not to put words in someone else's mouth, but my sense is that Ezra Klein doesn't think that their coverage is substantially flawed and systematically biased relative to other comparable sources. He might even argue that their coverage is less biased than most sources.

Could you link to some of the criticisms you mentioned in point 1? I've seen others claim that as well on previous EA Forum posts about Future Perfect, and I think it would be good to have at least a few sources on this. Many EAs outside the US probably know very little about Vox.

Comment by anonymous_ea on Announcing a predoctoral research programme in economics at the Global Priorities Institute · 2019-01-21T21:14:19.751Z · score: 7 (2 votes) · EA · GW

How many fellows do you plan to accept?

Comment by anonymous_ea on A guide to effective altruism fellowships · 2019-01-21T21:12:41.743Z · score: 4 (4 votes) · EA · GW

This is a really great and helpful post. Thanks so much for running it, trying to evaluate its impact, and writing it up!

Comment by anonymous_ea on What has Effective Altruism actually done? · 2019-01-16T19:26:03.028Z · score: 6 (4 votes) · EA · GW

Interesting question! I think GiveWell's estimate for how much money they've directed over the years should be counted in some way as well.

Comment by anonymous_ea on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-08T18:37:32.220Z · score: 3 (3 votes) · EA · GW

I want to echo other people in saying that this was both incredibly impressive and very impactful! Thank you so much :)

The case for taking AI seriously as a threat to humanity

2018-12-23T01:00:08.314Z · score: 18 (9 votes)

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

2018-11-21T15:58:31.856Z · score: 22 (10 votes)