Posts

Study results: The most convincing argument for effective donations 2020-06-28T22:45:28.216Z · score: 67 (31 votes)
How should we run the EA Forum Prize? 2020-06-23T11:15:22.974Z · score: 29 (13 votes)
80K Podcast: Stuart Russell 2020-06-23T01:46:24.584Z · score: 33 (9 votes)
EA Organization Updates: May 2020 2020-06-22T10:44:05.660Z · score: 31 (13 votes)
EA Forum feature suggestion thread 2020-06-16T16:58:58.569Z · score: 34 (12 votes)
Modeling the Human Trajectory (Open Philanthropy) 2020-06-16T09:27:46.241Z · score: 49 (21 votes)
EA Forum Prize: Winners for April 2020 2020-06-08T09:07:31.471Z · score: 24 (8 votes)
Forum update: Tags are live! Go use them! 2020-06-01T16:26:12.900Z · score: 70 (22 votes)
EA Handbook, Third Edition: We want to hear your feedback! 2020-05-28T04:31:27.031Z · score: 43 (20 votes)
Helping those who need it 2020-05-27T11:08:48.393Z · score: 16 (6 votes)
Why effective altruism? 2020-05-27T10:40:20.305Z · score: 10 (2 votes)
Improving the world 2020-05-27T10:39:47.207Z · score: 8 (1 votes)
Evidence and reasoning 2020-05-27T10:37:59.045Z · score: 9 (2 votes)
Excitement, hope, and fulfillment 2020-05-27T10:29:02.817Z · score: 11 (6 votes)
Finding a balance 2020-05-27T10:28:43.903Z · score: 11 (3 votes)
External evaluation of GiveWell's research 2020-05-22T04:09:01.964Z · score: 13 (5 votes)
EA Organization Updates: April 2020 2020-05-19T09:05:38.641Z · score: 30 (13 votes)
EA Forum Prize: Winners for March 2020 2020-05-13T10:09:33.666Z · score: 26 (13 votes)
Open Philanthropy: Our Progress in 2019 and Plans for 2020 2020-05-12T11:49:40.509Z · score: 42 (19 votes)
Notes on getting involved with small EA projects 2020-05-12T09:16:32.808Z · score: 21 (6 votes)
Open Philanthropy: Questions We Ask Ourselves Before Making a Grant 2020-05-04T04:52:32.092Z · score: 20 (11 votes)
EA Organization Updates: March 2020 2020-04-17T11:28:26.976Z · score: 24 (10 votes)
Book Review: The Precipice 2020-04-09T21:21:01.242Z · score: 38 (16 votes)
80,000 Hours' annual review (2020) 2020-04-07T05:48:27.806Z · score: 17 (4 votes)
On the impossibility of supersized machines 2020-04-02T22:09:55.997Z · score: 17 (8 votes)
EA Forum Prize: Winners for February 2020 2020-04-02T00:22:09.933Z · score: 14 (6 votes)
Official EA Forum Feedback Survey 2020-03-31T09:53:28.799Z · score: 38 (12 votes)
What posts do you want someone to write? 2020-03-24T06:41:35.531Z · score: 49 (16 votes)
What are examples of EA work being reviewed by non-EA researchers? 2020-03-24T06:04:58.892Z · score: 56 (28 votes)
Lant Pritchett's "smell test": is your impact evaluation asking questions that matter? 2020-03-18T23:38:35.261Z · score: 36 (10 votes)
What COVID-19 questions should Open Philanthropy pay Good Judgment to work on? 2020-03-18T23:31:18.634Z · score: 36 (12 votes)
What are some 1:1 meetings you'd like to arrange, and how can people find you? 2020-03-18T05:02:16.621Z · score: 27 (10 votes)
EA Organization Updates: February 2020 2020-03-15T06:41:05.426Z · score: 22 (7 votes)
80,000 Hours: Anonymous contributors on flaws of the EA community 2020-03-04T00:32:11.525Z · score: 46 (22 votes)
EA Forum Prize: Winners for January 2020 2020-03-02T12:02:32.207Z · score: 26 (11 votes)
80,000 Hours: Anonymous contributors on EA movement growth 2020-02-18T00:09:58.434Z · score: 30 (12 votes)
EA Organization Updates: January 2020 2020-02-14T06:19:35.194Z · score: 24 (12 votes)
Poverty in Depression-era England: Excerpts from Orwell's "Wigan Pier" 2020-02-12T01:01:42.776Z · score: 15 (4 votes)
Anonymous contributors answer: How honest and candid should high-profile people be? 2020-02-12T00:14:34.254Z · score: 22 (7 votes)
AI Impacts: Historic trends in technological progress 2020-02-12T00:08:21.539Z · score: 55 (23 votes)
Volunteering isn't free 2020-02-04T09:04:26.152Z · score: 40 (22 votes)
80,000 Hours: Ways to be successful that people don't talk about enough 2020-01-31T09:59:02.986Z · score: 11 (5 votes)
EA Forum Prize: Winners for December 2019 2020-01-27T10:33:16.359Z · score: 31 (15 votes)
Lewis Bollard: 10 Years of Progress for Farm Animals 2020-01-24T12:47:21.432Z · score: 23 (9 votes)
EA Organization Updates: December 2019 2020-01-16T11:47:54.077Z · score: 27 (10 votes)
EA Forum Prize: Winners for November 2019 2020-01-16T00:56:19.753Z · score: 26 (8 votes)
Five GCR grants from the Global Challenges Foundation 2020-01-16T00:46:05.580Z · score: 31 (10 votes)
Notes on hiring a copyeditor for CEA 2020-01-09T12:56:37.126Z · score: 89 (45 votes)
Reddit highlight: EA and socialism 2020-01-03T13:46:40.508Z · score: 19 (8 votes)
Purchase fuzzies and utilons separately (Eliezer Yudkowsky) 2019-12-27T02:21:19.723Z · score: 37 (17 votes)

Comments

Comment by aarongertler on Is it possible to change user name? · 2020-07-02T10:17:53.941Z · score: 2 (1 votes) · EA · GW

I second this. Right now, we review all new users when they join the Forum, including their names. We'd also want to review all name changes if users could make them, which isn't too different from users asking us for name changes (though infrastructure allowing that would be nice to have someday).

For anyone who wants an example of how a username change could cause a problem: if you try to use "Will MacAskilI" (with a capital "I" in place of the second, lowercase "l") as a username, you'll be caught before your account is approved. So we're also wary of someone changing their name to that and then pretending to be Will for a bit.
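
For the curious, a check like this can be as simple as normalizing visually confusable characters before comparing a requested name against a list of protected names. Here's a minimal sketch -- hypothetical code, not the Forum's actual implementation:

```python
# Hypothetical lookalike-username check (illustrative only; not the Forum's real code).
# Map visually confusable characters to a canonical form, then compare the
# normalized name against a set of protected names.

CONFUSABLES = str.maketrans({
    "I": "l",  # capital I renders like lowercase L in many sans-serif fonts
    "1": "l",
    "0": "o",
    "|": "l",
})

PROTECTED_NAMES = {"will macaskill"}  # illustrative entry

def resembles_protected_name(requested: str) -> bool:
    """Return True if the requested username normalizes to a protected name."""
    # Translate before lowercasing so the capital-I mapping applies.
    normalized = requested.translate(CONFUSABLES).lower()
    return normalized in PROTECTED_NAMES

# "Will MacAskilI" (capital I) normalizes to "will macaskill" and gets flagged.
assert resembles_protected_name("Will MacAskilI")
```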

Comment by aarongertler on Slate Star Codex, EA, and self-reflection · 2020-07-02T10:13:24.947Z · score: 11 (12 votes) · EA · GW

The original post makes highly damaging claims, but it at least provides links to the sources that led the author to make said claims, allowing for in-depth engagement from commenters. One could argue that it breaks certain Forum rules (e.g. around accuracy), but I wouldn't call it "spam". 

This comment breaks Forum rules itself; it is unclear and unnecessarily rude. I appreciate that you feel strongly about the post's claims, but please refrain from referring to posts as "spam" or "trolling" unless you are at least willing to explain why you believe they are spammy or insincere.

Another way this could have been phrased: 

"I don't think the OP uses appropriate context when making serious, damaging claims about the motives and beliefs of another writer. (IDEALLY, MORE DETAIL AS TO WHY YOU THINK THE OP IS WRONG.) I don't think engaging with this author will be very productive."

Keeping conversation civil takes more time and effort, but it's really important if we want the Forum to avoid many of the standard pitfalls of online discourse.

Comment by aarongertler on Impacts of rational fiction? · 2020-07-01T08:05:51.778Z · score: 5 (3 votes) · EA · GW

I shared some thoughts on this topic on a similar thread posted last year. An excerpt: 

"The key is that you need to show people using an EA mindset (thinking about consequences and counterfactuals, remembering that everyone is valuable), even if they aren't working on EA causes. Show people characters who do incredible things and invite them to contemplate the virtues of those characters, and you don't need to hammer too hard on the philosophy."

...so I suppose I'd say that (1) is important, but mostly when blended with (2). Rational fiction isn't uniquely instructive; instead, it takes lessons a reader could learn in many different ways and drives them deeper into the reader's identity than other media might be able to. There's an element of "I didn't know people could be like this" and an element of "this is the kind of person I want to be." 

I'd guess the second element is more important, since most people have heard about actual moral heroes outside of fiction, but they may not have a sense of how such people think about/experience the world.

Comment by aarongertler on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-07-01T07:57:05.864Z · score: 2 (1 votes) · EA · GW

This is correct.

Comment by aarongertler on My amateur method for translations · 2020-06-30T07:10:26.585Z · score: 5 (3 votes) · EA · GW

Thanks for posting this resource! Questions:

  • Do you find DeepL to be better than Google Translate and other options overall, or just for English/Portuguese translation?
  • Would you be open to adding a bit of material about how your group has used translation and/or how you think it might be useful to other people doing EA work? Right now, this post just reads like a handy guide to a skill; before I move it out of the "personal blog" category, I'd want to see some note on its relevance to EA.

Comment by aarongertler on Slate Star Codex, EA, and self-reflection · 2020-06-29T15:31:21.836Z · score: 20 (9 votes) · EA · GW

Anonymous submitters on the EA Forum have supported ideas like racial IQ differences.

I found many responses to that survey odious for various reasons and share your concerns in that regard. It makes me uneasy to think that friends/fellow movement members may have said some of those things.

However, the post you linked features a survey that was reposted in quite a few different places. I wouldn't necessarily consider people who filled it out to be "submitters to the EA Forum." (For example, some of them seem to detest the EA movement in general, such that I hope they don't spend much time here for their own sake.) That said, it's impossible to tell for sure.

If the New York Times were to run a similar survey, I'd guess that many respondents would express similar views. But I don't think that would say much, if anything, about the community of people who regularly read the Times. I expect that people in the EA community overwhelmingly support racial equality and abhor white supremacy.

(Additional context: 75% of EA Survey respondents are on the political left or center-left; roughly 3% are right or center-right. That seems to make the community more politically left-leaning than the Yale student body, though the comparison is inexact.)

Comment by aarongertler on EA Forum feature suggestion thread · 2020-06-29T13:22:28.771Z · score: 2 (1 votes) · EA · GW

This post (which links to the calendar and other resources) has been pinned on the Community page for weeks. I could also pin it on the main page, but I have a much higher bar for that, because it means everyone will see it every time they come to the Forum (and it doesn't really fit the Frontpage category).

Comment by aarongertler on DontDoxScottAlexander.com - A Petition · 2020-06-29T13:20:36.046Z · score: 10 (15 votes) · EA · GW

I'll add some context to clarify to readers why this could be seen as relevant:

Scott Alexander has done a huge amount of writing about effective altruism, including a number of posts that many would regard as "classic" (or at least I do).

His most recent reader survey found that 13% of his readers self-identified as being "effective altruists" (this is from his summary of the survey; I don't know the original text of the question). That's about 1600 people.

Comment by aarongertler on aarongertler's Shortform · 2020-06-25T22:53:49.422Z · score: 13 (5 votes) · EA · GW

Excerpt from a Twitter thread about the Scott Alexander doxxing situation, but also about the power of online intellectual communities in general:

I found SlateStarCodex in 2015. Immediately afterwards, I got involved in some of the little online splinter communities that had developed after LessWrong started to disperse. I don't think it's exaggerating to say it saved my life.

I may have found my way on my own eventually, but the path was eased immensely by LW/SSC. In 2015 I was coming out of my only serious suicidal episode; I was in an unhappy marriage, in a town where I knew hardly anyone; I had failed out of my engineering program six months prior.

I had been peripherally aware of LW through a few fanfic pieces, and was directed to SSC via the LessWrong comments section.

It was the most intimidating community of people I had ever encountered -- I didn't think I could keep up. 

But eventually, I realized that not only was this the first group of people who made me feel like I had come *home,* but that it was also one of the most welcoming places I'd ever been (IRL or virtual).

I joined a Slack, joined "rationalist" tumblr, and made a few comments on LW and SSC. Within a few months, I had *friends*, some of whom I would eventually count among those I love the most.

This is a community that takes ideas seriously (even when it would be better for their sanity to disengage).

This is a community that thinks everyone who can engage with them in sincere good faith might have something useful to say.

This is a community that saw someone writing long, in-depth critiques on the material produced on or adjacent to LW/SSC...and decided that meant he was a friend. 

I have no prestigious credentials to speak of. I had no connections, no high-paying job, and I was a college dropout. I had no particular expertise, a lower-class background than many of the people I met, and a Red-Tribe-Evangelical upbringing -- and all I had to do, to make these new friends, was show up and join the conversation.

[...]

The "weakness" of the LessWrong/SSC community is also its strength: putting up with people they disagree with far longer than they have to. Of course terrible people slip through. They do in every group -- ours are just significantly more verbose.

But this is a community full of people who mostly just want to get things *right,* become *better people,* and turn over every single rock they see in the process of finding ways to be more correct -- not every person and not all the time, but more than I've seen anywhere else.

The transhumanist background that runs through the history of LW/SSC also means that trans people are more accepted here than anywhere else I've seen, because part of that ideological influence is the belief that everyone should be able to have the body they want.

It is not by accident that this loosely-associated cluster of bloggers, weird nerds, and Twitter shitposters was ahead of the game on coronavirus. It's because they were watching, and thinking, and paying attention and listening to things that sound crazy... just in case.

There is a 2-part lesson this community has held to, even while the rest of the world is forgetting it:

  • You can't prohibit dissent
  • It's sometimes worth it to engage someone when they have icky-sounding ideas

It was unpopular six months ago to think COVID might be a big deal; the SSC/LW diaspora paid attention anyway.

You can refuse to hang out with someone at a party. You can tell your friends they suck. But you can't prohibit them from speaking *merely because their ideas make you uncomfortable* and there is value in engaging with dissent, with ideas that are taboo in Current Year.

(I'm not leaving a link or username, as this person's Tweets are protected.)

Comment by aarongertler on How should we run the EA Forum Prize? · 2020-06-25T08:50:29.964Z · score: 3 (2 votes) · EA · GW

No user on the Forum has a "normal" vote worth more than 2 karma. 

(The full karma system is written out in this post.)
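
To illustrate what a capped vote-strength rule might look like (the threshold below is invented for illustration; the linked post has the Forum's actual numbers):

```python
# Hypothetical sketch of a capped "normal vote" strength rule.
# STRONG_USER_KARMA is an assumed threshold, not the Forum's real value.

STRONG_USER_KARMA = 1_000

def normal_vote_strength(user_karma: int) -> int:
    """Normal votes are worth 1 karma, or 2 for high-karma users -- never more."""
    return 2 if user_karma >= STRONG_USER_KARMA else 1
```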

Comment by aarongertler on EA Forum feature suggestion thread · 2020-06-25T06:27:32.542Z · score: 4 (2 votes) · EA · GW

On (2), we've considered adding a summary field in the editor, but I don't think we'd make it mandatory, except perhaps for posts above a fairly large character count. Whether or not we eventually implement that, I encourage anyone reading this to include summaries in their long posts!

Thanks for providing the Elsevier link -- I could imagine us linking to that as an example of how one might compose a summary.

Comment by aarongertler on EA Forum feature suggestion thread · 2020-06-25T06:17:20.930Z · score: 4 (3 votes) · EA · GW

I used to do these, but I think I phased them out when Shortform posts came along, as those appeared to serve a similar role (sharing things that you don't think merit a full post).

As it turns out, while Shortform has been useful, I think it has a different feel from open threads, so bringing them back seems like a good idea. I or another moderator may start posting them soon.

Comment by aarongertler on Longtermism ⋂ Twitter · 2020-06-23T08:19:57.752Z · score: 4 (2 votes) · EA · GW

I agree! I devoted a big chunk of my recent EAGx presentation about posting on the Forum to Shortform posts, which exist largely to encourage brief content/"quick takes".

Comment by aarongertler on EA Forum feature suggestion thread · 2020-06-23T08:08:55.050Z · score: 2 (1 votes) · EA · GW

By the first suggestion, do you mean having each individual tag's page include a link to the list of all tags?

Comment by aarongertler on EA Forum feature suggestion thread · 2020-06-23T08:03:03.277Z · score: 4 (3 votes) · EA · GW

In case you haven't seen it, I created the Facebook group Effective Altruism Polls for this use case. Response rates are generally pretty high!

Comment by aarongertler on Fewer but poorer: Benevolent partiality in prosocial preferences · 2020-06-23T07:38:24.265Z · score: 11 (4 votes) · EA · GW

This was an excellent research summary! I love seeing people write up scientific studies from outside the EA-sphere (this one had some EA links, but I wasn't familiar with either author). 

This sort of thing gives Forum readers a better knowledge base on which to build theories and models; even if any individual study might be flawed, I'm still excited to see more of them get written up, since I'd hope that any given study sheds at least a bit of light on net.

Comment by aarongertler on KR's Shortform · 2020-06-23T07:34:04.839Z · score: 2 (1 votes) · EA · GW

I found this interesting, and I think it would be worth expanding into a full post if you felt like it! 

I don't think you'd need more content: just a few more paragraph breaks, maybe a brief summary, and maybe a few questions to guide responses. If you have questions you'd want readers to tackle, consider including them as comments after the post.

Comment by aarongertler on aarongertler's Shortform · 2020-06-23T07:33:42.858Z · score: 6 (4 votes) · EA · GW

I recommend placing questions for readers in comments after your posts.

If you want people to discuss/provide feedback on something you've written, it helps to let them know what types of discussion/feedback you are looking for.

If you do this through a bunch of scattered questions/notes in your post, any would-be respondent has to either remember what you wanted or read through the post again after they've finished.

If you do this with a list of questions at the end of the post, respondents will remember them better, but will still have to quote the right question in each response. They might also respond to a bunch of questions with a single comment, creating an awkward multi-threaded conversation.

If you do this with a set of comments on your own post -- one for each question/request -- you let respondents easily see what you want and discuss each point separately. No awkward multi-threading for you! This seems like the best method to me in most cases.

Comment by aarongertler on How to Fix Private Prisons and Immigration · 2020-06-23T07:24:38.328Z · score: 5 (4 votes) · EA · GW

I appreciate the effort that went into this post, and the use of actual math to describe the benefits of the proposed system. I also really like that you took feedback from the comments and edited the post -- not enough people do that!

That said, this reads to me as a proposal for massive structural change with no discussion of moderate/feasible reforms that could get us closer to a system with some of these benefits. 

As a moderator, I think posts like this are fine. As a reader of the Forum, I prefer that political discussion on the Forum either:

a) Note clearly that a post is meant to be speculative (e.g. MacAskill's Age-Weighted Voting), or

b) Engage with current political systems by discussing proposals that have existing support, or that at least have some chance of being implemented piece-by-piece without an enormous overhaul of an entire societal institution (I'd say the same had this piece been about education, military spending, etc.)

I think Eliezer Yudkowsky's recent piece on police reform does this well; some of his ideas are more realistic than others, but quite a few could reasonably be proposed as legislation within weeks if a congressperson were to write a bill. And while he doesn't cite the people who side with him on some of these proposals, I'm aware that those people exist.

****

I'd have been more convinced by the post if it referred to any existing policy which mirrors any aspect of these proposals. Does any country in the world create estimates of individual citizens' cost to social services? Does any country in the world have a system where companies can bid for the right to collect individuals' future tax revenue? Has anyone else (politicians, researchers, etc.) ever argued for a system resembling this one?

(I wouldn't be surprised if the answer to that last question were "yes," but I don't know who's made those arguments or whether they ever made any legislative progress.)

In one comment, you note:

I think the system is something to carefully work towards.

I'd have loved to hear thoughts in the post on how we might "carefully work towards" a system that works so differently from any that (AFAIK) exist in the world today. What intermediate steps get us closer to this system without creating a full transition?

Comment by aarongertler on The EA movement is neglecting physical goods · 2020-06-23T06:48:30.319Z · score: 4 (3 votes) · EA · GW

That's some really broad experience! 

Looking for operations roles sounds like a good thing to be doing. Outside of that, you might consider:

* Joining the EA Coronavirus Discussion Facebook group (there may be discussion of logistics there; I know of a few people in EA who have worked on COVID projects with some physical component)

* Writing about your skills on the EA Volunteering Facebook group (pretty sparse right now) to see if anyone has suggestions

Comment by aarongertler on evelynciara's Shortform · 2020-06-23T06:42:25.611Z · score: 3 (2 votes) · EA · GW

Epistemic status: Almost entirely opinion; I'd love to hear counterexamples.

When I hear proposals related to instilling certain values widely throughout a population (or preventing the instillation of certain values), I'm always inherently skeptical. I'm not aware of many cases where something like this worked well, at least in a region as large, sophisticated, and polarized as the United States. 

You could point to civil rights campaigns, which have generally been successful over long periods of time, but those had the advantage of being run mostly by people who were personally affected (= lots of energy for activism, lots of people "inherently" supporting the movement in a deep and personal way). 

If you look at other movements that transformed some part of the U.S. (e.g. bioethics or the conservative legal movement, as seen in Open Phil's case studies of early field growth), you see narrow targeting of influential people rather than public advocacy. 

Rather than thinking about "countering anti-science" more generally, why not focus on specific policies with scientific support? Fighting generically for "science" seems less compelling than pushing for one specific scientific idea ("masks work," "housing deregulation will lower rents"), and I can think of a lot of cases where scientific ideas won the day in some democratic context.

This isn't to say that public science advocacy is pointless; you can reach a lot of people by doing that. But I don't think the people you reach are likely to "matter" much unless they actually campaign for some specific outcome (e.g. I wouldn't expect a scientist to swing many votes in a national election, but maybe they could push some funding toward an advocacy group for a beneficial policy).

****

One other note: I ran a quick search for polls on public trust in science, but all I found was a piece from Gallup on public trust in medical advice.

Putting that aside, I'd still guess that a large majority of Americans would claim to be "pro-science" and to "trust science," even if many of those people actually endorse minority scientific claims (e.g. "X scientists say climate change isn't a problem"). But I could be overestimating the extent to which people see "science" as a generally positive applause light.

Comment by aarongertler on KR's Shortform · 2020-06-23T06:21:56.860Z · score: 2 (1 votes) · EA · GW

Many years ago, Eliezer Yudkowsky shared a short story I wrote (related to AI sentience) with his Facebook followers. The story isn't great -- I bring it up here only as an example of people being interested in these questions.

Comment by aarongertler on EA considerations regarding increasing political polarization · 2020-06-22T15:24:13.798Z · score: 2 (1 votes) · EA · GW

Issa is correct that comments from new users are counted but hidden (until a moderator approves those users). Deleted comments also show up in the comment count for a brief time, though they get removed from the count eventually (otherwise, spam would create many more of the "ghost comments" that are currently visible).
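
A toy model of how the reported count and the visible count can diverge (hypothetical code, just to illustrate the behavior described above):

```python
from dataclasses import dataclass

# Hypothetical model, not the Forum's code: comments can be counted
# without being visible, and purged comments eventually leave the count.

@dataclass
class Comment:
    author_approved: bool = True    # False until a moderator reviews the new user
    recently_deleted: bool = False  # deleted comments linger in the count briefly
    purged: bool = False            # eventually dropped from the count entirely

def reported_count(comments: list) -> int:
    """What the post header shows: includes hidden and recently deleted comments."""
    return sum(1 for c in comments if not c.purged)

def visible_count(comments: list) -> int:
    """What readers actually see on the page."""
    return sum(
        1 for c in comments
        if c.author_approved and not c.recently_deleted and not c.purged
    )
```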

Comment by aarongertler on The EA movement is neglecting physical goods · 2020-06-22T14:32:12.267Z · score: 5 (4 votes) · EA · GW

The thought this post generated as I read it:

Currently, the EA community is pretty good at noticing/predicting problems before they happen. There may be future cases where most of the world is caught flat-footed by something we had already begun to prepare for.

Sometimes, the problems we notice will include some physical component -- that is, our ability to solve them will be bottlenecked by physical manufacturing capacity (e.g. masks for COVID). The more people in the community have some sense of how manufacturing works, the more likely it is that we'll be able to start useful projects that resolve these bottlenecks more quickly. 

We'll also have fewer ideas that are doomed to fail because we didn't understand this topic. I'll quote a Facebook comment from a community member with healthcare experience (though I won't link to the comment, since I'm not sure how large an audience they wanted):

[One unpromising idea that some EAs went for is] trying to get people to work on open-source ventilators, when actually many existing licensed manufacturers can scale up production, but just haven't seen huge demand, as ventilators are expensive and hospitals need to get funding to purchase more, and they need freight providers to get them to the right places, and get staff and PPE to operate them. Also, it's easier for related manufacturers (e.g. in the auto industry) to switch to producing ventilators if the need arises. There are many other reasons this isn't the bottleneck too. In this case I think it just annoys me that people haven't done some basic checks on their assumptions, or checked with anyone about what the bottlenecks really are.

Comment by aarongertler on Who should / is going to win 2020 FLI award 2020? · 2020-06-17T09:33:16.668Z · score: 3 (2 votes) · EA · GW

In the case of Petrov, I'm under the impression (based on a documentary about him) that he probably didn't have much money, and that the prize had an element of "help a hero live in comfort." This isn't an impact-focused reason to give money, but does play into the "unsung hero" element (by creating the impression of the hero finally being "sung"/rewarded).

It's also plausible to me that the prize could have been funded by a donor who really wanted to give out cash rewards, and just worked with FLI to implement their idea (but I have no idea whether this is true and I don't think it's likely).

Comment by aarongertler on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-15T19:27:53.145Z · score: 2 (1 votes) · EA · GW

I liked that post when it came out, but I had forgotten about it in the ensuing year-plus. Maybe you could link to this post when you make situational-awareness crossposts?

Comment by aarongertler on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-15T19:06:47.460Z · score: 2 (1 votes) · EA · GW

I should also mention that a post like this doesn't need to have expected-value calculations attached, or anything in that level of detail; it's just good to have a couple of sentences along the lines of "here's why I posted this, and why I think it demonstrates a chance to make an effective donation // take other effective actions," even if no math is involved.

(This kind of explanation seems more important the further removed a post is from "standard" EA content. When I crossposted Open Phil's 2019 year-in-review post, I didn't include a summary, because the material seemed to have very clear relevance for people who want to keep up with the EA community.)

Comment by aarongertler on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-15T18:56:59.351Z · score: 2 (1 votes) · EA · GW

Thanks for sharing the last link, which I think provides useful context (that Open Philanthropy's funder has a history of donating to partisan political campaigns).

The very last line of the Vox interview is the only one I saw that suggests a concrete action a person could take to reduce the chances of an electoral crisis (I assume that trying to get relevant laws changed within five months would be really hard):

The only real way to avoid this is to make sure we don’t enter into this scenario, and the best way to do that is to ensure that he loses decisively in November. That’s the best guarantee. That’s the best way that we can secure the future of a healthy constitutional democracy.

Given these points, though, the upshot of this post is effectively an argument that supporting Biden's campaign should be thought of as an EA cause area, because even though it's very hard to tell what impact political donations have, an unclear election result runs the risk of triggering a civil war, which is bad enough that even hard-to-quantify forms of risk reduction are very valuable here? With some bonus value because Biden donations mean a candidate with mostly better policy ideas is more likely to win (though the article doesn't really go into policy differences)?

Does that seem like the right takeaway to you? Did you mean to make a different point about the value of changing electoral laws?

(I realize that the above is me making a lot of assumptions, but that's another reason why it's helpful to summarize what you found valuable/actionable in a given crosspost; it saves readers from having to work through all of the implications themselves.)

Comment by aarongertler on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-14T20:41:04.530Z · score: 2 (1 votes) · EA · GW

Milan: I've categorized the post as "personal blog" for now. Can you say any more about how this relates to EA, or how readers might be able to take action if they want to find a way to help? 

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-12T21:53:25.456Z · score: 4 (2 votes) · EA · GW

If you really think GiveWell or Open Philanthropy is missing out on a lot of value by failing to pursue a certain strategy, it seems like you should aim to make the most convincing case you can for their sake!

(Perhaps it would be safer to write a post specifically about this topic, then send it to them; that way, even if there's no reply, you at least have the post and can get feedback from other people.)

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-12T05:38:34.461Z · score: 2 (1 votes) · EA · GW

I would say "comparing the crowd's accuracy to reality" would be best, but "future GiveWell evaluations" is another reasonable option. 

Consider Metaculus's record vs any other paid experts.

Metaculus produces world class answers off a user base of 12,000.

I don't know what Metaculus's record is against "other paid experts," and I expect it would depend on which experts and which topic was up for prediction. I think the average researcher at GiveWell is probably much, much better at probabilistic reasoning than the average pundit or academic, because GiveWell's application process tests this skill and working at GiveWell requires that the skill be used frequently.

I also don't know where your claim that "Metaculus produces world-class answers" comes from. Could you link to some evidence? (In general, a lot of your comments make substantial claims without links or citations, which can make it hard to engage with them.)

Open Philanthropy has contracted with Good Judgment Inc. for COVID forecasting, so this idea is definitely on the organization's radar (and by extension, GiveWell's). Have you tried asking them why they don't ask questions on Metaculus or make more use of crowdsourcing in general? I'm sure they'd have a better explanation for you than anything I could hypothesize :-)

Comment by aarongertler on Improving Giving with Nudges · 2020-06-11T09:11:39.464Z · score: 2 (1 votes) · EA · GW

This isn't a perfect match for your request, but the UK Behavioural Insights Team has done a lot of nudge-style work on charitable giving. This report links to a few of their results, but I think they've done quite a bit more beyond that.

As for request (A) in your post: is there a certain size of company you'd need in order to run the study you are considering? I can think of at least one person who might be interested, but I think the company they run might be too small.

Comment by aarongertler on Will protests lead to thousands of coronavirus deaths? · 2020-06-11T08:49:00.034Z · score: 15 (4 votes) · EA · GW

I don't think this is an accurate portrayal of what Dale was trying to say.

I don't see them actively recommending a particular policy in the post -- just noting that some studies of repressive behavior find that it may lead to a certain outcome. It can be true that repression sometimes quells riots while also being true that it has many other negative outcomes and should clearly be avoided. (Though I didn't see Dale say that, either, and I don't want to put words in their mouth.)

Of course, the vague term "repression" and the differing social context of the examples Dale cited mean that blanket statements like "literature suggests that repression is effective" aren't very useful, and I wish they'd acknowledged that more clearly in their post, especially given the awful consequences of policies like "harsher prison sentences for a lot of people."

*****

As for the claim that "justice" will clear up protests quickly: leaving aside the question of which specific demands will have a positive impact on their own merits (likely many), have we seen enough demands granted so far to have a sense of what usually happens afterward vis-à-vis public protest? Especially in cases where actually following through on promises of change will take a long time?

The clearest example of responsiveness to protest I can recall (haven't been following the topic too closely) was action taken by the Minneapolis City Council to ban certain restraint practices and explore "dismantling" the police department. Did either action lead directly to a reduction in public protest?

Comment by aarongertler on EA and tackling racism · 2020-06-11T02:29:10.466Z · score: 12 (5 votes) · EA · GW

But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society which is built upon exploitation, I don't really think the EA community does better than any other randomly selected group of people who are from a similar demographic (let's say, randomly selected people who went to prestigious universities).

What are some of the biases you're thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?

My impression of the EA bubble is that it leans left-libertarian; I've seen a lot of discussion of criminal justice reform and issues with policing there (compared to e.g. the parts of the mainstream media dominated by people from prestigious universities). 

I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias rather than as a focus on the importance of certain outcomes (e.g. average living standards vs. higher equity within society).

Comment by aarongertler on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T03:54:50.065Z · score: 2 (1 votes) · EA · GW

I also think the essays are exciting and have a good track record of convincing people. And my goal with the Handbook isn't to avoid jargon altogether. To some extent, though, I'm trying to pack a lot of points into a smallish space, which isn't how Eliezer's style typically works out. Were the essay making the same points at half the length, I think it would be a better candidate.

Maybe I'll try to produce an edited version at some point (with fewer digressions, and e.g. noting that ego depletion failed to replicate in a "Fuzzies and Utilons" footnote). But the more edits happen in a piece, the longer I expect it to take to get approval, especially from someone who doesn't have much time to spare — another trade-off I had to consider when selecting pieces (I don't think anything in the current series had more than a paragraph removed, unless it were printed as an excerpt).

I don't want to push you to spend a lot of time on this, but if you're game, would you want to suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay? This won't be necessary for all readers, but it's something I've been aiming for.

 

I do expect that further material for this project will contain a lot more jargon and complexity, because it won't be explicitly pitched as an introduction to the basic concepts of EA (and you really can't get far in e.g. global development without digging into economics, or X-risk without getting into topics like "corrigibility").

 

A note on the Thiel point: As far as I recall, his thinking on startups became a popular phenomenon only after Blake Masters published notes on his class, though I don't know whether the notes did much to make Thiel's thinking more clear (maybe they were just the first widely-available source of that thinking).

Comment by aarongertler on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T03:42:49.946Z · score: 2 (1 votes) · EA · GW

I think either way, if they're going to engage seriously with intellectual thought in the modern world they need to take responsibility and learn to engage with writing about the world which doesn't expect that there's an interventionist aligned superintelligence.

If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

Sometimes, Eliezerisms are great; I enjoy almost everything he's ever written. But I think we'd both agree that his writing style is a miss for a good number of people, including many who have made great contributions to the EA movement. Perhaps the chance of catching people especially well makes his essays the highest-EV option, but there are a lot of other great writers who have tackled these topics.

(There's also the trickiness of having CEA's name attached to this, which means that — however many disclaimers we may attach — there will be readers who assume it's an important part of EA to be atheist, or to support cryonics, etc.)

To clarify, I wouldn't expect an essay like this to turn off most religious readers, or even to completely alienate any one person; it's just got a few slings and arrows that I think can be avoided without compromising on quality.

Of course, there are many bits of Eliezer that I'd be glad to excerpt, including from this essay; if the excerpt sections in this series get more material added to them, I might be interested in something like this:

What can a twelfth-century peasant do to save themselves from annihilation?  Nothing.  Nature's little challenges aren't always fair.  When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die.  That's how it is for people, and it isn't any different for planets.  Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against:  Absolute, utter, exceptionless neutrality.

Comment by aarongertler on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-09T23:14:16.408Z · score: 12 (5 votes) · EA · GW

I'd recommend labeling your titles using the name of this series of posts, rather than only with numbers. For example:

Moral anti-realism #3: Against Irreducible Normativity

Starting titles with a numeral looks a bit wonky whenever the Forum has a list of posts displayed.

Also, I'm really happy to see you keep releasing these posts! I look forward to when we release the public version of our sequences feature so that they can be saved in that format.

Comment by aarongertler on EA Forum Prize: Winners for April 2020 · 2020-06-09T21:47:23.728Z · score: 4 (2 votes) · EA · GW

That's a good point; I should survey past winners to get their views on this, too (though I expect people to report that they were influenced a bit more than was actually the case).

Users who were more active on the Forum were more likely to endorse the Prize, but that seems like it results partly from their being more aware of the Prize in general/reading more of the posts. Given that the posts are pinned for at least a few days each month, and shared with the biggest EA Facebook group, awareness/readership numbers were lower than I'd have hoped for.

Thanks for the feedback, and for giving me an idea on data to collect!

Comment by aarongertler on Will protests lead to thousands of coronavirus deaths? · 2020-06-09T21:30:52.153Z · score: 4 (2 votes) · EA · GW

Negotiation is certainly possible. So, one might lay additional covid deaths at the step of a government which failed to negotiate.

Even if it isn't difficult to cast blame on one's government, this doesn't mean much for the people who have died. It also seems unlikely that governments are going to feel much additional pressure from deaths for which they bear only indirect responsibility.

I don't have any developed opinion on the original post, but I did want to take mild issue with the idea of thinking about deaths as a bargaining tool. (I'm sure you meant for this to be a neutral/factual point about negotiating, but it's hard for me to shake off the devastating impact of additional deaths.)

Comment by aarongertler on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-09T21:25:51.221Z · score: 1 (4 votes) · EA · GW

Thanks for this feedback! 

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents; trying to work around those bits hurt the flow of the bits I most wanted to use. But both were important for my own EA journey as well, and I'll keep thinking about ways to fit in some of those concepts (maybe through other pieces that reference them with fewer tangents).

As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there). As an addition to this section, I think it makes a lot of the same points as the Singer and Soares pieces, though it might be better than one or the other of those.

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-09T20:57:26.798Z · score: 15 (4 votes) · EA · GW

As far as I can tell, Simon's allegations are either unsubstantiated (no proof offered, or even claims specific enough to verify) or seem irrelevant to GiveWell's work (why do I care whether their CEO makes more or less money than a Berkeley economics professor?).

The closest thing I can find to a verifiable claim is his claim that early assessments of GiveWell were either written by interns/volunteers or were "neutral to negative." 

Going off of this page, I'd judge independent evaluations as "neutral to positive" (I don't see any evaluations that seem more negative than positive, though I didn't read every review in full). The evaluation with the most criticism (of those I read in full) was written by a GiveWell volunteer (Pierre Thompson).

As for claims like "we sent some of their materials to faculty who actually do charity evaluations": Who are these faculty? Where can I read their evaluations? Simon summarizes these experts' judgment of GiveWell's work as "book reports": What does that mean? What did GiveWell get wrong? Were their issues based on disagreement around contentious points, or on obvious mistakes that anyone should have caught?

(And of course, it sounds like this is almost all in reference to GiveWell circa ~2012, which has limited bearing on the very different set of recommendations they make today.)

*****

Then there's the claim that GiveWell takes a fraction of donors' money when they regrant to charities, which is false (unless you check an unchecked-by-default box that adds a small extra donation for GiveWell's operations). Maybe things were different eight years ago?

As for the claim that GiveWell supported the Singularity Institute at some point: Holden Karnofsky wrote a long post criticizing the org and explaining why GiveWell had no plans to fund it. If Simon was murky on that detail (as well as on details about SI's missing money, which weren't that hard for me to find), that reduces my credence in his various unsupported claims.

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-09T20:33:32.166Z · score: 2 (1 votes) · EA · GW

I don't know what you mean by "log in"; you can give feedback on their blog posts just by leaving a name + email address, and their pages don't have comment sections to log into.

By "suggestions on the text of pages," do you mean suggestions other people can view? That seems like it would be a technical challenge, and I'd be surprised if it brought in much additional useful commentary compared to the status quo (that is, sending an email to GiveWell if you have a suggestion).

Can you think of any websites that have implemented "suggestions on the text of pages" in a way that led to their content being better, outside of wikis?

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-09T20:28:20.539Z · score: 3 (2 votes) · EA · GW

Audience size is a big challenge here. There might be a few thousand people who are interested enough in EA to participate in the community at all (beyond donating to charity or joining an occasional dinner with their university group). Of those, only a fraction will be interested in contributing to crowdsourced intellectual work. 

By contrast, StackOverflow has a potential audience of millions, and Wikipedia's is larger still. And yet, the most active 1% of editors might account for... half, maybe, of the total content on those sites? (Couldn't quickly find reliable numbers.) 

If we extrapolate to the EA community, our most active 1% of contributors would be roughly 10 people (1% of perhaps 1,000 likely contributors), and I'm guessing those people already find EA-focused ways to spend their time (though I can't say how those uses compare to creating content on a website like the one you proposed).

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-09T20:23:31.903Z · score: 2 (1 votes) · EA · GW

Can you point to any examples of GiveWell numbers that you think a crowd would have a good chance of answering more accurately? A lot of the figures on the sheets either come from deep research/literature reviews or from subjective moral evaluation, both of which seem to resist crowdsourcing.

If you want to see what forecasting might look like around GiveWell-ish questions, you could reach out to the team at Metaculus and suggest they include some on their platform. They are, to my knowledge, the only EA-adjacent forecasting platform with a good-sized userbase. 

Overall, the amount of community participation in similar projects has historically been pretty low (e.g. no "EA wiki" has ever gotten mass participation going), and I think you'd have to find a way to change that before you made substantial progress with a crowdsourcing platform.

Comment by aarongertler on External evaluation of GiveWell's research · 2020-06-09T20:19:12.684Z · score: 3 (2 votes) · EA · GW

I actually think this is a better way to do comments in most cases!

Comment by aarongertler on It's OK To Also Donate To Non-EA Causes · 2020-06-03T23:55:15.985Z · score: 8 (2 votes) · EA · GW

I didn't read the comment this way at all; Ramiro didn't endorse Open Phil's suggestions or indicate that they had supported those charities themselves.

I think it's very healthy to ask people about other charities they may have considered if they discuss their donation choices on the Forum, in case those people have past analysis they'd be willing to share. I suppose a better phrasing of the question might have been "did you consider any other charities in this space?"

Comment by aarongertler on Biggest Biosecurity Threat? Antibiotic Resistance · 2020-06-02T11:44:56.309Z · score: 5 (3 votes) · EA · GW

This was a really good first comment! Welcome to the Forum, and I hope you continue to read and respond to posts. (I'm the lead moderator here.)

Comment by aarongertler on Climate Change Is Neglected By EA · 2020-06-02T10:51:51.248Z · score: 10 (3 votes) · EA · GW

Nothing you've written here sounds like anything I've heard anyone say in the context of a serious EA discussion. Are there any examples you could link to of people complaining about causes being "too mainstream" or using religious language to discuss X-risk prevention?

The arguments you seem to be referring to with these points (that it's hard to make marginal impact in crowded areas, and that it's good to work toward futures where more people are alive and flourishing) rely on a lot of careful economic and moral reasoning about the real world, and I think this comment doesn't really acknowledge the work that goes into cause prioritization. 

But if you see a lot of these weaker (hipster/religious) arguments outside of mainstream discussion (e.g. maybe lots of EA Facebook groups are full of posts like this), I'd be interested to see examples.

Comment by aarongertler on *updated* CTA: Food Systems Handbook launch event · 2020-06-02T10:05:45.676Z · score: 3 (2 votes) · EA · GW

Together with the team behind the successful Coronavirus Tech Handbook...

What makes you say the Coronavirus Tech Handbook has been successful? I assume it's been useful to many people, but I'm interested in specifics: who's made use of it, what projects have been helped by it, etc.

Comment by aarongertler on Interested in building Effective Altruism Vietnam? · 2020-06-02T09:52:06.183Z · score: 2 (1 votes) · EA · GW

Could you share a link to the Google Site? I'd be curious to see it. (Have you translated any English-language EA material into Vietnamese?)