Posts

Open Thread: May 2021 2021-05-04T08:26:37.503Z
EA Organization Updates: April 2021 2021-04-28T10:52:57.297Z
EA Forum Prize: Winners for February 2021 2021-04-27T09:32:08.732Z
(Closed) Seeking (paid) volunteers to test introductory EA content 2021-04-22T12:10:23.833Z
New? Start here! (Useful links) 2021-04-16T08:07:29.970Z
What material should we cross-post for the Forum's archives? 2021-04-15T10:06:34.186Z
Your World, Better: Global progress for middle schoolers 2021-04-11T21:57:58.572Z
Reality has a surprising amount of detail 2021-04-11T21:41:00.387Z
The EA Forum Editing Festival has begun! 2021-04-07T10:28:10.155Z
Open Thread: April 2021 2021-04-06T19:42:42.886Z
EA Forum Prize: Winners for January 2021 2021-04-02T02:58:43.260Z
EA Organization Updates: March 2021 2021-04-01T02:42:52.558Z
Opportunity for EA orgs: $5k/year in ETH (tech setup required) 2021-03-15T04:11:41.553Z
Progress Open Thread: March 2021 2021-03-03T08:24:27.482Z
Open and Welcome Thread: March 2021 2021-03-03T08:23:53.275Z
Our plans for hosting an EA wiki on the Forum 2021-03-02T12:45:15.469Z
EA Organization Updates: January 2021 2021-02-28T21:30:04.557Z
Running an AMA on the EA Forum 2021-02-18T01:44:31.077Z
EA Forum Prize: Winners for December 2020 2021-02-16T08:46:30.444Z
Allocating Global Aid to Maximize Utility 2021-02-15T06:42:22.410Z
Many (many!) charities are too small to measure their own impact 2021-02-15T06:39:10.320Z
Progress Open Thread: February 2021 2021-02-02T11:46:47.104Z
Open and Welcome Thread: February 2021 2021-02-02T11:46:08.726Z
80,000 Hours: Where's the best place to volunteer? 2021-01-25T06:57:38.678Z
No, it's not the incentives — it's you 2021-01-25T06:50:01.504Z
EA Organization Updates: December 2020 2021-01-22T11:09:46.097Z
Global Priorities Institute: Research Agenda 2021-01-20T20:09:48.199Z
Hilary Greaves: The collectivist critique of the EA movement 2021-01-19T13:18:47.137Z
Seeking part-time contractor for Facebook group support 2021-01-15T11:20:33.515Z
The ten most-viewed posts of 2020 2021-01-13T12:21:59.412Z
EA Forum Prize: Winners for November 2020 2021-01-11T07:45:11.386Z
EA Course Syllabus: David Manley's "Changing the World" 2021-01-06T04:40:23.641Z
Open Philanthropy: Our Approach to Recruiting a Strong Team 2021-01-05T23:28:48.758Z
Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts) 2021-01-04T21:35:24.874Z
Kelsey Piper on "The Life You Can Save" 2021-01-04T20:58:34.025Z
Progress Open Thread: January 2021 2021-01-02T11:26:20.710Z
Open and Welcome Thread: January 2021 2021-01-02T11:25:09.605Z
The Conflicted Omnivore 2020-12-30T11:15:31.850Z
Open Philanthropy: 2020 Allocation to GiveWell Top Charities 2020-12-29T05:00:40.957Z
Requests on the Forum 2020-12-22T10:42:51.574Z
EA Organization Updates: November 2020 2020-12-21T12:01:02.813Z
EA Forum Writing Workshop on Monday 2020-12-20T16:18:32.056Z
Open Philanthropy's AI governance grantmaking (so far) 2020-12-17T12:00:05.507Z
EA Forum Prize: Winners for October 2020 2020-12-11T00:40:41.707Z
Forum update: New features (December 2020) 2020-12-04T06:45:30.607Z
Open and Welcome Thread: December 2020 2020-12-02T12:23:04.601Z
Progress Open Thread: December 2020 2020-12-02T12:21:32.935Z
Open Philanthropy Staff: Suggestions for Individual Donors (2020) 2020-12-02T12:08:09.354Z
Sharing the World with Digital Minds 2020-12-01T08:00:00.000Z
EA Organization Updates: October 2020 2020-11-22T20:37:13.225Z

Comments

Comment by Aaron Gertler (aarongertler) on Open Thread: May 2021 · 2021-05-08T09:02:45.954Z · EA · GW

I think you meant to add a different link.

Comment by Aaron Gertler (aarongertler) on What is your perspective on the ongoing farmer protests and strikes in India over the dramatic changes the government has introduced into the economy? · 2021-05-05T07:35:54.187Z · EA · GW

What do you mean by a "perspective"?

I think you'll be more likely to get useful responses to this question if you ask some sub-questions related to what interests you about the protests, or share your own "perspective" so that people have something to respond to.

Comment by Aaron Gertler (aarongertler) on Could GiveWell create a cryptocurrency to raise a lot of money? · 2021-04-30T08:04:08.389Z · EA · GW

There are many reasons this wouldn't be a good idea, some of which you identified. The first two:

  1. It's completely separate from GiveWell's mission and brand; it has nothing to do with their work
  2. It has nothing to do with effective altruism, and runs counter to many of the things EA tries to promote (we're interested in careful reasoning, long-term thinking, and real-world impact; your use of terms like "pyramid scheme" and "peak euphoria" shows why this side of the crypto market doesn't represent those ideals)

In general, the EA movement aims to be exceedingly moral, transparent, and trustworthy, and to hold individuals/organizations in the movement to high standards. Creating speculative investment vehicles in order to take money from people who make foolish decisions just doesn't fit EA at all.

While I don't like this idea, I should emphasize that you didn't do the wrong thing by sharing it here (rather than e.g. on Reddit, or by trying to implement it yourself without asking anyone first). It's not a bad thing to consider unusual fundraising ideas: projects like EA Giving Tuesday have been quite successful despite not looking like standard fundraising. But if the outline of your idea sounds at all like "manipulating people into supporting charity" or "adding overhead cost and risk to a standard fundraiser" (which seem to be the two options for an "EA coin"), that's not a good sign.

For what it's worth, awareness of EA in the crypto community is quite high, largely thanks to the charitable giving of longtime community member Sam Bankman-Fried (founder of FTX).

Comment by Aaron Gertler (aarongertler) on EA Forum Prize: Winners for February 2021 · 2021-04-27T12:24:03.302Z · EA · GW

I see that someone strong-downvoted this post, which is unusual for the prize announcement posts. To that voter, in case they see this: if you have any specific feedback, I'd be really grateful to hear it.

I know that my post summaries don't always capture the winners' best features, and if that's the problem, I'm open to making edits if someone points out "hey, you totally overlooked feature X of post Y" or "hey, this thing you said about post Y isn't accurate, fix it".

Comment by aarongertler on [deleted post] 2021-04-26T22:04:18.028Z

To what extent is this tag redundant with the Biosecurity tag?

I don't mean to say that those two tags cover the same thing, but are there posts that would get this tag and not the "Biosecurity" tag? Or vice-versa?

I suppose the first category could be "posts that describe risks but not strategies to address them", and the second could be "posts that describe biosecurity work against non-catastrophic threats". Not sure whether those edge cases merit having two separate tags.

Comment by Aaron Gertler (aarongertler) on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-23T13:58:07.421Z · EA · GW

It seems like 80K wants to feature some neartermist content in their next collection, but I didn't object to the current collection for the same reason I don't object to e.g. pages on Giving What We Can's website that focus heavily on global development (example): it's good for EA-branded content to be fairly balanced on the whole, but that doesn't mean that every individual project has to be balanced.

Some examples of what I mean:

  • If EA Global had a theme like "economic growth" for one conference and 1/3 of the talks were about that, I think that could be pretty interesting, even if it wasn't representative of community members' priorities as a whole. 
  • Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.

It may have been better for 80K to refer to their collection as an "introduction to global priorities" or an "introduction to longtermism" or something like that, but I also think it's perfectly legitimate to use the term "EA: An Introduction". Giving What We Can talks about "EA" but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world's perception of what "EA" is, and I think it's fine for them to be a bit different.

(I'm more concerned about balance in cases where something could dominate the world's perception of what EA is — I'd have been concerned if Doing Good Better had never mentioned animal welfare. I don't think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)

Comment by Aaron Gertler (aarongertler) on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-23T13:44:20.559Z · EA · GW

Most of what you've written about the longtermist shift seems true to me, but I'd like to raise a couple of minor points:

The EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list

Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the "Learn More" link was more prominent — it got over 10x as many clicks as every article on that list combined). The "Learn More" link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site's "Resources" page was also much more popular than the homepage reading list, and always linked to the global poverty article.

So while that article was mistakenly left out of one of EA.org's three lists of articles, that list was by far the least important of the three, based on traffic numbers.

EA Globals highlighted longtermist content, etc. 

Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn't be surprised if this change happened, but I don't remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have had a very balanced mix of content.

Comment by Aaron Gertler (aarongertler) on What posts do you want someone to write? · 2021-04-23T08:57:43.909Z · EA · GW

I'm not aware of anything recent that was explicitly pro-"give now". There are some semi-recent posts that weigh both sides of the debate but draw "it depends"-type conclusions. I'd be interested to see your take!

You can see posts on this topic collected in the "timing of philanthropy" tag.

Comment by aarongertler on [deleted post] 2021-04-21T13:13:58.744Z

This doesn't seem like the right place for that discussion. I've written my thoughts on tagging norms in a few places, but it would be good to collect them (plus others' thoughts) in one spot. I'll talk to Pablo and see about one of us creating this resource.

Meanwhile, if you have thoughts on tagging policy, feel free to mention them in replies to this comment (or to create a question post to collect others' thoughts — though I'm not sure how much discussion is required here, as I think our tagging norms/policy will end up being pretty simple).

Comment by Aaron Gertler (aarongertler) on anonysaurus30k's Shortform · 2021-04-21T10:19:01.750Z · EA · GW

This is a good thing to call attention to! One of the reasons we add older EA content to the Forum is to make sure it exists in one more easily accessible place, in case another site or service goes down.

Comment by Aaron Gertler (aarongertler) on Khorton's Shortform · 2021-04-21T10:17:19.001Z · EA · GW

Do you consider this intuition to be a reason that people should be wary of making this type of argument? Or maybe specifically avoid the word "colonize"?

Maybe something like "populate the galaxy" would be better, as it emphasizes that there are no native populations whose members would be harmed by space colonization?

Comment by Aaron Gertler (aarongertler) on How have you become more (or less) engaged with EA in the last year? · 2021-04-19T02:44:44.194Z · EA · GW

That's a good update to hear! (It's cool that you came back to this comment to note whether your expectations were met.)

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T05:17:16.344Z · EA · GW

We have two categories ("Moral Philosophy", "Long-Term Risks and Flourishing") which capture lots of material relevant to longtermism.

As for the cause area section specifically:

  • AI is its own cluster because we currently have an enormous number of articles about it. If we only had one article about AI risk, I'd put it under "Global Catastrophic Risks" and that would be that.
  • The "Global Catastrophic Risks (other)" cluster feels well-defined to me in a way that a "longtermist" cluster wouldn't. When I look at the "Other" cluster, most of the seemingly "longtermist" causes are still things that many people work on hoping to achieve substantial change within their lifetimes, for the sake of present-day people — anti-aging research, land use reform, climate change...
    • If you ask me about a cause area in that section, I can fairly confidently say whether or not it counts as a GCR. In many cases, I wouldn't be able to say whether or not it counted as "longtermist". (And as you mention, many of the areas could be prioritized for longtermist or non-longtermist reasons.)

I think of longtermism as a common value system in EA. Many causes seem especially valuable to work on given a longtermist value system, but few such causes require a longtermist value system to make sense. (But I spend less time thinking about this kind of thing than you do, so I'm open to counterpoints I might not be considering.)

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:55:32.465Z · EA · GW

I've added a link to your "tag proposal" thread (rather than this article, which isn't meant to be a permanent resource).

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:55:04.760Z · EA · GW

The clusters aren't in alphabetical order — only the articles within clusters.

The clusters are arranged according to a couple of heuristics that I value about equally:

  1. Try to make the columns of roughly equal length
  2. Have the "other" cluster near the bottom-right of the section (seems natural for that to be the last thing people look at)
  3. Have related clusters close together (e.g. "effective giving" and "career choice")

I'd prefer to have all the cluster names aligned horizontally, as on the LW Concepts page, but our extremely varied column lengths discourage that for now (this might change as we continue to add new articles, look at new ways to sort the page, etc.).

If anyone has an idea for making the page better-sorted and/or more evenly arranged, I'm all ears. Graphic design isn't my forte and the current version is quite rough.

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:42:34.685Z · EA · GW

I think of tags as being for "posts that involve X in some way", which encompasses posts written by and about a given organization.

I do think that org update posts are a good way to use an org's tag. If someone is interested in The Humane League, they might want to see what THL was doing in a given month. It's easier to use the tag for this than to make someone filter through all the monthly update posts to see which ones mention THL. (The downside is that many posts tagged with an org won't have much info about it — are you worried about that kind of tag use not being relevant enough?)

The case when I wouldn't use an org's tag is when that org's work is very briefly referenced in a way that doesn't have much to do with the org itself (e.g. someone cites an 80K problem profile as a source for some claim — that doesn't seem like a statement "about" 80K).

As this conversation continues and I arrive at a firmer definition of an org tag policy, I'll try to make it clearly visible in a few places.

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:37:48.508Z · EA · GW

On point #2: Here's what I had suggested on my post about organization tags:

Choose an organization. Add that organization’s tag to every post about their work, every cross-post from their website, etc. (If they have no tag, create one!) You can find these posts by entering the org’s name into the searchbar.

In general, I think it's better to have more posts tagged rather than fewer, and I'd consider "paid work by an employee of X" to be "work paid for by X" and thus, in some sense, "the work of X".

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:33:29.339Z · EA · GW

That's exactly the reason I linked to this list of high-karma, untagged posts as a starting point for people to work on. (And... it's already down at least 80% from where it was — good job, people!)

By changing the "threshold" number at the end of that URL, you can see untagged posts at any level of karma. That's a good way to find posts that may be especially worth tagging. (I've added this notice to the "how can you help" section — thanks for the suggestion.)

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-12T04:31:15.099Z · EA · GW

We've gone with a different naming convention than LessWrong (they say "Concepts" while using the same "tags" URL), but given the amount of code our sites share, it will take some time to disentangle our terminology from theirs. 

I'd like our "Tags Portal" page to eventually have the name "Wiki", and people should think of that page as the "homepage" of the Wiki. 

Or maybe even consider renaming tags to wiki - where posts on the forum can be 'tagged' with a wiki article.

The "Tags Portal" contained a section I'd titled "Articles" but have just retitled "Wiki Articles". The first words of that page remain the same:

This page displays a list of articles in the EA Forum Wiki. 

Some of these articles are also tags that can be added to posts, so that people can find posts on certain topics. You can upvote or downvote a tag for a given post to move it higher or lower in the list of posts on that tag's page.

Note also the second sentence of this post:

Some of these articles can be applied to posts: we call those “tags”.

So we have similar intuitions, but I use the word "tag" to represent an article that can be used as a tag, because saying "I tagged this post with article X" also seems confusing.

We're still trying to figure out the extent to which having "wiki only" articles makes sense (and what fraction of articles don't work as tags). If we end up making all articles usable as tags, the distinction between "article" and "tag" disappears, which will lead to more terminology changes.

Comment by aarongertler on [deleted post] 2021-04-11T09:11:43.514Z

I feel as though someone with broad interests in this area should be able to subscribe to multiple tags, and we should encourage that sort of thing more if it isn't what users are naturally doing. 

Keeping a bigger tag around isn't too harmful, but I think it might lead to lots of people using just that tag rather than looking for more narrow/specific tags. (That's what happened to me when I first made the tag — once it existed, it became an easy catch-all for posts that weren't very similar.)

Comment by Aaron Gertler (aarongertler) on The EA Forum Editing Festival has begun! · 2021-04-08T19:15:57.460Z · EA · GW

Number of tags applied to posts the day before the Editing Festival was announced: ~30

Number of tags applied the day after the announcement: ~300!

Good start, everyone 😊

Comment by aarongertler on [deleted post] 2021-04-08T10:24:24.745Z

After some discussion with Pablo, I've renamed this tag to "Humor". People sometimes write humorous posts on other days, and it seems excessive to have an "April Fools' Day" tag in addition to a "Humor" tag. It's easy enough to see which "Humor" posts were written on April Fools' Day if someone wants to.

This change isn't set in stone — I'm open to arguments that we should have an AFD-specific tag.

Comment by Aaron Gertler (aarongertler) on Meta-EA Needs Models · 2021-04-07T06:04:08.899Z · EA · GW

Anecdotally (and also maybe some survey data), there are people that you would consider "top EAs" where it feels like they could have not gotten into EA if things were different, e.g. they were introduced by a friend they respected less or they read the wrong introduction. It seems still quite possible that we aren't catching all the "top people."

I agree with all of this. In particular, saying "all the people in EA seem like they'd have ended up here eventually" leaves out all the people who also "seem like they'd have ended up here eventually" but... aren't here.

I can think of people like this! I had lots of conversations while I was leading the Yale group. Some of them led to people joining; others didn't; in some cases, people came to a meeting or two and then never showed up again. It's hard to imagine there's no set of words I could have said, or actions I could have taken, that would have converted some people from "leaving after one meeting" to "sticking around", or from "never joining" to "attending a first event out of curiosity".

The Introductory Fellowship is a thing, created and funded by "meta" people, that I think would have "converted" many of those people — if I'd had access to it back in 2014, I think EA Yale could have been twice the size in its first year, because we lost a bunch of people who didn't have anything to "do" or who were stuck toiling away on badly-planned projects because I was a mediocre leader.

*****

I also have at least one friend I think would have been a splendid fit, and who was involved with the community early on, but then had a terrible experience with the person who introduced her to EA (they are no longer a member) and has now soured on everything related to the community (while still holding personal beliefs that are basically EA-shaped, AFAICT). That's the sort of thing that meta/community-building work should clearly prevent if it's going well. 

Had my friend had the bad experience in 2021 rather than nearly a decade earlier, she'd have had access to help from CEA, support from several specialized Facebook groups, and a much larger/better-organized community in her area that would [I hope] have helped her resolve things.

Comment by Aaron Gertler (aarongertler) on RCTs in Development Economics, Their Critics and Their Evolution (Ogden, 2020) [linkpost] · 2021-04-07T05:41:12.353Z · EA · GW

Thank you for finding, reading, and excerpting this paper! I'd likely never have seen it without this Forum post, and your post is now the best resource I know of on a topic that comes up a lot when I talk about EA with people. I expect to refer to it often.

Comment by Aaron Gertler (aarongertler) on EA Debate Championship & Lecture Series · 2021-04-06T19:59:31.075Z · EA · GW

Thanks for the excellent writeup!

A few questions:

  1. How might you use additional funding to boost the program? I think this would be a really strong candidate for the EA Infrastructure Fund, and I'd also be open to providing a small grant ($3,000 or less) if you had a need that couldn't be filled by the Fund or by Open Philanthropy.
  2. If someone wanted to help with your social media, would they need to have debate experience? I know at least one person who might be interested, but I'm not sure they've done debate before.
  3. Does your post-event survey ask about outcomes aside from interest — for example, people taking a giving pledge or joining a local group?
  4. It sounds like the donation match may not have been counterfactual if the money came from Open Phil — did you try to communicate this to the debaters? Or was it in some sense counterfactual? (This is a tiny nitpick in the broader context of the event.)

Comment by Aaron Gertler (aarongertler) on Our plans for hosting an EA wiki on the Forum · 2021-04-06T00:02:25.990Z · EA · GW

We already have a "read more" label that affixes itself to long wiki entries after a certain number of words or characters (I'm not sure which).

This is what I see when I open one long entry:

[screenshot]

Do you see something else? If so, the "read more" feature might be bugged.

I think this feature means we don't need the others you suggested, though I liked suggestion #4 the most out of those.

Comment by Aaron Gertler (aarongertler) on What are your main reservations about identifying as an effective altruist? · 2021-04-05T23:58:34.272Z · EA · GW

You've drawn a good distinction here, and I should revise what I said before. 

In my previous comment, I lazily copied the explanation I use to tell people they shouldn't capitalize "effective altruism" ("it's not a religion"). As you say, it doesn't fit here.

The thing I don't like about applying an "-ist" label to EA is the addition of "effective", which (as many others have said) presumes impact in a way that comes across as a bit arrogant and, more importantly, is really hard to prove.

Are you a pianist? Yes, you play the piano.

Are you a virtue ethicist? Yes, you believe that virtue ethics is correct (or whatever).

Are you an altruist? Yes, you give some of your resources to other people for reasons outside of law, contracts, etc.

Are you a great pianist? ...maybe? What defines "great"?

Are you an effective altruist? ...maybe? What defines "effective"? You might hold a bunch of ethical beliefs that lots of other people who use that label also hold, but it seems unclear exactly which set of beliefs is sufficient for the label to fit. (And even if we could settle on some canonical set, the word "effective" still seems presumptuous in a way I don't want to apply to individual people.)

Comment by Aaron Gertler (aarongertler) on Our plans for hosting an EA wiki on the Forum · 2021-04-05T23:51:47.755Z · EA · GW

This is about editing the article pages (we're using "article" to cover everything that has a page, and "tag" for articles that can be used as tags, rather than being "wiki only").

Writing additional posts is also a good thing, of course! If you start to feel as though you are writing a post on the article page, make your content a post instead (and then you can link to it from the article page).

We don't have a ranking system or a "stub" designation yet — that's a LessWrong feature we'll be importing soon, though, at which point we'll probably be marking many articles as stubs.

Comment by Aaron Gertler (aarongertler) on What are your main reservations about identifying as an effective altruist? · 2021-04-02T00:54:04.065Z · EA · GW

Interesting! I don't remember what position I held years ago, but I assume this is about the name of the group we ran? I legitimately don't remember whether it was "Epic Effective Altruism" or "Epic Effective Altruists".

If it was the latter, I don't think I had a strong view in favor of "altruists" — it just felt like the default name to use and I doubt I thought twice about it. (I'm pretty sure this is long before I ever read "EA is a question, not an ideology" or similar posts.)

The first time I saw someone express reservations about the term "effective altruist", I imagine that would have shifted my position quickly, since I don't think I had a strong prior either way. But if that's also not what you remember... well, fill me in, because much of that time is a blur for me now :-P

Comment by Aaron Gertler (aarongertler) on New Cause Area: Programmatic Mettā · 2021-04-01T11:04:12.681Z · EA · GW

An excellent April Fool's Day post! As well as a good prank on me, specifically, because I'm the one who will have to clean up all of those tags :-P

(I'll leave them up until the 2nd.)

Comment by Aaron Gertler (aarongertler) on What drew me to EA: Reflections on EA as relief, growth, and community · 2021-03-31T21:42:24.233Z · EA · GW

I wanted to be a homicide detective until my early teens when my mother pointed out that if I really wanted to help people, I could help a lot more of them through nonprofit work in India. Suffice it to say, my detective career was shelved, and I will never know if I had the potential to be the inspiration for a true crime podcast.

Solution: Solve every global problem, in order of priority, until investigating mysterious murders becomes a promising EA cause area. 

...I now feel even more motivated to solve every global problem. This one's for you, Counterfactual Detective Agarwalla.

Comment by Aaron Gertler (aarongertler) on What are your main reservations about identifying as an effective altruist? · 2021-03-31T21:36:14.155Z · EA · GW

I'm a big fan of EA philosophy and the people I've met through EA. But I think of EA as a philosophy/movement/personal practice, not a political party or a religion or any other thing that naturally seems to grant an "-ist" label.

While I'm sure I sometimes lapse into the "EAs" shorthand in conversation/casual commenting, I try to avoid it in everything I write for CEA, and I push (gently) to keep it out of things I edit/review for others.

My preference is to say "I try to practice effective altruism" or "I try to follow the principles of effective altruism" (more precise, but clunkier). There are things I associate with EA (trying to increase impact, being cause-neutral, weighting different sorts of beings equally, considering counterfactuals) that I try to do, but don't always succeed at. That's what defines an "EA" to me — goals and behaviors.

Comment by Aaron Gertler (aarongertler) on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-31T21:19:49.524Z · EA · GW

Wow! This is a model for how to write a "new org" post. I look forward to sending it to many other founders in the future.

I especially liked this:

We give ourselves 24 months to scale up operations and generate first signs of impact (until mid-March 2023). Our governing board will decide whether we succeeded or failed to do so. After at most 1.5 years of work, via surveys and case studies, we expect to see:

  • Policymakers use insights and tools from our trainings in their work;
  • Policymakers better understand global catastrophic risks and longtermism;
  • Longtermists better understand international policymaking;
  • Individuals improve their career plans;
  • Our research can pass academic peer-review.

While the metrics seem a bit fuzzy in some cases, I'm impressed by the short timeline and the basic commitment to "review whether this should exist". 

(Though I might be misunderstanding the consequence of the governing board review; if they determine that you haven't achieved your goals, what happens next?)

Comment by Aaron Gertler (aarongertler) on Max_Daniel's Shortform · 2021-03-31T21:15:39.607Z · EA · GW

I think this could be a good non-Shortform post. I can think of some tags I'd like to apply to it, and it's the best short answer I've seen to a question I've heard from multiple people in EA spaces.

Comment by Aaron Gertler (aarongertler) on Animal-free proteins: A bright outlook and a to-do list (BCG report) · 2021-03-31T20:51:24.695Z · EA · GW

Thanks for sharing this!

How did you end up getting the chance to produce this report? Is this the sort of thing you typically work on at BCG, or was some kind of "flex time" involved? 

If it is the sort of thing you typically work on, were you hired for that work, or did you make your way into that area after being hired?

This is the first case I've seen of someone familiar with EA working in an elite consulting firm and focusing on a core EA cause area. I wonder if this is a path other would-be consultants might be able to follow? (Though I'm not sure what range of EA-ish topics are also consulting-ish topics.)

Comment by Aaron Gertler (aarongertler) on What are your main reservations about identifying as an effective altruist? · 2021-03-31T16:54:29.178Z · EA · GW

Since you achieved some internal distance from an EA identity, are there any projects you've worked on, or ideas you've discussed publicly, that fall into the category "I wouldn't have done this before, because it felt like the kind of thing that would have made people angry/raised the 'reputation damage' flag"?

I'm interested in the extent to which the thing that happened was:

a) Feeling empowered to do specific things that run counter to what you think people in the movement would have approved of, vs.

b) Feeling more ambitious and creative in general, even if the results didn't have much to do with controversial-in-EA topics

Comment by Aaron Gertler (aarongertler) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-29T21:20:18.369Z · EA · GW

It generally looks like you’ve chosen good content and done a reasonable job of providing a balanced overview of EA.

Credit goes to James Aung, Will Payne, and others (I don't know the full list) who created the curriculum! I was one of many people asked to provide feedback, but I'm responsible for maybe 2% of the final content, if that.

Ironically, my main quibble with the content (and it's not a huge one) is that it's too EA-centric. For example, if I was trying to convince someone that pandemics are important I'd show them Bill Gates' TED Talk on pandemics rather than an EA podcast as the former approach leverages Gates' and TED's credibility.

I think this is a very reasonable quibble. In the context of "this person already signed up for a fellowship", the additional credibility may be less important, but this is definitely a consideration that could apply to "random people finding the content online".

The Fellowship is for people who opt into participating in an 8 week program with an estimated 2-3 hours of preparation for each weekly session. EA.org is for people who google “effective altruism”. There’s an enormous difference between those two audiences, and the content they see should reflect that difference. 

I wholly agree, and I certainly wouldn't subject our random Googlers to eight weeks' worth of material! To clarify, by "this content" I mean "some of this content, probably a similar amount to the amount of content we now feature on EA.org", rather than "all ~80 articles".

The current introduction to EA, which links people to the newsletter and some other basic resources, will continue to be the first piece of content we show people. Some of the other articles are likely to be replaced by articles or sequences from the Fellowship — but with an emphasis on relatively brief and approachable content.

Comment by Aaron Gertler (aarongertler) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-29T12:19:28.405Z · EA · GW

Aha! I now believe you were referring to this list:

[screenshot]

That's a very good thing to have noticed — we did not, in fact, have the Global Health and Development article in that list, only at the "Read More" link (which goes to the Resources page). I've added it. Thank you for pointing this out.

For a bit of context that doesn't excuse the oversight: Of ~2500 visitors to EA.org in the last week, more than 1000 clicked through to the "Key Ideas" series (which has always included the article) or the "Resources" page (ditto). Fewer than 100 clicked any of the articles in that list, which is why it didn't come to mind — but I'll be happy to see the occasional click for "Crucial Considerations" go to global dev instead.

Part of my plan for EA.org has been some refactoring on the back end. Looks like this should include "make sure the same reading materials appear in each place, rather than having multiple distinct lists".

Comment by Aaron Gertler (aarongertler) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-29T10:52:13.831Z · EA · GW

Edit: The screenshots below no longer reflect the exact look of the site, since I went ahead and did some of the reshuffling of the "Key Ideas" series that I mentioned. But the only change to the content of that series was the removal of "Crucial Considerations and Wise Philanthropy", which I'd been meaning to get to for a while. Thanks for the prompt!

*****

Though I'm a bit confused by this comment (see below), I'm really glad you've been keeping up the conversation! At any given time, there are many things I could be working on, and it's quite plausible that I've invested too little time in EA.org relative to other things with less readership. I'm glad to be poked and prodded into rethinking that approach.

Regarding my confusion:

On the other hand, I feel like substantial improvements could be made with negligible effort. For instance, I think you’d make enormous progress if you simply added the introductory article on Global Health and Development to the reading list on the EA.org homepage, replacing “Crucial Considerations and Wise Philanthropy”. 

Which reading list are you referring to? (Edit: see here)

The "Key Ideas" list of introductory articles (see the bottom of this page) has always included the GHD article (at least since I started working at CEA in late 2018):

So has the Resources page:

I think it would be perfectly reasonable to have more than one article on this topic (as we will once the Fellowship content becomes our main set of intro resources). And I do plan to reshuffle the article list a bit this week to move the Global Health and Animal Welfare articles towards the top (I agree they should be more prominent). But I wanted to make sure we didn't have some other part of the site where this article isn't showing up.

As for future variants on our intro content:

You can see the EA Fellowship curriculum here. That set of articles is almost identical to what will show up on the Forum soon (I have several sequences published in "hidden" mode, and will publicize them once my project partner signs off). 

To briefly summarize, there are eight separate "sequences" in the Fellowship:

  • Two on general EA principles + cost-effectiveness calculation (mostly explained through examples from global health)
  • One on moral circle expansion (mostly animal welfare)
  • One on longtermism, generally
  • One on existential risk, generally
  • One on biorisk + AI risk
  • One on epistemics and forecasting
  • One on "putting it into practice" (careers + donations + research ideas)

Once we've adapted EA.org to refer to this content as our default introduction, I anticipate we'll remove most of our current intro articles from prominent places on the site (though I'm not certain of which will remain).

I've already shared this list of articles with a lot of people in the categories "focuses on non-longtermist causes" and/or "has written good critiques of EA things", to get feedback on what they think of the topic balance/exact articles chosen. I'd also welcome feedback from anyone seeing this — and of course, once we actually publish the Forum version, I'll be hoping to get lots of suggestions from the hundreds of people who will see it soon afterward.

Comment by Aaron Gertler (aarongertler) on Proposed Longtermist Flag · 2021-03-25T11:01:25.296Z · EA · GW

Upvoted. 

While I didn't like the initial design for various reasons others have stated, I think the ensuing discussion has been really fun, and this is the kind of content I'd like to see more of on several levels (covers art/design, aims to bring community together, is light and playful). Given its overall impact, I'm very glad the post was published.

Comment by Aaron Gertler (aarongertler) on Preserving natural ecosystems? · 2021-03-24T11:33:59.816Z · EA · GW

This topic seems like it would fall under the Long-Term Future Fund; someone could apply to them for a grant to fund research in this area.

Comment by Aaron Gertler (aarongertler) on KantianEA's Shortform · 2021-03-24T11:30:51.019Z · EA · GW

When you ask people to complete a survey, I think you'll have better results by providing a bit more detail. For example:

  • Who is running the survey?
  • Where are the results going to be published? (Given the claim that the survey will "help the community improve", I assume you mean to create some kind of report on the survey with recommendations based on the results.)
  • Are only college students supposed to take the survey?

Comment by Aaron Gertler (aarongertler) on Sign up for the Forum's email digest · 2021-03-24T10:57:22.239Z · EA · GW

Thanks for asking this! 

Because the mailing list is run through Mailchimp, certain kinds of automated integration are difficult (for example, it would be a lot of work to let people remove themselves from the mailing list by turning off a "receive the Forum Digest" setting).

I currently promote the digest in a lot of different places, on and off the Forum, but our next step is to edit the signup process so that people become aware of it the moment they create accounts. That's waiting on some broader architectural changes, but should be live in a few weeks to a few months.

We might also add the digest to the sidebar at some point, though that space is valuable and it might be wrong to put something there that won't be relevant to any user more than once.

Comment by Aaron Gertler (aarongertler) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-23T09:40:52.342Z · EA · GW

(My personal opinion, not trying to speak for Sam or JP.)

People have scraped public Forum data before, and the results have been interesting and informative to readers. As long as the scraping only pulls information someone could find by other means, I find it hard to imagine a scenario where it would be problematic.

Here's info on an API you can use to extract data.
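
For anyone who wants a quick starting point, here's a minimal sketch of what a query might look like. The endpoint path and the exact query shape are assumptions on my part (the Forum shares its codebase with LessWrong, whose GraphQL API works roughly this way), so treat the linked info above as authoritative:

```python
import requests

# Hypothetical example: pull titles and scores of a few public posts.
# The /graphql endpoint and the shape of the `posts` query are assumptions;
# check the API info linked above before relying on them.
ENDPOINT = "https://forum.effectivealtruism.org/graphql"

QUERY = """
{
  posts(input: {terms: {limit: 5}}) {
    results {
      title
      pageUrl
      baseScore
    }
  }
}
"""

resp = requests.post(ENDPOINT, json={"query": QUERY})
resp.raise_for_status()
for post in resp.json()["data"]["posts"]["results"]:
    print(post["baseScore"], post["title"], post["pageUrl"])
```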

Comment by Aaron Gertler (aarongertler) on Progress Open Thread: March 2021 · 2021-03-23T09:20:18.101Z · EA · GW

Strongly upvoted. This comment provides solid evidence in support of its argument, and led me to substantially raise my estimate of the physical risks involved in adoption. (Edit: This holds even in light of AGB's reply; as he points out, the numbers on risk are still quite high even if you make an adjustment of the type he recommends.)

I also appreciate Habryka's willingness to speak out in favor of a comment that was heavily downvoted and moderated, just as I appreciated Dale's good intentions in his initial comment.

I found Habryka's supporting evidence to be more relevant than Dale's, and his argument clearer — such that it meets the standards of rigor I was hoping for given the topic at hand.

*****

My biggest update here is on the rate of sibling abuse, which I hadn't realized was nearly as high as it seems to be. From page 25 of this report:

[screenshot]

The report's definition of "severe assault" (ignore the bit about parents, it seems like sibling violence was measured in the same way):

[screenshot]

I'll note that this still leaves room for interpretation. Both of my siblings hit me with objects when I was growing up, leaving marks or bruises in some cases, but I'm still glad they were born, and we get along well as adults. (We also got along well as children, most of the time.)

I'd be really interested to see data like this broken down by the type/severity of violence. I'd guess that a substantial portion of that "37 percent" number comes from things most people would perceive as normal (e.g. a 7-year-old punches her 8-year-old brother in the arm, a 10-year-old hits his twin brother with a Wiffle bat). But I could be wrong, and even much smaller numbers for "truly severe" incidents could mean a lot more risk than I'd expected.

*****

Despite this update, "never adopt" still feels extreme to me, given the chance of a very positive outcome (a child has a loving family who helps to support them, rather than no family) and the high satisfaction rates of adoptive parents (keeping in mind that those numbers are likely to be skewed by a bias towards reporting satisfaction). 

Some factors which seem like they could, in tandem, sharply reduce the risk Habryka identifies:

  • Adopting a child younger than your current children
  • Adopting a girl rather than a boy (the above report found that boys were more likely than girls to commit violent acts, but I don't know how big the difference is)
  • Doing everything you can to learn whether a child has a history of violent behavior (if a 15-year-old has a totally clean record, that seems like useful evidence)

But I'm not confident about the impact of any of these.

Comment by Aaron Gertler (aarongertler) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-23T06:29:53.139Z · EA · GW

Thanks for this question! As the person who oversees content for that site, here are my thoughts:

EA.org has been a fairly low priority compared to various other CEA content projects, but that might change soon. 

(For context, my role has been tied up in active community engagement work more than web content for a while, so I haven't been able to give the site as much attention as I'd like.)

I've made some small changes over the last year (removing and editing material in the introduction, updating the Resources page), but I predict that I’ll make bigger changes over the next six months, once we've published EA Fellowship material on the Forum and fully launched the wiki.

I'm not making any concrete commitments here — our priorities over that time period aren’t yet set in stone — but here are things I'd like to do, in case anyone has responses to these initial plans:

  1. Change up the introductory material a lot. The current "Introduction to EA" essay seems good to me, but it's been a long time since we overhauled the rest of the intro content. Now that we have a centralized EA Fellowship, I'd want to encourage people interested in EA to read that material, and we may cross-post some of it to EA.org so that people don't have to leave the site to read further.
    1. These changes will also lead to the Resources page changing quite a bit — right now, that page is centered on the EA.org intro material, but there are other org resource pages that have been updated more recently (like GWWC's), and I expect to draw heavily from those.
  2. Change the "Get Involved" section — there are a lot of updates we could make here, but rather than having to make continuous edits to keep this up to date as part of a single website, we'd like there to be a dedicated resource where people can share concrete opportunities to get involved on a day-to-day basis. Some groups have created their own resources of this type (sometimes with a focus on local opportunities), so we're likely to draw from that in creating ours.
    1. This project won't necessarily be owned by CEA; there's a chance that one group's resource list will develop into something we want to link everyone to, or it could be a better fit for the EA Hub.
    2. My best guess is that this won't be hosted on the Forum: the "Get Involved" tag is nice, but doesn't allow much visibility into what an opportunity looks like, isn't very sortable, etc. Something like a big Airtable or other database seems better to me.

This list covers nearly all the content people read on the website — there are some old transcripts and blog posts archived there, which we'll gradually be cross-posting to the Forum as part of a general "cross-post everything to the Forum" project that will stretch across the year. 

Overall, we're looking to de-emphasize EA.org and use the Forum as a portal to a wider range of EA content/opportunities, though that site's great URL and SEO mean we'll still want it to be a good landing page.

Comment by Aaron Gertler (aarongertler) on Progress Open Thread: March 2021 · 2021-03-22T19:26:24.104Z · EA · GW

Upvoted. I felt this reply was engaging in good faith, and it's given me a chance to add clarity about the Forum's moderation policy.

I tried looking through the rules to find anything related to this: "The current comment makes unkind assumptions about a group of people without accurate data to back them — so despite the good intentions, it falls afoul of our rules."

Thanks for asking the question — I should have linked to the exact section I was referencing, which was:

[quoted section of the Forum rules]

This criterion is based on Scott Alexander's moderation policy, which I'll quote here:

If you want to say something that might not be true – anything controversial, speculative, or highly opinionated – then you had better make sure it is both kind and necessary. Kind, in that you don’t rush to insult people who disagree with you. Necessary in that it’s on topic, and not only contributes something to the discussion but contributes more to the discussion than it’s likely to take away through starting a fight.

(Where he uses "necessary", we use "relevant".)

The version of "kind" I'm thinking of doesn't just encompass "not insulting people", but covers other everyday aspects of the word. The relevant one here is: "Don't insinuate that a given member of a certain group is likely to be dangerous, to the point that you don't recommend people interact closely with them." See the following:

Finally, it is a well established fact that one of the biggest threats to children comes from their mothers getting new boyfriends who are not genetically related to the child; this results in something like a 10x increase in child abuse risk vs traditional families. I have never seen similar statistics around older adopted children but I would consider whether they might present a similar risk to your son given the 12 year age gap.

If a comment speculates that a sizable group of people pose a serious hazard to the people they live with, I hold it to a higher standard, as I believe Scott would.

This comment is speculative, and I'd argue using the above standard that it isn't kind. It's certainly relevant, in that you are referring to a situation someone plans to seek out, but 1 out of 3 is still an occasion for a warning.

...one that is couched in measured language like 'consider' and 'might'.

It's better to use this kind of language than to not use it. But as a moderator, I'm wary of even measured language (in this context — no data, negative stereotyping) when someone uses a "measured" point in the service of a less measured conclusion: "I encourage you to strongly consider not adopting an older child" (emphasis mine). 

The word "strongly", to me, reads as "I believe the thing I'm about to argue for is true; even if I'm not trying to force you to agree with me, I really think you should." If that wasn't your intention, I'm sorry to have misconstrued what you meant to imply.

In a situation where there is a logical reason why a course of action is bad, but no hard empirical data, what alternative is there but to share our best attempt at reasoning?

The alternatives I suggested involved finding data, which is a really good thing to do when you are speculating about the violent proclivities of a group of people. 

Given the choice of "make an inference without empirical data that a group of people has violent proclivities" or "don't make that inference", I'd prefer people refrain from making such inferences. It's very easy for online discussions to get tangled up in arguments about stereotyping; asking people to bring hard data if they want to get into those topics seems reasonable to me. 

(For example, under the standard you seem to be suggesting, you could have gone on in your comment to speculate about the safest gender, race, national origin, and IQ of child to adopt. Even if these were all just "inferences", I can't imagine them improving the quality of discussion, because I've never seen the internet work that way.)

Polish: We'd rather see an idea presented imperfectly than not see it at all.

Rather than content, this line is meant to refer to a post's formatting: an "unpolished" post, to my mind, is one that is messy and disorganized, rather than speculative. 

That said, I can see why people might think it refers to speculation. I'll consider edits I could make to clear this up, and to explain more about how we see the trade-off between "polish" (in the sense of accuracy/empiricism), kindness, and relevance.

Worse, I think your objection is an isolated demand for rigor. It is very common for people to express arguments in the absence of hard statistical data; such data-heavy comments are a minority of those on the forum. Even among top-level articles such sources are often omitted - for example this highly upvoted post from the frontpage contains almost no statistics at all, yet I don't think this is a major problem.

I'd gladly acknowledge my objection as an isolated demand for rigor. A comment policy of "unkind comments are held to a higher standard for accuracy" seems to require making isolated demands for rigor. (Of course, we could be more clear in our rules that we do hold such comments to a higher standard — again, I upvoted your reply because it's given me some ideas for ways to improve that page.)

My guess is the true crux of your objection is that my comment expresses a negative view of a group who it is not socially acceptable to criticize (you will note that Denise's comment also implies negative things, about a different group, but she has received no pushback because her target is considered socially acceptable to criticize).

If you ever see anyone on the Forum express negative views of a group without empirical data to back them up, please feel free to report their comment! I'd like to apply our standards fairly, even to criticism of groups that are "socially acceptable" targets in other places.

I don't know which part of Denise's comment you are referring to. My best guess is:

The foster system is often a pretty awful experience to children.

This doesn't imply negative things about any particular people or group of people. It specifically refers to a "system". Even if people who work in the foster system are generally kind and competent, the experience of growing up without a family in a family-centric culture, surrounded by a constantly shifting group of people, and feeling generally unwanted/out of place in society... well, it's often going to be pretty awful.

(Similarly, saying "sewers are an unpleasant work environment" doesn't read to me as a criticism of sewer designers; there are other, more obvious reasons that you'd prefer not to work in a sewer.) 

My comment was written to be specifically action guiding, and the negative facts about adopted children, especially older adopted children, are a crucial component of any fair evaluation of the risks and benefits of this decision. 

The part of your comment that led me to issue a warning was your speculation about the violent tendencies of older adopted children. I've made an edit to my warning to clarify that the rest of my comment was an objection/counterpoint from one user to another.

We should not be straw rationalists who are unable to act in the absence of RCTs; we can and should use evidence from other domains to make logical inferences when we have to make decisions under uncertainty. 

I agree! But there are consequences to making certain logical inferences that I think are important to consider in the context of a public discussion forum (and in the context of any communication between imperfectly rational humans). We should not be straw rationalists who refuse to consider the social implications of communication.

A comment that refers to a negative stereotype about a group of people has fairly high downside risk. Not in isolation, of course (few people read most Forum comments), but in the sense that every instance of a thing permitted makes it more difficult to stop further instances of that thing. And when the thing is "speculation about the violent tendencies of different groups of people", there's a lot of risk to it becoming more common. (I would expect people who don't like that kind of speculation to become less active, and people who enjoy it to become more active, with generally negative consequences.)

Of course, downside risk isn't the only thing a comment can have. There are also potential benefits, which are usually going to be more important!

In this case, the potential benefit could actually be high: suppose the OP had been planning to adopt because they anticipated helping the child become dramatically more wealthy or successful than they'd otherwise have been. If this (very likely) wouldn't have happened, and the OP would have been bitterly disappointed as a result (which in turn seems bad for their adopted child), then persuading them not to adopt has helped them.

However, the part of the comment that I moderated doesn't seem as beneficial on net. I find the inference shaky (how violent are siblings, relative to parents? How common is violence toward siblings vs. stepsiblings?), and the magnitude of the risk highly unclear. And where net benefits are concerned, I don't want to ignore that the choice of "adopt vs. not adopt" has a substantial impact on the welfare of the child who is or isn't adopted. (Or at least I infer that it does; moderators also have to draw inferences sometimes.)

If we punish such comments while allowing relatively statistic-light 'positive' comments, we create a persistent bias toward social desirability. It is well known that the EA movement has a bias against conservatives; we should not let this bias morph into a moderation policy.

I agree that our current moderation policy seems likely to create some amount of social desirability bias, even if we aim to apply it fairly to "desirable" and "undesirable" topics.

However, there are other factors that influence a forum's success aside from "are people sharing opinions with as little bias as possible?" (Though that's an important factor, and I try not to get in the way of it unless I have a really good reason, as I think I do here.)

I'd guess that most people in EA are more likely to participate in an atmosphere that leans positive and pleasant, and where certain types of comments (e.g. negative stereotypes about groups, harsh criticism) are held to unusually high standards of rigor. And I'd guess that a forum with more participation will be more impactful, because more people will share ideas, give feedback, and so on.

(I could write many, many more paragraphs about the considerations we've had to make around balancing user experience for people with different preferences while aiming at the most impactful overall product, but I've now spent more than an hour on this comment and have to finish up.)

These are the kinds of tradeoffs I have to consider as a moderator. I also have to consider the best way to apply policies established by the people who built the Forum and fund its continued development, even if my personal preferences are a bit different in some cases. I hope that even if we don't see eye to eye on this particular post, you can see that I am at least aware of the concerns you raise.

*****

On the "bias against conservatives" point: I don't understand how this relates to our conversation. If anything, I think of adoption as a "conservative" type of action, because I associate it with large families and religious beliefs. 

There are other posts on the Forum that could be cited more easily for this concern — maybe you meant to refer to some of those?

*****

Now speaking as a commenter, rather than a moderator:

I do think your comment lacked something else we encourage:

You set out to present the best case you could for the argument "don't adopt". You didn't speculate on how being adopted might improve a child's well-being, or how their family might benefit; instead, you focused entirely on negative consequences. Given that really good outcomes are also possible, this one-sided focus stands out.

(To give another example: I can imagine someone posting about their plans to remarry, and someone else saying "you should strongly consider not remarrying, because stepparents are often violent towards their stepchildren". This may be true, but there are many potential benefits to remarrying, and it seems like people should remarry at least sometimes. Is someone who reads such a comment likely to come away with a better understanding of whether they should remarry?)

This isn't to say that every comment has to present all the pros and cons of a decision. I was just personally put off by that aspect.

Yet better outcomes for adopted teenagers than for non-adopted teenagers are actually a logical consequence of the considerations I mentioned, because never-adopted children will have even worse adverse selection problems than late-adopted children!

Fair enough! This is a good and helpful critique.

I should have added more detail to that comment (about, e.g., trying hard to compare adopted vs. eligible-but-not-adopted children with similar characteristics, or children who were successfully adopted vs. children who would have been adopted but whose paperwork fell through, and so on). I meant to imply "find the most reliable version of this study", but that definitely didn't come through.

Back to being a moderator:

I really don't want the Forum to be a place where people simply can't share certain true (legal, safe) information. However, there are a lot of ways to share information, and I do want to encourage people to share it in certain ways I think are better for discussion.

Aside from the alternatives I proposed in the last comment (finding more data), here are some suggested formats that could have brought your comment more in line with how I'd like Forum discussion to go. These are meant to be "examples of comments with formats that seem good", rather than "examples of exactly what you should have said" — you may well disagree with some of what I say here.

  • "From your quote X, I think you may be hoping to have outcome Y on an adopted child. I think that's unlikely, because..."
  • "You ask whether it's "worth it" to adopt an older child. While I understand the desire to help people whose welfare is especially neglected, I think there are some alternatives A and B you should consider, based on tradeoffs X and Y..."
  • "I think that bringing a happy new life into the world is actually a very good thing! Here are some thoughts on why it could lead to a better world, overall, than deciding to adopt..."
  • (I've replaced my fourth example with Habryka's comment, which does a great job of providing extra supporting data for claims about the risk of violence.)

In particular, I think the last of these takes risks seriously without assuming that adoption is never the right option.

You don't have to write comments that look like any of these, of course — I'm just trying to show that there are ways to convey "negative" or "socially undesirable" opinions without presenting negative stereotypes of entire groups.

Comment by Aaron Gertler (aarongertler) on Progress Open Thread: March 2021 · 2021-03-22T07:01:42.759Z · EA · GW

Edit: I responded to this comment after someone reported it, and hadn't seen Denise's comment at the time. She sums up one of the points I try to make here better than I did.

Second edit: I think Habryka's comment is a really good example of how to make a similar point to what Dale aimed for, with enough additional supporting evidence to abide by the Forum's rules.

Having read Habryka's comment and internalized how likely the risks of adoption are to materialize, I appreciate Dale's comment more in retrospect. While I still don't like the way the argument for high abuse risk was presented, I could have done more to acknowledge the spirit of the comment: "I'm taking a risk by writing a controversial comment because I see someone doing something I think is dangerous." Execution aside, this type of content is really valuable.

*****

Moderator here!

While this comment seems to be well-intentioned, it also takes the form of:

  • "I would like to help someone."
  • "No, you shouldn't do that. They might be a dangerous person who will hurt someone you love."

This kind of argument demands better data than is present in this comment, which takes an (unlinked) statistic and extrapolates it to a fairly different scenario. 

Please don't use this kind of reasoning to make negative claims about entire groups of people (e.g. teenagers who are available for adoption).

*****

Edit: Taking the mod hat off in this section, which is just a comment from one user to another:

The reasoning "you may not be able to change their outcomes, ergo this isn't worth doing" also seems a bit beside the point. Changing a person's life doesn't have to entail e.g. changing their educational attainment or future income. It can also just entail giving them a stable home and a family of people who care about them. 

(I don't have statistics on hand, but I think it's very likely that adopted teenagers report higher life satisfaction than teenagers who don't get adopted. I'll gladly donate $50 to the charity of someone's choice if they find solid data showing otherwise, since my first few minutes of research didn't get me anywhere.)

*****

A version of this comment that would have been better: "Studies X and Y show that parents tend to regret adopting older children and/or that older children tend to report lower life satisfaction after being adopted. Consider that doing this might make you unhappy and not help the person you want to help." Or even: "Studies X and Y show that the risk of violence in the home rises from A% to B% in families that adopt an older child." 

The current comment makes unkind assumptions about a group of people without accurate data to back them up, so despite the good intentions, it falls afoul of our rules.

Comment by Aaron Gertler (aarongertler) on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-22T05:32:07.253Z · EA · GW

Thank you for your contributions! It was very kind of you to take time out of your schedule to chat with us, especially during a book-release period :-)

Comment by Aaron Gertler (aarongertler) on EA capital allocation is an inner ring · 2021-03-20T18:22:23.677Z · EA · GW

Yes, that would have been sufficient. The "withdraw this post" part seems a bit harsh (and redundant, since editing a post entails "withdrawing" the old version), but not to the point where I'd say anything about it as a mod.

I appreciate your engaging with my comment. It's hard to do mod stuff without coming across as overbearing, but I really value your contributions to the Forum; it's just a struggle to strike a balance between our more direct commenters and the people who find the Forum's culture intimidating.