Posts

Cause Exploration Prizes: Announcing our prizes 2022-09-09T13:47:31.064Z
Where can I learn about how DALYs are calculated? 2022-06-10T22:03:38.379Z
Announcing the launch of Open Phil's new website 2022-06-07T20:32:08.230Z
Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing 2022-05-25T12:06:51.133Z
New? Start here! (Useful links) 2022-03-25T09:07:29.970Z
Chris Blattman on the chaotic nature of running surveys 2022-02-20T10:27:46.908Z
Bryan Caplan on EA groups 2022-01-10T20:31:58.577Z
Creative Writing Contest: The Winning Entries 2021-12-26T10:43:36.189Z
Reasons and Persons: Watch theories eat themselves 2021-12-25T03:53:43.120Z
(Answered) Tax question: Help us donate millions, get a $500 bounty! 2021-12-24T06:19:19.703Z
Where are you donating in 2021, and why? 2021-12-16T09:18:36.731Z
Effective Altruism: The First Decade (Forum Review) 2021-12-01T22:18:05.365Z
EA Organization Updates: November 2021 2021-11-29T17:53:29.582Z
Announcing my retirement 2021-11-25T13:55:57.554Z
EA Organization Updates: October 2021 2021-11-04T22:24:50.637Z
Open Thread: Spring 2022 2021-11-01T10:05:31.373Z
Petrov Day Retrospective: 2021 2021-10-21T10:12:32.574Z
EA Forum Prize: Winners for May-July 2021 2021-10-20T11:17:11.064Z
Forum Update: New Features (October 2021) 2021-10-18T14:06:17.281Z
Creative Writing Contest: Now with more prizes! 2021-10-14T07:12:40.278Z
[PR FAQ] Tagging users in posts and comments 2021-10-02T02:50:02.511Z
Open Thread: October 2021 2021-10-01T11:10:15.818Z
EA Organization Updates: September 2021 2021-10-01T03:55:15.112Z
Honoring Petrov Day on the EA Forum: 2021 2021-09-25T23:27:43.088Z
What is your favorite EA meme? 2021-09-21T09:28:48.361Z
Clickhole's take on cause prioritization 2021-09-16T07:33:32.722Z
EA Forum Creative Writing Contest: Submission thread for work first published elsewhere 2021-09-15T08:25:40.949Z
EA Forum Creative Writing Contest: $22,000 in prizes for good stories 2021-09-12T21:15:12.890Z
Open Thread: September 2021 2021-09-01T07:00:00.000Z
EA Organization Updates: August 2021 2021-08-27T11:11:38.777Z
What are the EA movement's most notable accomplishments? 2021-08-23T03:51:46.604Z
Perverse Excited Failure & Justified General Frustration 2021-08-13T23:42:38.654Z
[PR FAQ] Sharing readership data with Forum authors 2021-08-09T10:47:38.112Z
Engineering the Apocalypse: Rob Reid and Sam Harris on engineered pandemics (transcript) 2021-08-06T03:31:46.653Z
Open Thread: August 2021 2021-08-02T10:04:16.302Z
EA Organization Updates: July 2021 2021-07-31T12:20:09.509Z
EA Forum Prize: Winners for April 2021 2021-07-29T01:12:38.937Z
Writing about my job: Content Specialist, CEA 2021-07-19T01:56:14.645Z
You should write about your job 2021-07-19T01:26:59.345Z
Lant Pritchett on the futility of "smart buys" in developing-world education 2021-07-18T23:00:26.556Z
Effective Altruism Polls: A resource that exists 2021-07-10T06:15:12.561Z
The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) 2021-07-03T21:47:28.540Z
Open Thread: July 2021 2021-07-01T09:16:20.679Z
New Roles in Global Health and Wellbeing (Open Philanthropy) 2021-06-29T19:48:59.625Z
EA Organization Updates: June 2021 2021-06-26T00:37:38.598Z
What are some examples of successful social change? 2021-06-22T22:51:19.955Z
Forum update: New features (June 2021) 2021-06-17T05:01:31.723Z
What are some high-impact paths for a young person in the developing world? 2021-06-14T05:45:15.673Z
What is an example of recent, tangible progress in AI safety research? 2021-06-14T05:29:22.031Z
Open Thread: June 2021 2021-06-03T00:43:21.010Z

Comments

Comment by Aaron Gertler (aarongertler) on New? Start here! (Useful links) · 2022-09-14T06:03:43.669Z · EA · GW

Thanks for the note! Fixed.

Comment by Aaron Gertler (aarongertler) on EA 1.0 and EA 2.0; highlighting critical changes in EA's evolution · 2022-08-10T10:35:29.935Z · EA · GW

I first became closely involved/interested in EA in 2013, and I think the "change to longtermism" is overstated. 

Longtermism isn't new. As a newbie, I learned about global catastrophic risk as a core, obvious part of the movement (through posts like this one) and read this book on GCRs — which was frequently recommended at the time, and published more than a decade before The Precipice. 

And near-term work hasn't gone away. Nearly every organization in the global health space that was popular with EA early on still exists, and I'd guess they almost all have more funding than they did in 2013. (And of course, lots of new orgs have popped up.) I know less about animal welfare, but I'd guess that there is much more funding in that space than there was in 2013. EA's rising tide has lifted a lot of boats.

Put another way: If you want to do impact-focused work on global health or animal welfare, I think it's easier to do so now than it was in 2013. The idea that EA has turned its back on these areas just doesn't track for me.

Comment by Aaron Gertler (aarongertler) on It's OK not to go into AI (for students) · 2022-07-28T21:29:34.432Z · EA · GW

My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.

I agree with most of this. (I think that other people in EA usually think they're doing roughly the best thing for their skills/beliefs, but I don't think they're usually correct.)

I don't know about "top community builder", unless we tautologically define that as "person who's really good at giving career/trajectory advice". I think you could be great at building or running a group and also bad at giving advice. (There are several ways to be bad at giving advice — you might be ignorant of good options, bad at surfacing key features of a person's situation, bad at securing someone's trust, etc.)

Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.

I'm thinking about conversations in the vein of an EAG speed meeting, where you're meeting a new person and learning about what they do for a few minutes. If someone comes to EAG and all their speed meetings turn into career advice with an overtone of "you're probably doing something wrong", that seems exhausting/dispiriting and unlikely to help (if they aren't looking for help). I've heard from a lot of people who had this experience at an event, and it often made them less interested in further engagement.

If I were going to have an hour-long, in-depth conversation with someone about their work, even if they weren't specifically asking for advice, I wouldn't be surprised if we eventually got into probing questions about how they made their choices (and I hope they'd challenge me about my choices, too!). But I wouldn't try to ask probing questions unprompted in a brief conversation unless someone said something that sounded very off-base to me.

Comment by Aaron Gertler (aarongertler) on It's OK not to go into AI (for students) · 2022-07-25T20:38:20.013Z · EA · GW

Upvoted for explaining your stance clearly, though I'm unclear on what you see as the further implications of:

Because there are good reasons to work on AI safety, you need to have a better reason not to.

This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way — and I don't think we want to make any particular thing a default "justify if not X".

(FWIW, I'm not sure you actually want AI to be this kind of default — you never say so — but that's the feeling I got from this comment.)

Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.

When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.

There may be exceptional cases where someone is working on something really unusual, but in those cases, I aim for a vibe of "curious and interested" rather than "expecting justification". At a recent San Diego meetup, I met a dentist and was interested to learn how he chose dentistry; as it turns out, his reasoning was excellent (and I learned a lot about the dental business).

Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.

This point carries over to global health, right? If someone finds EA strategy in that area unconvincing, do they need to justify why they aren't writing up their arguments?

In theory, maybe it applies more to global health, since the community spends much more money on global health than AI? (Possibly more effort, too, though I could see that going either way.)

Comment by Aaron Gertler (aarongertler) on It's OK not to go into AI (for students) · 2022-07-25T20:02:41.927Z · EA · GW

I've been running EA events in San Francisco every other month, and often I will meet a recent graduate, and as part of their introduction they will explain to me why they are or aren't working on AI stuff.

The other day, I had my first conversation ever where someone explained why they weren't sure about going into AI, unprompted. I said something like "no need to justify yourself, EA is a big tent", which felt like the obvious thing to say (given all my experiences in the movement, meeting people who work on a dozen different problems). If some groups have atmospheres where AI self-justification feels important,  that seems bad.

(Though I think "explaining why you work on X" is very different from "explaining why you don't work on X"; the former seems fine/natural, the latter not so much.)

*****

Related: an old post of mine on why being world-class at some arbitrary thing could be more impactful than being just okay at a high-priority career.

That post is way too long, but in short, benefits to having a diverse set of world-class people in EA include:

  • Wide-ranging connections to many different groups of people (including skilled people who can contribute to valuable work and successful people who have strong networks/influence)
  • EA being a more interesting movement for the people in it + people who might join

Comment by Aaron Gertler (aarongertler) on Leaning into EA Disillusionment · 2022-07-25T18:01:38.462Z · EA · GW

When I see new people setting themselves up so they only spend time with other EAs, I feel worried.

When you see this happen, is it usually because EA fills up someone's limited social schedule (such that they regretfully have to miss other events), or because they actively drop other social things in favor of EA? I'm surprised to see the phrase "setting themselves up", because it implies the latter.

I also wonder how common this is. Even when I worked at CEA, it seemed like nearly all of my coworkers had active social lives/friend groups that weren't especially intertwined with EA. And none of us were in college (where I'd expect people to have much more active social lives).

Comment by Aaron Gertler (aarongertler) on Criticism of EA Criticism Contest · 2022-07-18T08:44:48.468Z · EA · GW

As an aside, I'm now curious about how well Eliezer's recent posts would have done in the contest — are those examples of content you'd expect to go unrewarded?

Comment by Aaron Gertler (aarongertler) on Criticism of EA Criticism Contest · 2022-07-18T08:40:23.730Z · EA · GW

Upvoted for:

  1. The interesting framework
  2. The choice of target (I think the contest is basically good and I don't share most of your critiques, but it's good that someone is red-teaming the red-teamers)
  3. The reminder of how important copyediting is (I think that some of the things that bothered you, like the unnecessary "just", would have been removed without complaint by some editors).

I hope this does well in the contest!

Most of the items on your "framework" list have been critiqued and debated on the Forum before, and I expect that almost any of them could inspire top contenders in the contest (the ones that seem toughest are "effectiveness" and "scope sensitivity", but that's only because I can't immediately picture what those pieces would look like — which isn't the same thing as being impossible).

A few titles of imaginary pieces that clearly seem like the kind of thing the contest is looking for:

  • We Owe The Future Nothing (addressing "Obligation")
  • EA Shouldn't Be Trying To Grow ("Evangelicalism")
  • EA Should Get Much Weirder ("Reputation")
  • EA Is Way Too Centralized ("Coordination")
  • We Need To Improve Existence Before We Worry About Existential Risk ("Existential Risk")
  • Most Grants Should Be Performance-Based, Not Application-Based ("Bureaucracy")
  • We Should Take Our Self-Professed Ideals More Seriously ("Grace")
  • Flying To EA Global Does More Harm Than Eating Six Metric Tons Of Cheese ("Veganism")

 

Question, if you have the time: What are titles for imaginary pieces that you think the criticism contest implicitly excludes, or would be very unlikely to reward based on the stated criteria?

Comment by Aaron Gertler (aarongertler) on Community Builders Spend Too Much Time Community Building · 2022-06-29T03:21:44.363Z · EA · GW

I was surprised to see that the word "class" appears nowhere in this post.

Once you've paid your tuition, college classes are free. And they teach a lot of useful skills if you pick the right ones. It's great to read articles and work on small projects and find other extracurricular ways to skill up. But I'd hope that anyone organizing an EA group is also choosing good classes to take.

Examples of classes I took in college that felt like "skilling up" (which, collectively, took much more time than founding Yale EA, even on a per-semester basis):

  • Several writing classes
  • A negotiation class (funnily enough, Ari Kagan was one of my classmates)
  • An entrepreneurship class focused on building and scoping a specific business idea
  • A class where I learned the R programming language
  • A class on marketing via behavioral economics

I also did a ton of extracurricular campus journalism, which has been exceedingly useful in my career despite being quite disconnected from EA-focused upskilling.

None of this was as time-efficient as targeted reading on EA topics would have been. But targeted reading doesn't come with certain benefits that classes offer (external project deadlines, free project review from experts, office hours with said experts). And because you have to take classes at college anyway, getting at least some value from them is a huge counterfactual win.

*****

It actually seems okay to me if most of organizers' "EA time" is spent on marketing-like activities, as long as they are learning and practicing useful skills in their classes and non-EA activities (and as long as group members can tell that their organizers have cool stuff going on outside of EA marketing).

Comment by Aaron Gertler (aarongertler) on Critiques of EA that I want to read · 2022-06-20T01:49:45.040Z · EA · GW

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I know that "everyone" was an intentional exaggeration, but I'd be interested to see the actual baseline statistics on a question like "do you find EA content interesting, independent of its importance?"

Personally, I find "the work EA does" to be, on average... mildly interesting?

In college, even after I found EA, I was much more intellectually drawn to random topics in psychology and philosophy, as well as startup culture. When I read nonfiction books for fun, they are usually about psychology, business, gaming, or anthropology. Same goes for the Twitter feeds and blogs I follow. 

From what I've seen, a lot of people in EA have outside interests they enjoy somewhat more than the things they work on (even if the latter takes up much more of their time).

*****

Also, as often happens, I think that "EA culture" here may be describing "the culture of people who spend lots of time on EA Twitter or the Forum", rather than "the culture of people who spend a lot of their time on EA work".  Members of the former group seem more likely to find their work interesting and/or fun; the people who feel more like I do probably spend their free time on other interests.

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-17T10:37:29.822Z · EA · GW

Despite the real visual + other issues, I still think the website is very reasonable! 

The changes to make, including some to the grant page, are tiny relative to the overall size of the project. It seems very easy to find our grants and other content, and overall reception from key stakeholders has been highly positive. OP staff seem to like the changes, too (and we had tons of staff feedback at all points of the process).

If you have other specific feedback, I'm happy to hear it, but I don't know what e.g. "a little more focus and polish" means.

Comment by Aaron Gertler (aarongertler) on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-17T10:34:05.800Z · EA · GW

The 2019 'spike' you highlight doesn't represent higher overall spending — it's a quirk of how we record grants on the website.

Each program officer has an annual grantmaking "budget", which rolls over into the next year if it goes unspent. The CJR budget was a consistent ~$25 million/year from 2017 through 2021. If you subtract the Just Impact spin-out at the end of 2021, you'll see that the total grantmaking over that period matches the total budget.

So why does published grantmaking look higher in 2019?

The reason is that our published grants generally "frontload" payment amounts — if we're making three payments of $3 million in each of 2019, 2020, and 2021, that will appear as a $9 million grant published in 2019.

In the second half of 2019, the CJR team made a number of large, multi-year grants — but payments in future years still came out of their budget for those years, which is why the published totals look lower in 2020 and 2021 (minus Just Impact). Spending against the CJR budget in 2019 was $24 million — slightly under budget.

So the actual picture here is "CJR's budget was consistent from 2017-2021 until the spin-out", not "CJR's budget spiked in the second half of 2019".

Comment by Aaron Gertler (aarongertler) on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T19:51:50.759Z · EA · GW

(Writing from OP’s point of view here.)

We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.

We’ve left a few comments below.

*****

The importance of managed exits

We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:

  1. Helps grantees feel comfortable starting and scaling projects. We’ve seen grantees turn down increased funding because they were reluctant to invest in major initiatives; they were concerned that we might suddenly change our priorities and force them to downsize (firing staff, ending projects half-finished, etc.)
  2. Helps us hire excellent program officers. The people we ask to lead our grantmaking often have many other good options. We don’t want a promising candidate to worry that they’ll suddenly lose their job if we stop supporting the program they work on.

Exiting a program requires balancing:

  • the cost of additional below-the-bar spending during a slow exit;
  • the risks from a faster exit (difficulty accessing grant opportunities or hiring the best program officers, as well as damage to the field itself).

We launched the CJR program early in our history. At the time, we knew that committing to causes was important, but we had no experience in setting expectations about a program’s longevity or what an exit might look like. When we decided to spin off CJR, we wanted to do so in a way that inspired trust from future grantees and program staff. In the end, we struck what felt to us like an appropriate balance between “slow” and “fast”.[1]

It’s plausible that we could have achieved this trust by investing less money and more time/energy. But at the time, we were struggling to scale our organizational capacity to match our available funding; we decided that other capacity-strained projects were a priority.

*****

Open Phil is not a unitary agent

Running an organization involves making compromises between people with different points of view — especially in the case of Open Phil, which explicitly hires people with different worldviews to work on different causes. This is especially true for cases where an earlier decision has created potential implicit commitments that affect a later decision.

I would avoid trying to model Open Phil (or other organizations) as unitary agents whose actions will match a single utility function. The way we handle one situation may not carry over to other situations.

If this dynamic leads you to put less “trust” in our decisions, we think that’s a good thing! We try to make good decisions and often explain our thinking, but we don’t think others should be assuming that all of our decisions are “correct” (or would match the decisions you would make if you had access to all of the relevant info).

*****

“By working in this area, one could gain leverage, for instance [...] leverage over the grantmaking in the area, by seeding Just Impact.”

Indeed, part of our reason for seeding Just Impact was that it could go on to raise a lot more money, resulting in a lot of counterfactual impact. That kind of leverage can take funding from below the bar to above it.

*****

Open Philanthropy might gain experience in grantmaking, learn information, and acquire expertise that would be valuable for other types of giving. In the case of criminal justice reform, I would guess that the specific cause officers—rather than Open Philanthropy as an institution—would gain most of the information. I would also guess that the lessons learnt haven’t generalized to, for instance, pandemic prevention funding advocacy.

This doesn’t accord with our experience. Over six years of working closely with Chloe, we learned a lot about effective funding in policy and advocacy in ways we do expect to accrue to other focus areas. She was also a major factor when we updated our grantmaking process to emphasize the importance of an organization's leadership for the success of a grant. 

It’s possible that we would have learned these lessons otherwise, but given that Chloe was our first program officer, a disproportionate amount of organizational learning came from our early time working with her, and those experiences have informed our practices.

  1. ^

    Note that when we launched our programs in South Asian Air Quality and Global Aid Policy, we explicitly stated that we "expect to work in [these areas] for at least five years". This decision comes from the experience we’ve developed around setting expectations.

Comment by Aaron Gertler (aarongertler) on Where can I learn about how DALYs are calculated? · 2022-06-12T20:23:07.605Z · EA · GW

This is excellent, thanks!

These two papers, in particular, were what I was looking for. The corresponding information on QALYs was also great.

(For future readers of my post, the relevant info is under the "descriptive system" and "valuation methods" subheadings in Derek's post.)

Comment by Aaron Gertler (aarongertler) on Where can I learn about how DALYs are calculated? · 2022-06-12T20:06:04.324Z · EA · GW

Thanks! The correlation graphs were helpful to see, though I'm sad about the muddled results from the graph in the updated section.

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-11T01:58:36.441Z · EA · GW

This is very good feedback — I'll look into making that change.

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-11T01:50:19.783Z · EA · GW

Thanks for all of this feedback! Lots of good points to consider moving forward, and exactly the kind of thing I hoped to get from this post.

This website was a weird project — passed around between owners and developers over a period of ~2 years. I think there was a good amount of usability testing before my time, but I'm not sure how much of that was holistic and focused on the final design (vs. focused on specific elements). I agree with most of your points myself and also trust your experience in this area.

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-11T01:44:39.935Z · EA · GW

A couple of reports had their footnotes get jumbled — a fix is in progress. Thanks for the note!

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-11T01:44:17.125Z · EA · GW

Thanks for this feedback. The horizontal scroll is a matter of having long email addresses on those pages, and I'll clean that up after checking with page owners.

Agree with info density dropping on the grants page — I think there's an easy improvement or two to be made here (e.g. removing the "Learn More" arrow), which I'll be aiming to make as the new site owner (with input from others at OP).

Comment by Aaron Gertler (aarongertler) on Where can I learn about how DALYs are calculated? · 2022-06-11T01:25:55.163Z · EA · GW

Thanks for the link! I was aware of the most recent study, but you prompted me to dig deep and see what they said about their survey methodology. 

The most relevant bits I found were sections 4.8 and 4.8.1 in this PDF, which describe multiple surveys done across a bunch of countries. 

I'm still not sure where to find actual response counts by country or demographic data on respondents — it's easy to find tons of data on how different health issues are ranked and how common they are, but not to find a full "factory tour" of how the estimates were put together. I'd still be interested in more data on those points (I have to imagine it's buried somewhere in those 1800 pages).

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-09T22:29:28.474Z · EA · GW

Thanks, all resolved!

Comment by Aaron Gertler (aarongertler) on Announcing the launch of Open Phil's new website · 2022-06-08T00:27:49.125Z · EA · GW

The license still applies! We'll have it back up on the footer soon.

Comment by Aaron Gertler (aarongertler) on Little (& effective) altruism · 2022-06-03T08:07:22.647Z · EA · GW

This was a nice little post!

One of the biggest draws to the EA community for me — and something that's kept me involved — is how much small-scale altruism goes on here. Unsurprisingly, a movement founded on practical altruism draws a lot of people who enjoy helping and actually care about providing good help. 

This manifests in a bunch of ways. Two that come to mind: EA Global participants swarming me to help carry heavy conference items through a shopping mall when I was at CEA, and a bunch of cases where someone in the community encountered a personal issue and got massive support from their extended social network — here's one example.

*****

I've met people who seem to get by without caring much about small-scale altruism (they are good at fixing their eyes on the biggest problems and attacking them relentlessly). But for many people, I think that small-scale altruism reinforces the bigger stuff. A habit of small good deeds helps you maintain your altruistic character, self-identity, and motivation. 

(Picking up some trash in my apartment complex was among the most satisfying altruistic things I've ever done, even though my donations do much more actual good.)

That said, there's no reason you can't start working on the big stuff alongside the small stuff. Donations, self-education, career planning, and small good deeds can all be part of a balanced EA diet (no false pride required).

Comment by Aaron Gertler (aarongertler) on "Big tent" effective altruism is very important (particularly right now) · 2022-05-28T00:37:03.443Z · EA · GW

I don't share your view about what a downvote means.

What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis?

It'd be interesting to have some stats on how people on the forum interpret it.

Why not create a poll? I would, but I'm not sure exactly which question you'd want asked.

Most(?) readers won't know who either of them is, not to mention their relationship.

Which brings up another question — to what extent should a comment be written for an author vs. the audience? 

Max's comment seemed very directed at Luke — it was mostly about the style of Luke's writing and his way of drawing conclusions. Other comments feel more audience-directed. 

Comment by Aaron Gertler (aarongertler) on Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing · 2022-05-26T11:43:37.162Z · EA · GW

The flower was licensed from this site.

The designer saw and appreciated this comment, but asked not to be named on the Forum.

Comment by Aaron Gertler (aarongertler) on "Big tent" effective altruism is very important (particularly right now) · 2022-05-26T11:34:21.159Z · EA · GW

I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". *

I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote.

If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.

 

*This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.

Comment by Aaron Gertler (aarongertler) on EA is more than longtermism · 2022-05-26T11:20:43.567Z · EA · GW

I'll read any reply to this and make sure CEA sees it, but I don't plan to respond further myself, as I'm no longer working on this project. 

 

Thanks for the response. I agree with some of your points and disagree with others. 

To preface this, I wouldn't make a claim like "the 3rd edition was representative for X definition of the word" or "I was satisfied with the Handbook when we published it" (I left CEA with 19 pages of notes on changes I was considering). There's plenty of good criticism that one could make of it, from almost any perspective.

It’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than a content linked to at the bottom of the “further reading” section.

I agree.

But much, maybe most, of the "essential" reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage.  I’d also put  “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.

Many of these have ideas that can be applied to either perspective. But the actual things they discuss are mostly near-term causes. 

  • "On Fringe Ideas" focuses on wild animal welfare.
  • "We are in triage" ends with a discussion of global development (an area where the triage metaphor makes far more intuitive sense than it does for longtermist areas).
  • "Radical Empathy" is almost entirely focused on specific neartermist causes.
  • "Can one person make a difference" features three people who made a big difference — two doctors and Petrov. Long-term impact gets a brief shout-out at the end, but the impact of each person is measured by how many lives they saved in their own time (or through to the present day).

This is different from e.g. detailed pieces describing causes like malaria prevention or vitamin supplementation. I think that's a real gap in the Handbook, and worth addressing.

But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).

However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.

By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” section is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.

I agree that the reading in these sections is more focused. Nonetheless, I still feel like there's a decent balance, for reasons that aren't obvious from the content alone:

  • Most people have a better intuitive sense for neartermist causes and ideas. I found that longtermism (and AI specifically) required more explanation and discussion before people understood them, relative to the causes and ideas mentioned in the first three weeks. Population ethics alone took up most of a week.
  • "Longtermist" causes sometimes aren't. I still don't quite understand how we decided to add pandemic prevention to the "longtermist" bucket. When that issue came up, people were intensely interested and found the subject relative to their own lives/the lives of people they knew. 
    • I wouldn't be surprised if many people in EA (including people in my intro fellowships) saw many of Toby Ord's "policy and research ideas" as competitive with AMF just for saving people alive today.
    • I assume there are also people who would see AMF as competitive with many longtermist orgs in terms of improving the future, but I'd guess they aren't nearly as common.

“Pascal’s mugging” is relevant to, but not specific to, longtermism

I don't think I've seen Pascal's Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand for where people have applied the idea to a neartermist cause?

"The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se,  it more argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research. 

I agree. I wouldn't think of that piece as critical of longtermism.

As far as I can tell, no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really really hard to influence the far future so we should be skeptical of our ability to do so).

I haven't gone back to check all the material, but I assume you're correct. I think it would be useful to add more content on this point.

This is another case where my experience as a facilitator warps my perspective; I think both of my groups discussed this, so it didn't occur to me that it wasn't an "official" topic.

Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole. 

I agree. That wasn't the purpose of selecting test readers; I mentioned them only because some of them happened to make useful suggestions on this front.

While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated). 

I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, the other seemed excited/interested but never wound up sending anything.

A 50% useful-response rate isn't bad, and makes me wish I'd sent more of those emails. My excuse is the dumb-but-true "I was busy, and this was one project among many".

(As an aside, if someone wanted to draft a near-term-focused version of the Handbook, I think they'd have a very good shot at getting a grant.) 

I’d feel better about the process if, for example, you’d posted in poverty and animal focused Facebook groups and offered to pay people (like the test readers were paid) to weigh in on whether the handbook represented their cause appropriately. 

I'd probably have asked "what else should we include?" rather than "is this current stuff good?", but I agree with this in spirit.

(As another aside, if you specifically have ideas for material you'd like to see included, I'd be happy to pass them along to CEA — or you could contact someone like Max or Lizka.)

Comment by Aaron Gertler (aarongertler) on "Big tent" effective altruism is very important (particularly right now) · 2022-05-25T00:23:29.822Z · EA · GW

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.

So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well.  Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).

In the end, I think it's down to individual users to take their best guess at whether saying "I downvoted" or "I upvoted" would be helpful in a given case. And I'm still not sure whether having more such comments would be a net positive — probably depends on circumstance.

***

Max having a senior position in the community is also a complicated thing. On the one hand, there's a risk that anything he says will be taken very seriously and lead to reactions he wouldn't want. On the other hand, it seems good for leaders to share their honest opinions on public platforms (rather than doing everything via DM or deliberately softening their views).

There are still ways to write better or worse comments, but I thought Max's was reasonable given the balancing act he's trying to do (and the massive support Luke's post had gotten already — I'd feel differently if Max had been joining a pile-on or something).

Comment by Aaron Gertler (aarongertler) on EA is more than longtermism · 2022-05-23T03:13:00.324Z · EA · GW

While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.

I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn't reply. I also had my version reviewed by a dozen test readers (at least three readers for each section), who provided additional feedback on all of the material. 

I incorporated many of the suggestions I received, though at this point I don't remember which came from Michael, Sella, or other readers. I also made many changes on my own.

 

It's reasonable to argue that I should have reached out to even more people, or incorporated more of the feedback I received. But I (and the other people who worked on this at CEA) were very aware of representativeness concerns. And I think the 3rd edition was a lot more balanced than the 2nd edition. I'd break down the sections as follows:

  • "The Effectiveness Mindset", "Differences in Impact", and "Expanding Our Compassion" are about EA philosophy with a near-term focus (most of the pieces use examples from near-term causes, and the "More to Explore" sections share a bunch of material specifically focused on anima welfare and global development).
  • "Longtermism" and "Existential Risk" are about longtermism and X-risk in general.
  • "Emerging Technologies" covers AI and biorisk specifically.
    • These topics get more specific detail than animal welfare and global development do if you look at the required reading alone. This is a real imbalance, but seems minor compared to the imbalance in the 2nd edition. For example, the 3rd edition doesn't set aside a large chunk of the only global health + development essay for "why you might not want to work in this area".
  • "What might we be missing?" covers a range of critical arguments, including many against longtermism!
    • Michael Plant seems not to have noticed the longtermism critiques in his comment, though they include "Pascal's Mugging" in the "Essentials" section and a bunch of other relevant material in the "More to Explore" section.
  • "Putting it into practice" is focused on career choice and links mostly to 80K resources, which does give it a longtermist tilt. But it also links to a bunch of resources on finding careers in neartermist spaces, and if someone wanted to work on e.g. global health, I think they'd still find much to value among those links.
    • I wouldn't be surprised if this section became much more balanced over time as more material becomes available from Probably Good (and other career orgs focused on specific areas).

In the end, you have three "neartermist" sections, four "longtermist" sections (if you count career choice), and one "neutral" section (critiques and counter-critiques that span the gamut of common focus areas).

Comment by Aaron Gertler (aarongertler) on Bad Omens in Current Community Building · 2022-05-22T21:12:34.039Z · EA · GW

This is a tricky question to answer, and there's some validity to your perspective here. 

I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration.

Imagine three people applying to jobs:

  • Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
  • Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
  • Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics

I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so crucial. 

I could imagine Carol beating Alice for an ops role.

But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I'd choose "epistemics" if my goal were to stand out in a bunch of different interview processes and end up with at least one job.

 

One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren't as sociable (but have great epistemics) go into other positions. This state of affairs allows for "EA leaders love epistemics" and "group leaders stand out for other traits" at the same time.

 

Finally, you mentioned "familiarity" as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders.

Common questions I see about group leaders include "could this person explain these topics in a nuanced way?" and "could this person successfully lead a deep, thoughtful discussion on these topics?" These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other "good epistemics" things.

Comment by Aaron Gertler (aarongertler) on Aaron Gertler's Shortform · 2022-05-20T09:04:01.801Z · EA · GW

Memories from starting a college group in 2014

In August 2014, I co-founded Yale EA (alongside Tammy Pham). Things have changed a lot in community-building since then, and I figured it would be good to record my memories of that time before they drift away completely.

If you read this and have questions, please ask!

 

Timeline

I was a senior in 2014, and I'd been talking to friends about EA for years by then. Enough of them were interested (or just nice) that I got a good group together for an initial meeting, and a few agreed to stick around and help me recruit at our activities fair. One or two of them read LessWrong, and aside from those, no one had heard of effective altruism.

The group wound up composed largely of a few seniors and a bigger group of freshmen (who then had to take over the next year — not easy!). We had 8-10 people at an average meeting.

Events we ran that first year included:

  • A dinner with Shelly Kagan, one of the best-known academics on campus (among the undergrad population). He's apparently gotten more interested in EA since then, but during the dinner, he seemed a bit bemused and was doing his best to poke holes in utilitarianism (and his best was very good, because he's Shelly Kagan).
  • A virtual talk from Rob Mather, head of AMF. Kelsey Piper was visiting from Stanford and came to the event; she was the first EA celebrity I'd met and I felt a bit star-struck.
  • A live talk from Julia Wise and Jeff Kaufman (my second and third EA celebrities).  They brought Lily, who was a young toddler at the time. I think that saying "there will be a baby!" drew nearly as many people as trying to explain who Jeff and Julia were. This was our biggest event, maybe 40 people.
  • A lunch with Mercy for Animals — only three other people showed up.
  • A dinner with Leah Libresco, an atheist blogger and CFAR instructor who converted to Catholicism before it was cool. This was a weird mix of EA folks and arch-conservatives, and she did a great job of conveying EA's ideas in a way the conservatives found convincing.
  • A mixer open to any member of a nonprofit group on campus. (I was hoping to recruit their altruistic members to do more effective things — this sounds more sinister in retrospect than it did at the time.)
    • We gained zero recruits that day, but — wonder of wonders — someone's roommate showed up for the free alcohol and then went on to lead the group for multiple years before working full-time on a bunch of meta jobs. This was probably the most impactful thing I did all year, and I didn't know until years later.
  • A bunch of giving games, at activities fairs and in random dining halls. Lots of mailing-list signups, reasonably effective, and sponsored by The Life You Can Save — this was the only non-Yale funding we got all year, and I was ecstatic to receive their $300.
    • One student walked up, took the proffered dollar, and then walked away. I was shook.

We also ran some projects, most of which failed entirely:

  • Trying to write an intro EA website for high school students (never finished)
  • Calling important CSR staff at major corporations to see if they'd consider working with EA charities. It's easy to get on the phone when you're a Yale student, but it turns out that "you should start funding a strange charity no one's ever heard of" is not a compelling pitch to people whose jobs are fundamentally about marketing.
  • Asking Dean Karlan, development econ legend, if he had ideas for impactful student projects.
    • "I do!"
    • Awesome! What is it? 
    • "Can you help me figure out how to sell 200,000 handmade bags from Ghana?"
    • Um... thanks?
      • We had those bags all year and never even tried to sell them, but I think Dean was just happy to have them gone. No idea where they wound up.
  • Paraphrased ideas that we never tried:
    • See if Off! insect repellant (or other mosquito-fighting companies) would be interested in partnering with the Against Malaria Foundation?
    • Come up with a Christian-y framing of EA, go to the Knights of Columbus headquarters [in New Haven], and see if they'll support top charities?
    • Benefit concert with the steel drum band? [Co-president Pham was a member.]
    • Live Below the Line event? [Dodged a bullet.]
    • Write EA memes! [Would have been fun, oh well.]
    • The full idea document is a fun EA time capsule.
  • The only projects that achieved anything concrete were two fundraisers — one for the holidays, and one in memory of Luchang Wang, an active member (and fantastic person) whose death cast a shadow over the second half of the year. We raised $10-15k for development charities, of which maybe $5k was counterfactual (lots came from our members).
  • Our last meeting of the year was focused on criticism — what the group (and especially me) didn't do well, and how to improve things. I don't remember anything beyond that.
  • The main thing we accomplished was becoming friends. My happiest YEA-related journal entries all involve weird conversations at dinner or dorm-room movie nights. By the end of that year, I'd become very confident that social bonding was a better group strategy than direct action.

 

What it was like to run a group in 2014: Random notes

  • I prepared to launch by talking to 3-4 leaders at other college groups, including Ben Kuhn, Peter Wildeford, and the head of a Princeton group that (I think) went defunct almost immediately. Ben and Peter were great, but we were all flying by the seats of our pants to some degree.
  • While I kind of sucked at leading, EA itself was ridiculously compelling. Just walking through the basic ideas drove tons of people to attend a meeting/event (though few returned).
  • Aside from the TLYCS grant and some Yale activity funding, I paid for everything out of pocket — but this was just occasional food and maybe a couple of train tickets. I never even considered running a retreat (way too expensive).
  • Google Docs was still new and exciting back then. We didn't have Airtable, Notion, or Slack.
  • I never mention CEA in my journal. I don't think I'd really heard of them while I was running the group, and I'm not sure they had group resources back then anyway.
  • Our first academic advisor was Thomas Pogge, an early EA-adjacent philosopher who melted from public view after a major sexual harassment case. I don't think he ever responded to our very awkward "we won't be keeping you as an adviser" email.

 

But mostly, it was really hard

 The current intro fellowships aren't perfect, and the funding debate is real/important, but oh god things are so much better for group organizers than they were in 2014.

I had no idea what I was doing. 

There were no reading lists, no fellowship curricula, no facilitator guides, no nothing. I had a Google doc full of links to favorite articles and sometimes I asked people to read them.

I remember being deeply anxious before every meeting, event, and email send, because I was improvising everything and barely knew what we were supposed to be doing (direct impact? Securing pledges? Talking about cool blogs?).

Lots of people came to one or two meetings, saw how chaotic things were, and never came back. (I smile a bit when I see people complaining that modern groups come off as too polished and professional — that's not great, but it beats the alternative.)

I looked at my journal to see if the anxious memories were exaggerated. They were not. Just reading them makes me anxious all over again.

But that only makes it sweeter that Yale's group is now thriving, and that EA has outgrown the "students flailing around at random" model of community growth.

Comment by Aaron Gertler (aarongertler) on Some potential lessons from Carrick’s Congressional bid · 2022-05-19T09:37:02.222Z · EA · GW

I'd recommend cross-posting your critiques of the "especially useful" post onto that post — will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.

Comment by Aaron Gertler (aarongertler) on Some potential lessons from Carrick’s Congressional bid · 2022-05-19T09:33:55.897Z · EA · GW

Thanks for sharing all of this!

I'm curious about your fear that these comments would negatively affect Carrick's chances. What was the mechanism you expected? The possibility of reduced donations/volunteering from people on the Forum? The media picking up on critical comments?

If "reduced donations" were a factor, would you also be concerned about posting criticism of other causes you thought were important for the same reason?  I'm still working out what makes this campaign different from other causes (or maybe there really are similar issues across a bunch of causes). 

 

One thing that comes to mind is time-sensitivity: if you rethink your views on a different cause later, you can encourage more donations to make up for a previous reduction. If you rethink views on a political campaign after Election Day, it's too late. 

If that played a role, I can think of other situations that might exert the same pressure — for example, organizations running out of runway having a strong fundraising advantage if people are worried about dooming them. Not sure what to do about that, and would love to hear ideas (from anyone, this isn't specifically aimed at Michael).

Comment by Aaron Gertler (aarongertler) on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T23:12:13.501Z · EA · GW

I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.

What are examples of "pressure toward conformity" or "acting like jerks" that you saw among "central people in EA orgs"? Are you counting the people running the campaign as “central”? (I do agree with some of Matthew’s points there.)

I guess you could say that public support for Carrick felt like "pressure". But there are many things in EA that have lots of support and also lots of pushback (e.g. community-building strategies, 80K career advice). Lots of people are excited about higher funding levels in EA; lots of people are worried about it; vigorous discussion follows. 

Did something about the campaign make it feel different? 

*****

Habryka expressed concern that negative evidence on the campaign would be "systematically filtered out". This kind of claim is really hard to disprove. If you don't see strong criticism of X from an EA perspective, this could mean any of:

  1. People are critical, but self-censor for the sake of their reputation or "the greater good"
  2. People are critical, but no one took the time to write up a strong critical case
  3. People aren't critical because they defer too much to non-critical people
  4. People aren't critical because they thought carefully about X and found the pro-X arguments compelling

I think that (2) and (4) are more common, and (1) less common, than many other people seem to think. I do think that (3) is common, and I wish it were less so, but I don't see that as "pressure".

 

If someone had published a post over the last few months titled "The case against donating to the Flynn campaign", and it was reasonably well-written, I think it would have gotten a ton of karma and positive comments — just like this post or this post or this post.

Why did no one write this?

Well, the author would need (a) the time to write a post, (b) good arguments against donating, (c) a motive (improving community epistemics, preventing low-impact donations, getting karma), and (d) comfort with publishing the post (that is, not enough self-censorship to override (c)). 

I read Habryka as believing that there are (many?) people who fulfill (a), (b), and (c) but are stopped by (d). My best guess is that for many issues, including the Flynn campaign, no one fulfilled all of (a), (b), and (c), which left (d) irrelevant. 

I'm not sure how to figure out which of us is closer to the truth. But I will note that writing a pseudonymous post mostly gets around (d), and lots of criticism is published that way.

(If you are someone who was stopped by (d), let me know! That's really important evidence. I'm also curious why you didn't write your post under a pseudonym.)*

I also hope the red-teaming contest will help us figure this out, by providing more people with a reason to conduct and publish critical research. If some major topic gets no entries, that seems like evidence for (b) or (d), though with the election over I don't expect anyone to write about the Flynn campaign anyway.

 

*I've now heard from one person who said that (d) was one factor in why they didn't leave comments — a mix of not wanting to make other commenters angry and not wanting to create community drama (the drama would happen even with a pseudonym).

Given that this response came in soon after I made my comment, I've updated moderately toward the importance of (d), though I'm still unsure what fraction of (d) is about actual Forum comments vs. the author's reputation/relationships outside of the Forum.

Comment by Aaron Gertler (aarongertler) on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T22:24:43.295Z · EA · GW

Here are some impressions of him from various influential Oregonians. No idea how these six were chosen from the "more than a dozen" originally interviewed.

Comment by Aaron Gertler (aarongertler) on Choosing causes re Flynn for Oregon · 2022-05-18T05:48:03.900Z · EA · GW

Thanks for writing this. While I don’t personally enjoy being featured, I appreciate the post as a Forum reader and former mod.

A few notes on my approach to donating, since I was quoted:

  • Before choosing to donate, I spoke with two members of Carrick's campaign team about their plans and what the early donations would go toward, did some background reading on the district and Oregon politics, and looked over Carrick's campaign website and work history.
  • My view wound up looking similar to Zach's (updated) view — Carrick's chances weren't great, but this still seemed like a strong opportunity. I didn't have an estimate for "marginal % increase in his chances per dollar"; had I been asked to make one, I think I'd have come out somewhat lower than ASB. But I still thought the value proposition was strong.
  • I donated before ASB's post went up, and well before I knew that Protect Our Future would join in. Had I known that a PAC would spend millions, there's some chance I wouldn't have donated (my views would have changed, but it's hard to say exactly how).
    • It's arguable that I should have considered the chance that some wealthy donor would provide PAC support to Flynn. I'll take (dis)credit for that. But it honestly never crossed my mind; I'd just been on the phone with an obviously scrappy and overstretched campaign, and I saw a time-sensitive opportunity for marginal impact.
  • Looking back, I wish I'd worded my comment more carefully. Rather than "I recommend this highly to people looking for impactful donations", I should have said "I highly recommend that people looking for impactful donations consider this as one option". Or just “I made a donation and I’m excited to see where this goes”, full stop.
    • I didn't expect or intend my comment to convince anyone to donate — I have no political experience or research background, and I didn't include any models or estimates! I mostly wrote the comment out of excitement.
    • But I did want to be open about my choice (I share all my donations online), and I did think it would be valuable for more people to consider donating (and of course, to do their own research and thinking beforehand).

Other thoughts:

  • I was disappointed by some of the voting patterns I saw in the comments on ASB's post, including on Zach's comment. I was even more disappointed in the patterns on this post; I upvoted or strong-upvoted many of _pk's comments there, and reported one especially bad comment to the moderators.
  • Looking back, I could have posted a bounty for the best argument against donating to the campaign. I did offer funding to the red-teaming contest, but they didn’t end up needing it, and it would have been too late for the election. Something to do next time I share a donation in public, maybe…
  • I think that the “socially punished” sentence of Habryka's comment was wrong. I wouldn't expect any large-scale reputational hit* for anyone who argued against supporting Carrick, pointed out flaws in his work, etc., as long as the arguments were solid.
    • I do think low-quality support skates by in ways that low-quality criticism doesn’t, which is a problem, but a different problem.

*I won’t say “no reputational hit at all”, because thousands of people read the Forum and some of them would probably be annoyed. Public online discussion is rough.

Comment by Aaron Gertler (aarongertler) on Leftism virtue cafe's Shortform · 2022-05-15T09:17:41.135Z · EA · GW

I found this post harder to understand than the rest of the series. The thing you're describing makes sense in theory, but I haven't seen it in practice and I'm not sure what it would look like.

 

What EA-related lifestyle changes would other people find alienating? Veganism? Not participating in especially expensive activities? Talking about EA?

I haven't found "talking about EA" to be a problem, as long as I'm not trying to sell my friends on it without their asking first. I don't think EA is unique in this way — I'd be annoyed if my religious friends tried to proselytize to me or if my activist friends were pressuring me to come and protest with them.

If I talk about my job or what I've been reading lately in the sense of "here's my life update", that goes fine, because we're all sharing those kinds of life updates. I avoid the EA-jargon bits of my job and focus on human stories or funny anecdotes. (Similarly, my programmer friends don't share coding-related stories I won't understand.)

And then, when we're not sharing stories, we're doing things like gaming or hiking or remembering the good times, all of which seem orthogonal to EA. But all friendships are different, and I assume I'm overlooking obstacles that other people have encountered.

(Also, props for doing the research!)

Comment by Aaron Gertler (aarongertler) on Bad Omens in Current Community Building · 2022-05-13T09:11:44.811Z · EA · GW

This is a great post! Upvoted. I appreciate the exceptionally clear writing and the wealth of examples, even if I'm about 50/50 on agreeing with your specific points.

I haven't been involved in university community building for a long time, and don't have enough data on current strategies to respond comprehensively. Instead, a few scattered thoughts:

I was talking to a friend a little while ago who went to an EA intro talk and is now doing one of 80,000 Hours' recommended career paths, with a top score for direct impact. She’s also one of the most charismatic people I know, and she cares deeply about doing good, with a healthy practical streak.

She’s not an EA, and she’s not going to be. She told me that she likes the concept and the framing, and that since the intro talk she’s often found that when faced with big ethical questions it’s useful to ask “what would an EA do”. But she’s not an EA.

I don't like using "EA" as a noun. But if we do want to refer to some people as "EAs", I think your friend has the most important characteristics described by that term.

Using EA's core ideas as a factor in big decisions + caring a lot about doing good + strong practical bent + working on promising career path = yes, you are someone who practices effective altruism (which seems, to me, like the best definition of "an EA"). You don't have to attend the conferences or wear the t-shirts to qualify.

Second, despite some pushback, current EA community building doctrine seems to focus heavily on producing ‘Highly Engaged EAs’ (HEAs). It is relatively easy to tell if someone is a HEA. 

Not sure about current doctrine, but my impression is that "HEA" isn't meant to be a binary category. Based on your statement:

Is one EA in government policy worth more than a hundred civil servants who, though not card-carrying EAs, have seriously considered the ideas and are in touch with engaged EAs who can call them up if need be? What about great managers and entrepreneurs?

I'd be surprised if even the most literal interpretation of any community-building advice would have an organizer favoring "one person in policy" over "one hundred policy people being interested in EA" (feels an order of magnitude off, maybe?).

Bolding for emphasis: People often overestimate how important "full-time EA people" are to the movement, relative to people who "have seriously considered the ideas and are in touch".

That’s largely because people who discuss EA online are frequently in the first group. But when it comes to impactful projects, a massive amount of work is done by people who are very focused on their own work and less interested in EA qua EA. 

When I see my contacts excitedly discussing a project, it often looks like "this person who was briefly involved with group X/is friends with person Y is now pursuing project Z, and we think EA played a role". The person in question will often have zero connection with "the EA community" at large, no Forum account, etc.

You see less of this on the Forum because “this person got a job/grant” and “this person has started a new project” aren’t exciting posts unless the person in question writes them. And the non-Forum-y people don’t write those posts!

I asked around, and quickly stumbled upon some people who confidently told me that EA was an organisation that wanted to trick me into signing away my future income to them in exchange for being part of their gang. 

I got this reaction a lot when I was starting up Yale EA in 2014, despite coming up with all my messaging alone and having no connection to the wider EA community. Requests to donate large amounts of money are suspicious!

I'd expect to see less of this reaction now that donating and pledge-taking get less emphasis than in 2014, especially in college groups. But I think it's hard to avoid while also trying to convey that the things we care about are really important.

 (Doesn't mean we shouldn't try, but I wouldn't see the “donations are a scam” perspective as strong evidence that organizers are making the wrong choices.)

If better epistemics trades off against getting more alignment researchers, maybe you think it’s not worth doing. However, it’s not clear at all that this is the case.

Almost everyone I've interacted with in EA leadership/CB is obsessed with good epistemics — they value them highly when recruiting/evaluating people, much more than any other personal trait (with rare exceptions, e.g. strong technical skills in roles where those are crucial).*

My impression is that they'd be happy to trade a bunch of alignment for epistemic skill/virtue at the margin for most people, as long as alignment didn't dip to the point where they had no interest in working on a priority problem.

This doesn't mean that current CB strategy is necessarily encouraging good epistemics. (I'm sure it varies dramatically between and within groups.) It's possible for a group’s strategy not to achieve the ends they want — and it's easier to Goodhart on alignment than epistemics, because the former is easier to measure.

But I am confident that leaders' true desire is "find people who have great epistemics [and are somewhat aligned]", not "find people who are extremely aligned [and have okay epistemics]".

*To clarify my perspective: I've seen discussion of 100+ candidates for jobs/funding in EA. Alignment comes up often, but mostly as a checkbox/afterthought, while "how the person thinks" is the dominant focus most of the time. Many terms are used — clear thinking, independent thinking, nuanced thinking — but they point to the same cluster of traits.

You should expect there to be whole types of reason (like ‘you guys seem way more zealous than I’m comfortable with’) which you’ll be notably less likely to hear about relative to how much people think it, especially if you’re not prioritising getting this kind of feedback.

This is very true! 

One good way to hear a wider range of feedback is to have friends and activities totally separate from your EA work who can give you a more "normal" perspective on these things. This was automatic for me in college; our EA group was tiny and there wasn't much to do, so we all had lots of other stuff going on, and I’d been making friends for years before I discovered EA. 

I gather that EA groups are now, in some cases, more like sports teams or music groups — things that can easily consume most of someone's non-class hours and leave them in a place where most of their friends are in the same club. It’s good to have a close-knit group of altruistic friends, but spending all of your time around other people in EA will limit your perspective; guard against this! 

(Also, having hobbies not related to your life's central purpose seems healthy for a lot of reasons.)

Assume that people find you more authoritative, important, and hard-to-criticise than you think you are. It’s usually not enough to be open to criticism - you have to actually seek it out or visibly reward it in front of other potential critics.

Also very true! 

Flagging this because it is very hard to account for properly; I've had to adjust my expectation of how hard-to-criticize I am several times (especially after I started getting jobs within EA).

Comment by Aaron Gertler (aarongertler) on Bad Omens in Current Community Building · 2022-05-12T17:29:21.765Z · EA · GW

Privately discussed info in a CRM seems like an invasion of privacy.

I've seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.

Doesn't seem like an invasion of privacy for an org's leaders to have conversations like "this person needs to write one more article to join staff" or "this person was hanging out alone for most of the last event, we should try and help them feel more comfortable next time".

Comment by Aaron Gertler (aarongertler) on Bad Omens in Current Community Building · 2022-05-12T17:26:42.045Z · EA · GW

I've seen people make these complaints about EA since it first came to exist. 

As EA becomes bigger and better-known, I expect to see a higher volume of complaints even if the average person's impression remains the same/gets a bit better (though I'm not confident that's the case either).

This includes groups with no prior EA contact learning about it and deciding they don't like it — but I think they'd have had the same reaction at any point in EA's history.

Are there notable people or groups whose liking/trust of EA has, in your view, gone down over time?

Comment by Aaron Gertler (aarongertler) on EA is more than longtermism · 2022-05-06T21:10:38.404Z · EA · GW

The 80K board is an understandable proxy for "jobs in EA". But that description can be limiting.

Many non-student EA Global attendees had jobs at organizations that most wouldn't label "EA orgs", and that nevertheless fit the topics of the conference. 

Examples:

  • The World Bank
  • Schmidt Futures
  • Youth for the Treaty on the Prohibition of Nuclear Weapons
  • UK Department of Health and Social Care
  • US Treasury Department
  • House of Commons
  • Development Innovation Lab
  • A bunch of think tanks

Some of these might have some of their jobs advertised by 80K, but there are also tons of jobs at those places that wouldn't make the 80K job board* but that nevertheless put people in an excellent position to make an impact across any number of areas. And because global development is bigger than all the LT areas put together**, I expect there to be many more jobs on the non-LT side in this category.

*Not necessarily because 80K examined them and found them wanting, but as (I'd expect) a practical matter — there are 157 open jobs at the World Bank right now, and I wouldn't expect 80K to evaluate all of them (or turn the World Bank into 15% of the whole job board). 

**Other than biosecurity, maybe? As a quick sanity-check, USAID's budget is ~4x the CDC budget; results may vary across countries and international institutions.

Comment by Aaron Gertler (aarongertler) on EA is more than longtermism · 2022-05-06T20:54:25.300Z · EA · GW

A count of topics at EAG and EAGx's from this year show a roughly 3:1 AI/longtermist to anything else ratio

I'm not sure where to find agendas for past EAGx events I didn't attend. But looking at EAG London, I get a 4:3 ratio for LT/non-LT (not counting topics that fit neither "category", like founding startups):

LT

  • "Countering weapons of mass destruction"
  • "Acquiring and applying information security skills for long-term impact"
  • "How to contribute to the UN's 'Our Common Agenda' report" (maybe goes in neither category? Contributions from EA people so far have been LT-focused, but I assume the process is the same for anything someone wants to add)
  • "Exploring AI Futures with Role Play"
  • "Speed meeting + discussion: biosecurity and engineering interventions"
  • "Ambitious thinking in longtermist community building"
  • "What's new in biosecurity? Concepts and priorities for the coming decade"
  • "Transformer interpretability tool walk-through"
  • "Longtermist talent search"
  • "Workshop: New research topics in global priorities research" (maybe goes in neither category, most speakers from LT-focused orgs but topics were broad/varied)
  • "So, how freaked out should we be about AI?"
  • "Workshop: Possible research projects for AI safety"

Non-LT

  • "Dispensers for Safe Water Experiments"
  • "Community Event: Alt Proteins"
  • "The state of aquatic animal advocacy"
  • "Workshop: Wild animal advocacy"
  • "Workshop: The international development sector should be much better at learning"
  • "Are you a good fit for nonprofit entrepreneurship?" (talk by Charity Entrepreneurship, I assume it was non-LT-focused)
  • "The changing landscape of climate and energy and its implications for high-impact climate philanthropy" (maybe goes in neither category, but feels pretty distinct from what people mostly think about when they discuss LT in EA)
  • "How to present big ideas and complex data to the public", Our World in Data (maybe goes in neither category?)
  • "Community Event: EA Africa" (maybe goes in neither category, but I'd guess that community building in Africa ends up leading to a lot of additional staff/projects in the global dev/animal welfare spaces relative to LT spaces)

Note that I've focused on talks and events rather than office hours, since the latter feel more like meetings than "official topics" to me. The LT/non-LT split for office hours didn't seem very different, anyway.

I don't know whether there's a public agenda I can link to, but let me know if you think I'm missing or miscategorizing something. 

(I also remain confused as to whether things like bio or nuclear war actually fit cleanly into longtermism or whether their "focus" is best thought of as split between LT/non-LT, these are not natural categories.)

Comment by Aaron Gertler (aarongertler) on Aaron Gertler's Shortform · 2022-05-02T10:36:53.776Z · EA · GW

If you want recommendations, just take the first couple of items in each category. They are rated in order of how good I think they are. (That's if you trust my taste — I think most people are better off just skimming the story summaries and picking up whatever sounds interesting to them.)

Comment by Aaron Gertler (aarongertler) on You Should Write a Forum Bio · 2022-04-30T02:27:26.340Z · EA · GW

Huzzah! Hope you enjoy your time here.

Comment by Aaron Gertler (aarongertler) on NunoSempere's Shortform · 2022-04-30T02:26:57.458Z · EA · GW

We publish our giving to political causes just as we publish our other giving (e.g. this ballot initiative).

As with contractor agreements, we publish investments and include them in our total giving if they are conceptually similar to grants (meaning that investments aren't part of the gap James noted).  You can see a list of published investments by searching "investment" in our grants database.

Comment by Aaron Gertler (aarongertler) on NunoSempere's Shortform · 2022-04-30T00:17:49.530Z · EA · GW

We're still in the process of publishing our 2021 grants, so many of those aren't on the website yet. Most of the yet-to-be-published grants are from the tail end of the year — you may have noticed a lot more published grants from January than December, for example. 

That accounts for most of the gap. The gap also includes a few grants that are unusual for various reasons (e.g. a grant for which we've made the first of two payments already but will only publish once we've made the second payment a year from now). 

We only include contractor agreements in our total giving figures if they are conceptually very similar to grants (Kurzgesagt is an example of this). Those are also the contractor agreements we tend to publish. In other words, an agreement that isn't published is very unlikely to show up in our total giving figures.

Comment by Aaron Gertler (aarongertler) on Concerns about AMF from GiveWell reading - Part 4 · 2022-04-29T18:13:39.202Z · EA · GW

A belated thanks for this reply! I've reached the end of my knowledge/spare time for research at this point, but I'll keep an eye out for any future posts of yours on these topics.

Comment by Aaron Gertler (aarongertler) on Aaron Gertler's Shortform · 2022-04-25T01:09:44.517Z · EA · GW

The group was small and didn't accomplish much, and this was a long time ago. I don't think the post would be interesting to many people, but I'm glad you enjoyed reading it!

Comment by Aaron Gertler (aarongertler) on Aaron Gertler's Shortform · 2022-04-24T07:31:45.995Z · EA · GW

Memories from running a corporate EA group

From August 2015 - October 2016, I ran an effective altruism group at Epic, a large medical software corporation in Wisconsin. Things have changed a lot in community-building since then, but I figured it would be good to record my memories of that time, and what I learned.

If you read this and have questions, please ask!

Launching the group

  • I launched with two co-organizers, both of whom stayed involved with the group while they were at Epic (but who left the company after ~6 months and ~1 year, respectively, leaving me alone).
  • We found members by sending emails to a few company mailing lists for employees interested in topics like philosophy or psychology. I'm not sure how common it is for big companies to have non-work-related mailing lists, but they made our job much easier. We were one of many "extracurricular" groups at Epic (the others were mostly sports clubs and other outdoorsy things).
  • We had 40-50 people at our initial interest meeting, and 10-20 at most meetings after that.

Running the group

  • Meeting topics I remember:
    • A discussion of the basic principles of EA and things our group might be able to do (our initial meeting)
    • Two Giving Games, with all charities proposed by individual members who prepared presentations (of widely varying quality — I wish I'd asked people to present to me first)
    • A movie night for Life in a Day
    • Talks from:
      • Rob Mather, head of the Against Malaria Foundation
      • A member's friend who'd worked on public health projects in the Peace Corps and had a lot of experience with malaria
      • A member of the group who'd donated a kidney to a stranger (I think this was our best-attended event)
      • Unfortunately, Gleb Tsipursky
      • Someone from Animal Charity Evaluators (don't know who, organized after I had left the company)
  • We held all meetings at Epic's headquarters; most members lived in the nearby city of Madison, but two organizers lived within walking distance of Epic and didn't own cars, which restricted our ability to organize things easily. (I could have set up dinners and carpooled or something, but I wasn't a very ambitious organizer.)
  • Other group activities:
    • One of our other organizers met with Epic's head of corporate social responsibility to discuss EA. It didn't really go anywhere, as their current giving policy was really far from EA and the organizer came in with a fairly standard message that didn't account for the situation.
      • That said, they were very open to at least asking for our input. We were invited to leave suggestions on their list of "questions to ask charities soliciting Epic's support", and the aforementioned meeting came together very quickly. (The organizer kicked things off by handing Epic's CEO a copy of a Peter Singer book — don't remember which — at an intro talk for new employees. She had mentioned in her talk that she loved getting book recommendations, and he had the book in his bag ready to go. Not sure whether "always carry a book" is a reliable strategy, but it worked well in that case.)
    • Successfully lobbying Epic to add a global-health charity (Seva, I think) to the list of charities for which it matched employee donations (formerly ~100% US-based charities)
      • This was mostly just me filling out a suggestion form, but it might have helped to be able to point to a group of people who also wanted to support that charity.
      • Most of the charities on the list were specifically based in Wisconsin so employees could give as locally as possible. On my form, I pointed out that Epic had a ton of foreign-born employees, including many from developing countries, who might want to support a charity close to their own "homes". Not sure whether this mattered in Epic's decision to add the charity.
      • I suggested several charities, including AMF, but Epic chose Seva as the suggestion they implemented (possibly because they had more employees from India than Africa? Their reasoning was opaque to me)
    • An Epic spokesperson contacted the group when some random guy was found trying to pass out flyers for his personal anti-poverty project on a road near Epic. I clarified that he had no connection to us, and I don't remember any other followup.
  • Effects of the group:
    • I think ~3 members wound up joining Giving What We Can, and a few more made at least one donation to AMF or another charity the group discussed.
    • I found a new person to run the group before leaving Epic in November 2016, but as far as I know, it petered out after a couple more meetings.

Reflections // lessons learned

  • This was the only time I've tried to share EA stuff with an audience of working professionals. They were actually really into it! Meetings weren't always well-attended, but I got a bunch of emails related to effective giving, and lunch invitations from people who wanted to talk about it. In some ways, it felt like being on the same corporate "team" engendered a bit more natural camaraderie than being students at the same school — people thought it was neat that a few people at Epic were "charity experts", and they wanted to hear from us.
  • Corporate values... actually matter? Epic's official motto is "work hard, do good, have fun, make money" (the idea being that this is the correct ordering of priorities). A bunch of more senior employees I talked to about Epic EA took the motto quite seriously, and they were excited to see me running an explicit "do good" project. (This fact came up in a couple of my performance reviews as evidence that I was upholding corporate values.)
  • Getting people to stick around at the office after a day of work is not easy. I think we'd have had better turnout and more enthusiasm for the group if we'd added some meetups at Madison restaurants, or a weekend hike, or something else I'd have been able to do if I owned a car (or was willing to pay for an Uber, something which felt like a wasteful expense in that more frugal era of EA).

Comment by Aaron Gertler (aarongertler) on Aaron Gertler's Shortform · 2022-04-21T17:43:36.564Z · EA · GW

The book poses an interesting and difficult problem that characters try to solve in a variety of ways. The solution that actually works involves a bunch of plausible game theory and feels like it establishes a realistic theory of how a populous universe might work. The solutions that don't work are clever, but fail for realistic reasons. 

Aside from the puzzle element of the book, it's not all that close to ratfic, but the puzzle is what compelled me. Certainly arguable whether it belongs in this category.