Posts

An argument that EA should focus more on climate change 2020-12-08T02:48:06.251Z
fiction about AI risk 2020-11-12T22:36:18.066Z

Comments

Comment by Ann Garth on EA Debate Championship & Lecture Series · 2021-04-13T15:10:50.985Z · EA · GW

I did competitive college debate for four years (American Parliamentary format, which is similar to the BP format used in the EA Debate Championship but not identical) and I think that the extent to which it does/doesn’t encourage truth-seeking is less important than the way it pushes people to justify their values.

Oversimplifying broadly, debate has two layers: one is arguments about what the impacts of a certain idea/policy are likely to be, and the other is arguments about which impacts are more important (known as “weighing”). In order to win rounds, you have to win arguments at both levels. This means that debate requires people to engage with one of the issues most central to EA — a relatively consequentialist understanding of which issues matter most. In regular life you can say, “I support government funding for the arts because art is good” and not think very hard about how that trades off with, say, funding for healthcare. But if you do that in a debate round, the other team will point out the tradeoff, estimate the number of people who will die as a result of there being less funding for healthcare, and you will lose the round.

I think this is the main benefit of debate from an EA perspective, and I suspect that it has meaningful impacts on people who are forced to confront, over and over again in countless debate rounds, the actual effects (in lives lost and other very serious harms) of different ways of weighing between issues. Anecdotally, a higher-than-average percentage of the debaters I know are EAers, or at least interested in EA. And even debaters who don’t personally support EA very often use EA weighing arguments in rounds. As a result, for some people (I suspect many), debate is the first place they hear about EA. To me, this makes debate leagues a fertile recruiting ground for EA.

Comment by Ann Garth on You have more than one goal, and that's fine · 2020-12-27T04:42:42.715Z · EA · GW

Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and hold a maximalist moral mindset without spiraling into guilt, and I think it should be much more widely known in the community. I don't know if you ever ended up writing more about this, but if you did, I hope you'd consider publishing it -- I think it could help a lot of people!

Comment by Ann Garth on An argument that EA should focus more on climate change · 2020-12-27T01:30:14.223Z · EA · GW

Hi Rocket, thanks for sharing these thoughts (and I'm sorry it's taken me so long to get back to you)!

To respond to your specific points:

  1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
  2. It seems like considering co-benefits does affect tractability, but the tractability of these co-benefit issue areas, rather than of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.

I certainly agree with this -- I was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Certainly that should be evaluated on a case-by-case basis.

To be fair, other x-risks are also time-limited. Eg if nuclear war is currently going to happen in t years, then by next year we will only have t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, tractability diminishes the longer we wait, as well as the timeframe.

This is true (and very well-phrased!). I think there's some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I'll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.

Comment by Ann Garth on My mistakes on the path to impact · 2020-12-08T03:20:09.841Z · EA · GW

One data point: I recently got a job which, at the time I initially applied, I didn't really want. As I went through the interview process, and especially now that I've started, I've come to like it more than I thought I would based on the job posting alone.