Posts

Stephen Clare's Shortform 2022-06-14T11:13:25.653Z
How likely is World War III? 2022-02-15T15:09:04.902Z
Modelling Great Power conflict as an existential risk factor 2022-02-03T11:41:11.051Z
Can we drive development at scale? An interim update on economic growth work 2020-10-27T11:14:44.017Z
How good is The Humane League compared to the Against Malaria Foundation? 2020-04-29T13:40:38.361Z

Comments

Comment by Stephen Clare on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-25T17:36:18.379Z · EA · GW

Thanks for this, Michael. It's really valuable to have someone carefully digging into these results. After reading Stevenson and Wolfers I'd sort of dismissed the paradox. This updated me against that view and has me more worried again.

I think I have more credence than you do on the possibility that people's scales are shifting over time. In particular, questions like the Cantril ladder ask people to think about a 10/10 as the "best possible life". But with growth, it's plausible to me that the best possible life is getting better over time. Perhaps people are interpreting that as the best possible (attainable) life, rather than as the cosmically-absolute best possible life. And someone living the best possible (attainable) life in 2022 can go to space, travel the world, eat every kind of food, and access every entertaining movie and game ever made. None of these was possible in 1922, even for people living their best possible lives.

To account for this, people would have to be shifting their scales over time. Put differently: it's plausible to me that my 10/10 is different than my grandparents', and that in an objective sense my 10/10 is better than theirs.

Comment by Stephen Clare on New cause area: bivalve aquaculture · 2022-06-14T11:28:08.380Z · EA · GW

Yeah, I think this meme is both damaging and mistaken and I'm disappointed to see it crop up again here. There's plenty of evidence against such a broad assertion.

  • The Precipice dedicates an entire chapter to climate change, and I have it on good authority that climate change is discussed seriously in another important, upcoming EA book
  • Climate change has been discussed many times on the 80,000 Hours podcast, including extensively by Will MacAskill here
  • EA Funds lists the Founders Pledge Climate Change Fund on their website, and that Fund has raised millions of dollars for effective climate orgs
  • EA analysis and funding have been instrumental in supporting a dramatic scale-up of the Clean Air Task Force, one of the best climate organizations in the world

Comment by Stephen Clare on Grokking “Semi-informative priors over AI timelines” · 2022-06-14T11:17:55.021Z · EA · GW

Thanks for this, I think it deepened my understanding of Tom's model. It looks like a lot of work went into this post and I appreciate you taking the time to make your analysis so intelligible!

Comment by Stephen Clare on Stephen Clare's Shortform · 2022-06-14T11:13:25.830Z · EA · GW

I think it's possible there's too much promotion on the EA Forum these days. There are lots of posts announcing new organizations, hiring rounds, events, or opportunities. These are useful but not that informative, and they take up space on the frontpage. I'd rather see more posts about research, cause prioritization, critiques and redteams, and analysis. Perhaps promotional posts should be collected into a megathread, the way we do with hiring.

In general it feels like the signal-to-noise ratio on the frontpage is lower now than it was a year ago, though I could be wrong. One metric might be the number of comments: right now, 5/12 posts I see on the frontpage have 0 comments, and 11/12 have 10 comments or fewer.

Comment by Stephen Clare on Leftism virtue cafe's Shortform · 2022-06-13T10:59:52.404Z · EA · GW

One thought I had while reading this was just: you run slower during a marathon, but marathons are still really hard. 

Maybe this comment conflates working more than average with giving "everything ... including their soul and weekends"? 

It's tricky because different people perhaps need to hear different things here. I'd like to have a culture where it's possible for people to work normal hours in EA jobs. But I also know people who work more than average because they care deeply about their work and are ambitious, without seeming (to me at least) to be on the verge of crisis.

Comment by Stephen Clare on Things usually end slowly · 2022-06-08T10:43:00.799Z · EA · GW

wars happen much more quickly now (I’m not sure why - maybe because planes are faster than walking?) 

I think advances in strategy, automation, logistics, and transportation have a lot to do with this! And I do think there's a general lesson there - everything has been speeding up, so we should generally expect collapses today to happen faster than they happened in the past.

Comment by Stephen Clare on Things usually end slowly · 2022-06-08T10:40:29.718Z · EA · GW

Nice work Ollie, this is very thought-provoking. It got me thinking a lot more about plausible reference classes for human extinction.

As I've mentioned to you, I think individual species extinctions are a better reference class than mass extinction events. It's a shame you couldn't find a good source that summarizes how quickly species declines tend to happen. Individual species extinctions must happen faster than mass extinction events, since species collapses all occur within extinction events. And I strongly suspect that if we had data on them, we'd see that species tend to go extinct much faster than extinction events play out. There's selection bias at work, but I can recall seeing graphs of, e.g., global whale, elephant, and rhinoceros populations that show precipitous declines following an exogenous catastrophe (usually the introduction of humans, or the invention of a new technology like whaling ships).

Your discussion of civilizational decline timelines, on the other hand, does seem directly relevant. It would be great to see a database that tracks the duration of civilizational declines categorized by cause (where possible), to see if we can find more specific reference classes based on different risks!

Comment by Stephen Clare on Will there be an EA answer to the predictable famines later this year? · 2022-05-30T16:20:28.663Z · EA · GW

I'm not sure it's actually the case that interventions in temporary emergencies are "very likely" more cost-effective. Emergencies often lead to an influx of funds that local organizations struggle to absorb, and it's difficult to allocate funds efficiently. This GiveWell blog on the topic is somewhat dated, but I think the main points still stand.

Comment by Stephen Clare on I read Johannes Ackva's post on climate change which mentions three areas that can create global leverage. Is there a (possibly career-oriented) site that provides a higher level of detail and comparison between issues within climate, in a similar way that 80,000 Hours does for global existential problems? · 2022-05-26T08:30:10.675Z · EA · GW

The 80,000 Hours problem profile also has some career advice.

Comment by Stephen Clare on I read Johannes Ackva's post on climate change which mentions three areas that can create global leverage. Is there a (possibly career-oriented) site that provides a higher level of detail and comparison between issues within climate, in a similar way that 80,000 Hours does for global existential problems? · 2022-05-25T12:43:14.898Z · EA · GW

Have you read the full Founders Pledge reports on this topic? I'm not sure exactly which post you're referring to, but the full report documents go into a lot more detail.

Comment by Stephen Clare on Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures · 2022-05-24T13:22:57.265Z · EA · GW

Major kudos to both of you for this bet. I'll probably refer to this thread in future as a great example of respectful, productive disagreement!

Comment by Stephen Clare on The real state of climate solutions - want to help? · 2022-05-22T11:26:11.522Z · EA · GW

You might be interested in reading some existing discussion of Drawdown, and its limitations, in the comments here.

Comment by Stephen Clare on Climate change - Problem profile · 2022-05-20T16:35:15.945Z · EA · GW

I think you'll find answers to those questions in section 1 of John and Johannes's recent post on climate projections. IIRC the answers are yes, and those numbers correspond to RCP4.5.

Comment by Stephen Clare on Climate change - Problem profile · 2022-05-20T09:55:42.453Z · EA · GW

I think this comment demonstrates the importance of quantifying probabilities. For example, you write:

Could agriculture cope with projected warming? Possibly, maybe probably. Can it do so while supply chains, global power relations and financial systems are disrupted or in crisis? That's a much harder prospect.

I can imagine either kinda agreeing with this comment, or completely disagreeing, depending on how we're each defining "possibly", "probably", and "much harder".

For what it's worth, I also think it's probable that agriculture will cope with projected warming. In fact, I think it's extremely likely that, even conditional on geopolitical disruptions, the effects of technological change will swamp any negative effects of warming. To operationalize, I'd say something like: there's a 90% chance that global agricultural productivity will be higher in 50 years than it is today.[1]

Note that this is true at the global level. I do expect regional food crises due to droughts. On the whole, though, I believe with high confidence (again, like 90%) that the famine death rate in the 21st century will be lower than it was in the 20th century. But of course it won't be zero. I'd support initiatives like hugely increasing ODA and reforming the World Food Program (which is literally the worst).

  1. ^

    I haven't modelled this out and I'd expect that probability would change +/- 10 p.p. if I spent another 15 minutes thinking about it.

Comment by Stephen Clare on Where are the cool places to live where there is still *no* EA community? Bonus points if there is unlikely to be one in the future · 2022-05-11T15:25:07.877Z · EA · GW

That's true, good point. Depending on what they're looking for, I can actually see myself encouraging more people to try this out.

Comment by Stephen Clare on Where are the cool places to live where there is still *no* EA community? Bonus points if there is unlikely to be one in the future · 2022-05-11T13:12:33.715Z · EA · GW

If you like the location you're currently in, it seems pretty worth it to try to hang out with other people in your current community first. Join a sports team or games club or something. If you're worried about incentives, then ask a friend for accountability. Say you'll pay them $20 if you don't actually go to the event and ask them to follow up on it.

I'm a bit worried you're underestimating how difficult it would be to move to an entirely different continent on your own. Life as an expat can be expensive and alienating.

Comment by Stephen Clare on EA and the current funding situation · 2022-05-10T11:26:30.265Z · EA · GW

Can you give an example of communication that you feel suggests "only AI safety matters"?

Comment by Stephen Clare on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-19T21:31:44.359Z · EA · GW

I don't think a good name for this exists, and I don't think we need one. It's usually better to talk about the specific cause areas than to try and lump all of them together as not-longtermism.

As you mention, there are lots of different reasons one might choose not to identify as a longtermist, including both moral and practical considerations.

But more importantly, I just don't think that longtermist vs not-longtermist is sufficiently important to justify grouping all the other causes into one group.

Trying to find a word for all the clusters other than longtermism is like trying to find a word that describes all cats that aren't black, but isn't "not-black cats".

One way of thinking about these EA schools of thought is as clusters of causes in a multi-dimensional space. One of the dimensions along which these causes vary is longtermism vs. not-longtermism. But there are many other dimensions, including animal-focused vs. people-focused, high-certainty vs. low-certainty, etc. Not-longtermist causes all vary along these dimensions, too. Finding a simple label for a category that includes animal welfare, poverty alleviation, metascience, YIMBYism, mental health, and community building is going to be weird and hard.

"Not-longtermism" would just be everything outside of some small circle in this space. Not a natural category.

It's because there are so many other dimensions that we can end up with people working on AI safety and people working on chicken welfare in the same movement. I think that's cool. I really like that the EA space has enough dimensions that a really diverse set of causes can all count as EA. Focusing so much on the longtermism vs. not-longtermism dimension under-emphasizes this.

Comment by Stephen Clare on A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas · 2022-04-15T14:39:49.679Z · EA · GW

Mark Lynas also said it was "now or never" for climate action in 2005. This kind of messaging is just wildly miscalibrated and counter-productive.

Comment by Stephen Clare on The Vultures Are Circling · 2022-04-06T15:19:40.208Z · EA · GW

I downvoted this post because it doesn't present any evidence to back up its claims. Frankly, I also found the tone off-putting ("vultures"? really?) and the structure confusing.

I also think it underestimates the extent to which the following things are noticeable to grant evaluators. I reckon they'll usually be able to tell when applicants (1) don't really understand or care about x-risks, (2) don't really understand or care about EA, (3) are lying about what they'll spend the money on, or (4) have a theory of change that doesn't make sense. Of course grant applicants tailor their application to what they think the funder cares about. But it's hard to fake it, especially when questioned.

Also, something like the Atlas Fellowship is not "easy money". Applicants will be competing against extremely talented and impressive people from all over the world. I don't think the "bar" for getting funding for EA projects has fallen as much as this post, and some of the comments on it, seem to assume.

Comment by Stephen Clare on How likely is World War III? · 2022-04-04T11:10:23.256Z · EA · GW

I agree with this. I think there are multiple ways to generate predictions and couldn't cover everything in one post. So while here I used broad historical trends, I think that considerations specific to US-China, US-Russia, and China-India relations should also influence our predictions. I discuss a few of those considerations on pp. 59-62 of my full report for Founders Pledge and hope to at least get a post on US-China relations out within the next 2-3 months.

One quick hot take: I think Allison greatly overestimates the proportion of power transitions that end in conflict. It's not actually true that "incumbent hegemons rarely let others catch up to them without a fight" (emphasis mine). So, while I haven't run the numbers yet, I'll be somewhat surprised if my forecast of a US-China war ends up being higher than ~1 in 3 this century, and very surprised if it's >50%. (Metaculus has it at 15% by 2035).

Comment by Stephen Clare on Unsurprising things about the EA movement that surprised me · 2022-03-31T13:02:22.803Z · EA · GW

Now it’s more like any and all causes are assumed effective or potentially effective off the get-go and then are supported by some marginal amount of evidence.

This doesn't seem true to me, but I'm not an "old guard EA".  I'd be curious to know what examples of this you have in mind.

Comment by Stephen Clare on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-24T14:58:17.619Z · EA · GW

Really enjoyed the way forecasts were integrated into the essay. Seems like a really useful approach!

I broadly agree that ending the trade war would be good. I'm not sure it's as easy to mitigate the political downsides as you suggest, though. I think it's quite unlikely that "these political costs could be reduced by communicating to the public the evidence showing tariffs are ineffective". Mostly because it's difficult to explain such a complicated issue on which people's intuitions point the other way. But also because it would be a political act and you'd have half the politicians in the country spreading the opposite message. 

One longer-term scenario I'd have some credence in is: if Biden were to follow through on this action, I'd expect it to have a negative effect on his chances for re-election (maybe make it 1-5% less likely?), and any increase in the chance of Biden losing the next election could be worse for US-China relations than the gain from ending the trade war (something like 50% confidence).

I'm also not sure it's true that "other US-China issues are more complex and have less room for meaningful shifts". This seems to neglect the fact that the US and China have mostly managed to continue cooperating on climate change negotiations even though relations on the whole have remained frosty. I'd be a fan of trying to find other issues of common ground, even if they're less important than bilateral trade or territorial issues. For example, perhaps they could coordinate on space governance, clean tech investment, arms control, and maybe foreign aid?

I think cooperation on issues of lesser importance can be helpful as they allow countries the chance to show they can agree and uphold agreements, build trust, build personal ties between elites and diplomats, and reduce misunderstandings and misconceptions of the other side's intentions.

Comment by Stephen Clare on Why randomized controlled trials matter · 2022-03-23T11:43:49.348Z · EA · GW

Thanks for writing this - I think it's accessible, informative, and interesting, which is difficult to pull off when writing about research methods!

I think it's telling that all the examples of the effectiveness of RCTs in this article come from clinical trials. However, you don't limit yourself to this domain in the headline or summary of the article (e.g. "How would we know about the effects of a new idea, treatment or policy?").

Our World in Data is often used by people (including myself) to gather development data. So I think it could be worth adding a caveat that many of the strengths you discuss in the article don't apply to RCTs conducted on social programs or policies. For example, it's difficult or impossible to have double-blinding or a placebo group; it's difficult to randomize effectively due to spillover effects; it's harder to get a large sample size when you're studying effects on villages or countries; and generalization is far more difficult (a drug that works for a Brazilian is likely to work for an Indonesian, but a policy that works in Brazil is unlikely to have the same effect in Indonesia).

Comment by Stephen Clare on How we failed · 2022-03-23T11:30:35.686Z · EA · GW

This was a bummer to read:

It proved hard to get this version published; the apparent subjectivity of the costs, the inclusion of economic methods in an epidemiology paper, and the specific choice of preference elicitation methods, etc, all exposed a large "attack surface" for reviewers. In the end, we just removed the cost-benefit analysis. 

Clearly, internal documents of at least some governments will have estimated these costs. But in almost all cases these were not made public. Even then: as far as we know, only economic costs were counted in these private analyses; it is still rare to see estimates of the large direct disutility of lockdown.

I don't know exactly which papers you're referring to, but it's plausible to me that the cost-benefit analysis would be similarly valuable to the rest of the content in the paper. So it really sucks to just lose it.

Did you end up publishing those calculations elsewhere (e.g. as a blog post complement to the paper, or in a non-peer-reviewed version of the article)? Do you have any thoughts on whether, when, and how we should try to help people escape the peer review game and just publish useful things outside of journals?

Comment by Stephen Clare on Against cash benchmarking for global development RCTs · 2022-03-21T11:31:50.945Z · EA · GW

[Epistemic status: Writing off-the-cuff about issues I haven't thought about in a while - would welcome pushback and feedback]

Thanks for this post, I found it thought-provoking! I'm happy to see insightful global development content like this on the Forum.

My views after reading your post are: 

  1. You're probably right that it doesn't make sense for all studies to be benchmarking their intervention against cash transfers;
  2. I still think there are good reasons for practitioners to think hard about whether their programs do more good than budget-equivalent cash transfers would;
  3. Your post raises issues that challenge the usefulness of RCTs in general, not just RCTs that compare interventions to cash transfers.

Why I like cash benchmarking

You write:

That’s the role that a cash arm plays: rather than just check if a program is better than doing nothing at all (comparing to a control), we index it against a simple intervention that we know works well: cash.

The reason I find a cash benchmark useful feels a bit different than this. IMO the purpose of cash benchmarking is to compare a program to a practical counterfactual: just giving the money to beneficiaries instead of funding a more complicated program. It feels intuitive to me that it's bad to fund a development program that ends up helping people less than just giving people the cash directly instead. So the key thing is not that 'we know cash works well' - it's that giving cash away instead is almost always a feasible alternative to whatever development program one is funding.

That still feels pretty compelling to me. I previously worked in development and was often annoyed, and sometimes furious, about the waste and bureaucratic bs we had to put up with to run simple interventions. Cash benchmarking to me is meant to test whether the beneficiaries would be better off if, instead of hiring another consultant or buying more equipment, we had just given them the money.

Problems with RCTs

You write:

I am most familiar with our own program but I expect this applies to many other international development programs too: your medicine/training/infrastructure/etc program will very likely deliver benefits over a different timeline to cash, making a direct RCT comparison dependent more on survey timing than intervention efficacy. 

This is a really good point. In combination with the graph you posted, I'm not sure I've seen it laid out so clearly previously. But it seems like you've raised an issue with not just cash benchmarking, but with our ability to use RCTs to usefully measure program effects at all.

In your graph, you point out that the timing of your follow-up survey will affect your estimate of the gap between the effects of your intervention and the effects of a cash benchmark. But we'd have the same issue if we wanted to compare the effects of your interventions to all the other interventions we could possibly fund or deliver. And if we want to maximize impact, we should be considering all these different possibilities.

More worryingly: what we really care about is not the gap between the effects at a given point in time. What we care about is the difference between the integrals of those curves: the difference in total impact (divided by program cost).

But, as you say, surveys are expensive and difficult. It's rare to even have one follow-up survey, much less a sufficient number of surveys to construct the shape of the benefits curve.
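
To make the survey-timing problem concrete, here's a minimal sketch in Python. The benefit curves and all the numbers are made-up assumptions chosen purely for illustration, not estimates from any real program:

```python
import numpy as np

t = np.linspace(0, 10, 1000)  # years since the intervention

# Hypothetical benefit curves, chosen only to illustrate the shape problem:
# cash is front-loaded and decays; the program ramps up slowly but persists.
cash = 5.0 * np.exp(-0.5 * t)
program = 3.0 * (1 - np.exp(-0.4 * t))

# A single follow-up survey measures the gap at one moment in time...
for year in (1, 3, 8):
    i = np.searchsorted(t, year)
    print(f"Survey at year {year}: program - cash = {program[i] - cash[i]:+.2f}")

# ...but total impact is the (approximate) integral of each curve.
dt = t[1] - t[0]
print(f"Integral of (program - cash): {(program - cash).sum() * dt:+.2f}")
```

With these made-up curves, a year-1 survey favours cash while the ten-year integral favours the program: the measured "effect" is mostly a function of when you happen to survey.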

It seems to me people mostly muddle through and ignore that this is an issue. But the people who really care fill in the blanks with assumptions. GiveWell, for example, makes a lot of assumptions about the benefits-over-time of the interventions they compare. To their eternal credit you can see these in their public cost-effectiveness model. They make an assumption about how much of the transfer is invested;[1] they make an assumption about how much that investment returns over time; they make an assumption about how many years that investment lasts; etc. etc. And they do similar things for the other interventions they consider.
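
As a toy illustration of how assumption-laden this gets, here's a sketch in which every parameter value is a placeholder I made up, not GiveWell's actual input:

```python
# Toy GiveWell-style benefit stream for a cash transfer. All parameter
# values below are made-up placeholders, not GiveWell's actual inputs.
def total_benefit(transfer, frac_invested, annual_return, years_lasting):
    consumed_now = transfer * (1 - frac_invested)   # spent immediately
    investment_payout = (transfer * frac_invested   # assumed flat yearly return
                         * annual_return * years_lasting)
    return consumed_now + investment_payout

print(total_benefit(1000, 0.4, 0.10, 10))  # -> 1000.0
print(total_benefit(1000, 0.4, 0.10, 15))  # longer-lasting investment -> 1200.0
```

Nudge any one of those assumed parameters and the bottom line moves, which is exactly the sensitivity I'm worried about.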

All of this, though, is updating me further against RCTs really providing that much practical value for practitioners or funders. Estimating the true benefits of even the most highly-scrutinized interventions requires making a lot of assumptions. I'm a fan of doing this. I think we should accept the uncertainty we face and make decisions that seem good in expectation. But once we've accepted that, I start to question why we're messing around with RCTs at all.

  1. ^

They base this on a study of cash transfers in Kenya. But of course the proportion of the transfer invested likely differs across time and locations

Comment by Stephen Clare on When did EA miss a great opportunity to do good? · 2022-03-17T14:40:08.878Z · EA · GW

EA thinking has been applied to these questions. Founders Pledge has long and, IMO, very good reports on Investing to Give and Impact Investing.

(Disclaimer, I used to work at FP, though I didn't work on either of these reports)

Comment by Stephen Clare on We're announcing a $100,000 blog prize · 2022-03-08T15:26:54.454Z · EA · GW

It's also worth 6.67 Pulitzer Prizes!

Comment by Stephen Clare on Comparing top forecasters and domain experts · 2022-03-07T15:50:20.359Z · EA · GW

Thanks for this, it's really helpful! I find it very plausible that "generalist forecasters are the most accurate source for predictions on ~any question" has become too much of a community shibboleth. This is a useful correction.

Given how widely the "forecasters are better than experts!" meme has spread, point 3a seems particularly important to me (emphasis mine):

A common misconception is that superforecasters outperformed intelligence analysts by 30% [...] The forecaster prediction market performed about as well as the intelligence analyst prediction market [...] [85% confidence]

I would have found a couple more discussion paragraphs helpful. As written, it's difficult for me to tell which studies you think are most influential in shaping the conclusions you lay out in the summary paragraph at the beginning of the post. The "Summary" section of the post isn't actually summarizing the rest of the post; instead, that's just where your discussion and conclusions are being presented.

I'm excited to potentially see more critical analysis of the forecasting literature! Plus ideas for new studies that can help identify the conditions under which forecasters are most accurate/helpful.

Comment by Stephen Clare on Phil Harvey (1938 - 2021) · 2022-03-04T21:47:45.352Z · EA · GW

This nice comment doubles as the coldest EA diss I've ever heard - "I hope to read more EA obituaries soon". Pusha T would be proud.

Comment by Stephen Clare on The Future Fund’s Project Ideas Competition · 2022-03-03T16:19:04.004Z · EA · GW

I'm (pleasantly) surprised by the number of entries! But as a result the Forum seems pretty far from optimal as a platform for this discussion. Would be helpful to have a way to filter by focus area, for example.

Comment by Stephen Clare on [deleted post] 2022-03-02T16:19:54.094Z

I agree with your first point here. Looks like various nations have already committed military aid on the order of $2B to Ukraine, plus quite a lot of in-kind donations of military equipment. I'm very unsure about how elastic the supply of military equipment is at the current margin. Is it really the case that there are military supplies available that Ukraine would purchase but for lack of funds? That would surprise me.

It reminds me a bit of the early Covid days when everyone wanted to purchase PPE, but supply was bottlenecked, so donations increased prices and changed the distribution of who received the available supply.

Comment by Stephen Clare on Why aren't EA funders funding the NTI? · 2022-02-28T16:07:50.602Z · EA · GW

I haven't looked into specific nuclear orgs so am pretty uncertain about this, but suspect there are probably good funding opportunities in this space.

To speculate on why no funders have stepped into the breach, though:

  • MacArthur could have good reason to change their priorities. Nuclear work may just be super intractable. Maybe we can still make much more progress on other issues.
  • MacArthur has funded 88 other organizations working on nuclear issues in addition to NTI. EAs are aware of NTI because orgs like Open Phil have supported their bio work previously, but it would be good to look at the other orgs MacArthur funded too, to see who else is out there. With 89 orgs to choose from, it's plausible that NTI is not the best funding opp at the margin. But working out which funding opportunities would be most valuable at the margin is a lot of work.
  • MacArthur represents about 45% of total funding in the space. That's a lot, but I'd expect the remaining 55% to be shifted around a bit and hopefully cover the most marginally-valuable opportunities.

To respond to some of your specific points:

  • I'm unsure how relevant the "EA has a lot of money right now" point is. There's lots of stuff to fund, and saving can still be good because (1) we may learn a lot more about good stuff to fund in the coming years and decades and (2) the fields we're pretty sure are good to fund are still growing, and it might be worth saving our money so we can grant more to those fields in the future.
  • There's a war going on now, but I'm pretty sure there's nothing NTI can do to reduce nuclear risk right now. The question is whether we think total risk from nukes in the medium-to-long term has increased. Or these issues might become more tractable to work on as they're more salient now. This might make funding the work of NTI and similar orgs more attractive. But it's complicated.
  • Not sure I understand the point about "hiding it" - are you asking if there are plans to fund this stuff that funders just aren't discussing yet?

Again, I'm on the whole sympathetic to your view. I'm not sure how many EAs should be thinking about and funding nuclear/conflict issues, but the answer, IMO, is not 0. But I do also think there are good reasons not to rush into the space, and it's not obviously wrong that no one has stepped up to fund NTI.

Comment by Stephen Clare on Why aren't EA funders funding the NTI? · 2022-02-28T15:45:04.788Z · EA · GW

To get a sense of the amount of funding we're talking about: members of the Peace and Security Funders Group, which I'm pretty sure accounts for a majority of the funders in the area (including MacArthur), grant about $70M-$80M per year for nuclear issues. MacArthur has given a total of $124M in this area since 2014, which works out to roughly $15M per year, or about 20% of that annual total. So their estimate that MacArthur represents 40-50% of the total funding in the area seems too high.

I'm a bit disappointed my question here wasn't answered. It would have been good to have a sense of what we could look at funding if someone wanted to cover some of the MacArthur shortfall, without investing ~$40M per year into a space in which we don't have deep expertise.

Comment by Stephen Clare on Bibliography of EA writings about fields and movements of interest to EA · 2022-02-22T10:57:22.951Z · EA · GW

This sounds interesting, though I feel slightly confused. I can see why socialism would be a useful thing to know about, but not why it's so much more interesting and useful than, e.g., neoliberalism. I'd also be pretty interested to hear more about how it relates to EA's historical and cultural influences. I guess you're right that I don't even know what the right questions to ask about this are.

If this work is as important as you say here then it seems like a lot of value is being left on the table. Seems like it would be really helpful if you could write out a few bullet points of what needs to be done to get to that stage and how others might be able to help, then reach out to EA Funds or someone else with a proposal.

Comment by Stephen Clare on How likely is World War III? · 2022-02-18T14:46:52.850Z · EA · GW

Good points, thanks! I agree the wording in the main post there could be more careful. In deemphasizing the size of the effect there, I was reacting to claims along the lines of "US-China conflict is unlikely because their economic interdependence makes it too costly". I still think that that's not a particularly strong consideration for reasons discussed in the main post. But you're probably right that I'm responding to a strawman, and that serious takes are more nuanced than that.

Comment by Stephen Clare on How likely is World War III? · 2022-02-16T10:01:58.768Z · EA · GW

Fair enough! I think something Braumoeller is trying to get at with his definition of intensity is something like: if I were a citizen of one of the nations involved in a war, how likely is it that I would be killed? If you end up dividing by year, then you're measuring how likely it is that I would be killed per year of warfare. But what I would really care about is the total risk over the duration of the war.

Comment by Stephen Clare on How likely is World War III? · 2022-02-15T18:47:19.255Z · EA · GW

Ah, great catch. It's the third-bloodiest war in the time period Braumoeller considers, i.e. 1816-2007. That's super different, so thanks! I've edited the main text.

On intensity - Braumoeller thinks dividing by year can actually mask the intensity of bloody, prolonged conflicts (pp. 39-41 of Only The Dead). For example, there were fewer battle deaths per year in the Vietnam War than in the Korean War, but the Vietnam War was much bloodier overall (~50% more battle deaths):

By any rational accounting, Vietnam was the more intense war. But the more modest annual death totals in Vietnam produce the illusion of a downward trend in battle deaths. That’s because, relative to the Korean War, the Vietnam War produced a much steadier death toll, and it produced it over a longer period. Korea looks incredibly deadly, and Vietnam seems less so, solely because the Korean War was short and intense while the war in Vietnam was long and drawn out

Comment by Stephen Clare on Modelling Great Power conflict as an existential risk factor · 2022-02-15T15:15:38.007Z · EA · GW

Thanks, this is really helpful. I think a hidden assumption in my head was that the hingey time is put on hold while civilization recovers, but now I see that that's pretty questionable. 

I also share your feeling that, for fuzzy reasons, a world with 'lesser catastrophes' is significantly worse in the long term than a world without them. I'm still trying to bring those reasons into focus, though, and think this could be a really interesting direction for future research.

Comment by Stephen Clare on Modelling Great Power conflict as an existential risk factor · 2022-02-14T11:38:06.653Z · EA · GW

Thanks, this is a great comment! I'm going to edit the main post to reflect some of this.

Do (1) a second catastrophe and (2) failure of civilization to recover exhaust the possibilities for "indirect paths"? I've thought about this less than the other points in my main post, but I think I disagree that these are as worrying as the direct path. I think it's possible they're of the same magnitude as, but less likely in expectation than, the direct pathways from war to existential risk via extinction.

First, catastrophes in general are just very unlikely, and I think the 'period of vulnerability' following a war would probably be surprisingly short (on the order of 100 years rather than thousands). Post-WWII recovery in Europe took place over the course of a few years. The US funded some of this recovery via the Marshall Plan, but the investment wasn't that big (probably <5% of national income).[1] There's also a paper that found, just 27 years after the Vietnam War, no difference in economic development between areas that were heavily bombed by the US and areas that weren't.[2]

A war 10-30 times more severe than WWII would obviously take longer to recover from, but I still think we're talking about decades or centuries rather than millennia for civilization to stabilize somewhere (albeit at a much diminished population).

Second, I find it hard to think of specific reasons why we would expect long-term civilizational stagnation. I think a catastrophic war could wipe out most of the world population, but still leave several million people alive. New Zealand alone has 5M people, for example. Humanity has previously survived much smaller population bottlenecks. Conditional on there being survivors, it also seems likely to me that they survive in at least several different places (various islands and isolated parts of the world, for example). That gives us multiple chances for some population to get it together and restart economic growth, population growth, and scientific advancement.

I'd be interested to hear more about why you think the "less direct paths should be seen as more worrying than the fairly direct paths".

  1. ^

    "The Marshall Plan's accounting reflects that aid accounted for about 3% of the combined national income of the recipient countries between 1948 and 1951" (from Wikipedia; I haven't chased down the original source, so caveat emptor)

  2. ^

    "U.S. bombing does not have a robust negative impact on poverty rates, consumption levels, infrastructure, literacy or population density through 2002. This finding suggests that local recovery from war damage can be rapid under certain conditions, although further work is needed to establish the generality of the finding in other settings." (Miguel & Roland, abstract, https://eml.berkeley.edu/~groland/pubs/vietnam-bombs_19oct05.pdf)

Comment by Stephen Clare on Modelling Great Power conflict as an existential risk factor · 2022-02-10T13:05:16.744Z · EA · GW

Those are just screenshots of diagrams made in Google Docs using the "Insert Drawing" feature!

Comment by Stephen Clare on Research idea: Evaluate the IGM economic experts panel · 2022-01-19T14:19:35.631Z · EA · GW

This does seem useful. At least one similar survey does exist for other fields: the TRIP survey for international relations scholars. I've found this somewhat useful for my research, though often the questions in IR seem less specific than the questions asked of economists.

Comment by Stephen Clare on What are some artworks relevant to EA? · 2022-01-17T13:44:09.606Z · EA · GW

There are lots of great paintings of utopias or apocalypses too, like the Garden of Earthly Delights (though it's not clear whether that one is utopian or apocalyptic!)

Comment by Stephen Clare on Cause Area: UK Housing Policy · 2022-01-12T14:20:27.206Z · EA · GW

I think there's also a coordination problem here. A lot of people care a little bit about this, but it's hardly anyone's top priority, so there have been basically no serious, committed, focused campaigns to actually create and promote specific policies.

Comment by Stephen Clare on The Bioethicists are (Mostly) Alright · 2022-01-07T19:34:34.710Z · EA · GW

You can still edit the post to include them! I agree with Khorton that you'll probably get more engagement that way.

Comment by Stephen Clare on Democratising Risk - or how EA deals with critics · 2021-12-28T19:59:14.360Z · EA · GW

I think it's because you're making strong claims without presenting any supporting evidence. I don't know what reading lists you're referring to; I have doubts about not asking questions being an 'unspoken condition' about getting access to funding; and I have no idea what you're conspiratorially alluding to regarding 'quasi-censorship' and 'emotional blackmail'.

Comment by Stephen Clare on AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative · 2021-12-07T14:11:23.475Z · EA · GW

In the discussion section of your EAG talk, you and Carl Robichaud talked briefly about the implications of the MacArthur Foundation phasing out its Nuclear Challenges portfolio, likely leaving many of the organizations working in this space with a large funding gap. If MacArthur planned to reduce its nuclear grantmaking by 90% instead of ending it completely, what high-priority interventions would you recommend they continue to fund?

Comment by Stephen Clare on AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative · 2021-12-07T14:07:41.162Z · EA · GW

Thank you for doing this!

In your talk at EAG you said that you think the risk of nuclear war today is "high and rising". You also estimate the annual probability of a catastrophic nuclear event is about 0.5%. I wanted to first say kudos for quantifying your beliefs in this way. It's so helpful for communicating clearly about these risks. I have two related questions:

(1) Could you please say more about the main considerations, metrics, and/or data you use to inform this estimate?

(2) How quickly do you think the risk is rising? I'm curious whether you think the annual risk is likely to increase by some tenths of a percentage point, or by factors of 2 or more.
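
(For a sense of scale, here's a quick back-of-the-envelope sketch, entirely my own, assuming for simplicity a constant annual rate and independence across years, of what different annual probabilities imply cumulatively:)

```python
# Back-of-the-envelope only: cumulative probability of at least one
# catastrophic nuclear event, assuming a constant annual rate and
# independence across years (both simplifications).
for p_annual in (0.005, 0.01):  # 0.5%/yr vs. a doubled 1%/yr
    for years in (10, 50, 100):
        cumulative = 1 - (1 - p_annual) ** years
        print(f"{p_annual:.1%}/yr over {years} yrs -> {cumulative:.1%}")
# 0.5%/yr: ~4.9% (10 yrs), ~22.2% (50 yrs), ~39.4% (100 yrs)
# 1.0%/yr: ~9.6% (10 yrs), ~39.5% (50 yrs), ~63.4% (100 yrs)
```

On those simple assumptions, a factor-of-2 change in the annual rate nearly doubles the cumulative risk over the coming decades, which is why I'm curious about the pace of the increase.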

Comment by Stephen Clare on Sasha Chapin on bad social norms in EA · 2021-11-18T13:02:11.255Z · EA · GW

I agree that a good number of people around EA trend towards sadness (or maybe "pits of despair"). It's plausible to me that the proportion of the community in this group is somewhat higher than average, but I'm not sure about that. If that is the case, though, then my guess is that some selection effects, rampant Imposter Syndrome, and the weight of always thinking about ways the world is messed up are more important causes than social norms. 

I have to say, I actually chuckled when I read "don’t ever indulge in Epicurean style" listed as an iron-clad EA norm. That, uhh, doesn't match my experience.

Comment by Stephen Clare on Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being · 2021-10-28T08:07:27.324Z · EA · GW

I'm interested in reading critiques of StrongMinds' research, but downvoted this comment because I didn't find it very helpful or constructive. Would you mind saying a bit more about why you think their standards are low, and the evidence that led you to believe they are "making up" numbers?