Posts

Suggest new charity ideas for Charity Entrepreneurship 2023-03-08T15:08:47.168Z
Example of a personal ethics, values and causes review 2022-12-16T21:54:58.603Z
Concrete actionable policies relevant to AI safety (written 2019) 2022-12-16T18:41:50.020Z
How to change a system from the inside 2022-11-11T00:26:06.764Z
Announcing Charity Entrepreneurship’s 2023 ideas. Apply now. 2022-10-04T16:14:46.659Z
List of donation opportunities (focus: non-US longtermist policy work) 2022-09-30T14:32:34.115Z
Some concerns about policy work funding and the Long Term Future Fund 2022-08-12T13:54:29.382Z
CE Research Report: Aid Quality Advocacy 2022-03-16T12:46:57.820Z
Convergence thesis between longtermism and neartermism 2021-12-30T16:03:43.712Z
APPG for Future Generations Impact Report 2020 - 2021 2021-10-26T14:40:46.182Z
A practical guide to long-term planning – and suggestions for longtermism 2021-10-10T15:37:17.458Z
Which EA organisations' research has been useful to you? 2020-11-11T09:39:13.329Z
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs 2020-09-05T12:51:01.844Z
The case of the missing cause prioritisation research 2020-08-16T00:21:02.126Z
APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament 2020-08-12T14:24:04.861Z
Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z
UK policy and politics careers 2019-09-28T16:18:43.776Z
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z

Comments

Comment by weeatquince on How much should governments pay to prevent catastrophes? Longtermism’s limited role · 2023-03-24T12:52:56.445Z · EA · GW

Thank you so much for writing this. I think it is an excellent piece and makes a really strong case for how longtermists should consider approaching policy. I agree with most of your conclusions here.

I have been working in this space for a number of years, advocating (with some limited successes) for a cost-effectiveness approach to government policy making on risks in the UK (and am a contributing author to the Future Proof report you cite). Interestingly, despite having made progress in the area, I am over time leaning more towards specific advocacy focused on known risks (e.g. pandemic preparedness) than more general work on improving government spending on risks as a whole. I have a number of unpublished notes on how to assess the value of such work that might be useful, so thought I would share them below.

I think there are three points my notes might helpfully add to your work:

  1. Some more depth on how to think about cost benefit analysis, and in particular what the threshold is for government to take action. I think the benefit cost ratio you describe is below the threshold for government action.
  2. An independent literature-review-type analysis of the benefit cost ratio of marginal additional funds going into disaster prevention. (Literature list in the Annex section.)
  3. Some vague reflections, as a practitioner in this space, on the paths to impact.

 

Note: Some of this research was carried out for Charity Entrepreneurship and should be considered Charity Entrepreneurship work. This post is written in an independent capacity and does not represent the views of any employer.

 

1. The cost benefit analysis here is not enough to suggest government action

I think it is worth putting some thought into how to interpret cost benefit analyses and how a government policy maker might interpret and use them. Your conservative estimate suggests a benefit of $646 billion against a cost of $400 billion – a benefit cost ratio (BCR) of 1.6 to 1.

Naively, a benefit cost ratio of >1 to 1 suggests that a project is worth funding. However, given the overhead costs of government policy, governments' propensity to make even cost-effective projects go wrong, and public preferences for money in hand, it may be more appropriate to apply a higher bar for cost-effective government spending. I remember I used to use a 3 to 1 ratio, perhaps picked up when I worked in government, although I cannot find a source for this now.

According to https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537512/ the average benefit cost ratio of government investment into health programs is 8.3 to 1. I strongly expect there are many actions the US government could take to improve citizens' healthcare with a BCR in the 5–10 to 1 range. In comparison, 1.6 to 1 does not look like a priority.

Some Copenhagen Consensus analysis I read treats interventions with robust evidence of benefits 5 to 15 times higher than costs as "good" interventions.

So overall, when making the case to government or the public, I think a 1.6 to 1 BCR is not sufficient to suggest action. I would consider 3 to 1 a reasonable bar and 5 to 1 a good case for action.

 

2. On the other hand, the benefit cost ratio is probably higher than your analysis suggests

As mentioned you directly calculate a benefit cost ratio of 1.6 to 1 (i.e. $646 billion to $400 billion).

Firstly, I note from reading your workings that this is clearly a conservative estimate. I would be interested to see a midline estimate of the BCR too.

I made a separate estimate that I thought I would share, and it was a bit more optimistic than this. It suggested that, on the margin, the benefit cost ratio (BCR) of additional spending on disaster preparedness is in the region of 10 to 1, maybe a bit below that. I copy my sources into the annex section below.

(That said, spending $400 billion is arguably more than "on the margin" and is a big jump in spending, so we might expect spending at that level to have a somewhat lower value. Of course, in practice I don’t think advocates are going to get government to spend $400bn tomorrow, and a gradual ramp-up in spending is likely justified.)

 

3. A few reflections on political tractability and value 

My experience (based on the UK) is that I expect governments to be relatively open to change and improvement in this area. I expect the technocratic elements of government respond well to highlighting inconsistencies in process and decision making, and the UK government has committed to improvements in how it assesses risks. I expect governments to be a bit more reticent to make changes that necessitate significant spending, or to put in place mechanisms and oversight that would ensure future spending by holding them to account for not spending sufficiently on high-impact risks.

I am also becoming a bit more sceptical of the value of this kind of general longtermist work when compared to work focusing on known risks. Based on my analysis to date, I believe some of the more specific policy change ideas – preventing dangerous research, developing new technology to tackle pandemics, or AI regulation – are a bit more tractable and have a somewhat higher benefit to cost than this more general work to increase spending on risks. That said, in policy you may want to work on many avenues at once so as to capitalise on key opportunities, so these approaches should not be seen as mutually exclusive. Additionally, there is a case for more general system improvements from a more patient longtermist view, or from placing a higher weight on unknown unknown risks being critical.
 

 

ANNEX: My estimate 

On the value of general disaster preparedness

We can set a prior for the value of pandemic preparedness by looking at other disaster preparedness spending.

Real-world evidence. Most of the evidence for this comes from assessments of the value of physical infrastructure preparation for natural disasters, such as constructing buildings that can withstand floods. See the table below.

Source: Natural Hazard Mitigation Saves: 2019 Report (Link)
  • Looks at the BCR of different disaster mitigation measures. For example:
    • Adopting building codes: 11:1
    • Changing buildings: 4:1
  • (We think this source has some risk of bias, although it does appear to be high quality.)
  • BCR: 11:1 (building codes); 4:1 (building changes)

Source: If Mitigation Saves $6 Per Every $1 … (Gall and Friedland, 2020) (Link)
  • "The value of hazard mitigation is well known: the Multihazard Mitigation Council (MMC) upped their initial estimate of $4 (MMC 2005) saved for every $1 spent on hazard mitigation to $6, and $7 with regard to flood mitigation (MMC 2017)."
  • BCR: 4:1; 6½:1

Other estimates. There are also a number of estimates of benefit cost ratios:

Source: Does mitigation save? Reviewing cost-benefit analyses of disaster risk reduction (Shreve and Kelman, 2014) (Link)
  • Suggests that disaster risk reduction saves money, but at what ratio is unclear and is highly dependent on the situation, location and kind of disaster.
  • BCR estimates tend to be less than 10, with a few in the tens and a very few much higher (the largest was 1800).
  • BCRs may be underestimates due to the use of high discount rates.
  • BCR: ~10:1

Source: Natural disasters challenge paper (Copenhagen Consensus, 2015)
  • There are growing economic costs from natural disasters in recent years. This is especially true in developing countries, where there may be limited insurance, higher risks and looser building codes.
  • Looks at retrofitting schools to be earthquake resistant in seismically active countries; suggests this has a BCR close to 1:1.
  • Looks at constructing a one-metre-high wall to protect homes, or elevating houses by one metre, to reduce flooding; suggests this has a BCR of 60:1.
  • BCR: 1:1 (school retrofits); 60:1 (flood defences)

Source: IFRC (Link)
  • ""We estimate that for each dollar spent on disaster preparedness, an average of four dollars is saved on disaster response and recovery" says Alberto Monguzzi, Disaster Management Coordinator in the IFRC Europe Zone Office."
  • BCR: 4:1

Pandemic preparedness estimates

Other estimates. We found one example of an estimate of the value of preparing better for future pandemics.

Source: Not the last pandemic: … (Craven et al., 2021) (Link)
  • Suggests a cost over 10 years of $285bn-$430bn would partially mitigate damage of $16,000bn every 50 years.
  • This roughly implies a BCR of <9:1: (16000/50) / (((285+430)/2)/10) = 8.95.
  • BCR: <9:1
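Spelling that arithmetic out (annualised benefit over annualised cost, using the source's own figures):

$$\mathrm{BCR} \approx \frac{16{,}000/50}{\left((285+430)/2\right)/10} = \frac{320}{35.75} \approx 8.95$$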

We also found several examples of estimates of the value of stockpiling for future pandemics.

Source: The Cost Effectiveness of Stockpiling Drugs, Vaccines and Other Health Resources for Pandemic … (Plans‑Rubió, 2020) (Link)
  • Looked at estimates of the cost effectiveness of stockpiling drugs for a pandemic. Example estimates:
    • "US$8550 per LYS in very high severity pandemics and US$13,447 per LYS in moderate severity pandemics"
    • "£3800 per QALY and £28,000 per QALY for the 1918 and 1957/69 pandemic scenarios"
  • Very roughly, if we place a value of $50-100k per year of life, this suggests a BCR of roughly 5:1 to 10:1.
  • BCR: ~8:1

Source: Cost-Benefit of Stockpiling Drugs … (Balicer et al., 2005) (Link)
  • Suggests various options for stockpiling in Israel for an influenza pandemic have benefit cost ratios of 0.37, 0.38, 2.49, 2.44 and 3.68.
  • "investments in antiviral agents can be expected to yield a substantial economic return of >$3.68 per $1 invested, while saving many lives"
  • BCR: 4:1

Source: [source name missing] (Link)
  • "expanding the stockpile of AV drugs to encompass the whole UK population (≈60 million) might even be acceptable (≈£6,500 per QALY gained over a no intervention strategy for the 1918 scenario under base-case assumptions)."
  • Very roughly, if we place a value of $50-100k per year of life, this suggests a BCR of roughly 10:1.
  • BCR: ~10:1

Source: [source name missing] (Link)
  • "Procuring an adequate PPE stockpile in advance at non-pandemic prices would cost only 17% of the projected amount needed to procure it at current pandemic-inflated prices"
  • BCR: 6:1

 

Historical data and estimates suggest the value of increasing preparedness is decent but not very high, with estimated benefit cost ratios (BCRs) often around or just below 10:1.

 

How this changes with scale of the disaster

There is some reason to think that disaster preparedness is more cost effective when targeted at worse disasters. Theoretically this makes sense, as disasters are heavy-tailed and most of the impact of preventing and mitigating disasters will be in preventing and mitigating the very worst ones. This is also supported by models estimating the effect of pandemic preparedness, such as those discussed in this talk (Doohan and Hauck, 202?).

Pandemics affect more people than natural disasters so we could expect a higher than average BCR. This is more relevant if we pick preparedness interventions that scale with the size of the disaster (an example of an intervention that does not have this effect might be stockpiling, for which the impact is capped by the size of the stockpile, not by the size of the disaster).
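As a toy way to formalise that capping point (my own rough model, not taken from the talk or any of the sources in this annex): if a stockpile of size $S$ is drawn down in a disaster of size $D$, then roughly

$$\text{Benefit}(D) \;\propto\; \min(S, D)$$

so once $D \gg S$ the benefit stops growing with the scale of the disaster, and the BCR of stockpiling does not improve for larger disasters – unlike interventions whose benefit scales with $D$.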

However, overall I did not find much solid evidence to suggest that the BCR is higher for larger-scale disasters.

Comment by weeatquince on CE: Announcing our 2023 Charity Ideas. Apply now! · 2023-02-27T13:13:04.700Z · EA · GW

We separately looked at two ideas on new technology:

  1. The idea listed here focused on new market incentives for antimicrobials.
  2. Advocacy for funding for new technology (not antibiotics) to help mitigate pandemics.

(We found this breakdown useful as the problems are different. The current patent system does not work for antimicrobials due to the need to limit the use of last-line novel antibiotics. The current patent system works better for preparing for future pandemics, but has limits as the payout is uncertain and might not happen within the lifetime of a patent.)

 

Under idea 1 (antimicrobials), we didn’t look specifically at phage therapy. I don’t have a strong sense of whether phage therapy is in or out of scope of the various proposed policy changes, although I think the current focus is on antibiotics, which would make phage therapy out of scope. This could be a thing for the new charity to look into. The existence of other emerging health tech that could also address microbial diseases could be seen as a case for reducing the expected impact of developing new antibiotics. This was not explicitly modelled (other than applying a 4% discount rate, which should cover such things).

Under idea 2 (new tech for pandemics), we very briefly considered phage therapy. It got cut as an idea at the very early stage of our research, when we were considering what new tech would have the biggest effect on future pandemics. This is not to say that it is not a good idea – I tend to describe CE's research as rigorous but not comprehensive, and I am sure that unfortunately good ideas are cut at the early stage of our prioritisation.

 

I hope that answers your question. Both reports should be made available in due course.

We also hope that any charity that begins life focusing on shifting market incentives for antibiotics could scale by moving onto policy advocacy and market shaping work for other key technologies. Technologies we would be excited to see more advocacy for include platform DNA vaccine technology, UVC sterilisation and point-of-care diagnostics.
 

Comment by weeatquince on CE: Announcing our 2023 Charity Ideas. Apply now! · 2023-02-14T10:42:21.378Z · EA · GW

Hi Nick, Great to hear from you and to get your on-the-ground feedback. I lead the research team at CE.

These are all really really great points and I will make sure they are all noted in the implementation notes we produce for the (potential) founders.

All our ideas have implementation challenges, but we think that delivering on these ideas is achievable and we are excited to find and train up potential founders to work on them!!
 

– – –

One point of clarification, in case it is not clear: on kangaroo care, we are recommending an approach of adding extra staff into healthcare facilities to offer kangaroo care support, rather than trying to get current staff to take on the additional burden of teaching kangaroo care. We hope and expect (based on our conversations with experts) that this approach can sidestep at least some of the implementation issues identified by GiveWell.

Comment by weeatquince on We're no longer "pausing most new longtermist funding commitments" · 2023-02-02T01:15:10.877Z · EA · GW

Great! It's good to see things changing :-) Thank you for the update!

Comment by weeatquince on We're no longer "pausing most new longtermist funding commitments" · 2023-02-02T00:17:05.688Z · EA · GW

Yeah, I somewhat agree this would be a challenge, and there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.

I think it would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations who requested the grants rather than the specific grants, you could publish the list in a pseudoanonymised way where you remove the names of organisations and exact amounts of funding – sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.

 

Anyway to answer your questions:

  • On creating new projects – it is easier for the Charity Entrepreneurship research team to know how to assess funding availability and the bar to beat for global health projects than for biosecurity projects. Sure, we can look at where OpenPhil has given, but there is no detail there. It is hard to know how much they base their decisions on different factors, such as how trusted the people running the project are, versus some bar of expected effectiveness, versus something else. Ultimately this can make us more hesitant to try to start new organisations that would be aiming to get funding from OpenPhil's longtermist teams than we are to start new organisations that would be aiming to get funding from GiveWell (or other very transparent organisations). This uncertainty about future funding is also a barrier we see in potential entrepreneurs, and more clarity feels useful.
  • On other funders filling gaps that they believe OpenPhil has missed – I recently wrote a critique of the Long-Term Future Fund pointing out that they have ignored policy work. This has led to some other funders looking into the space. This was only possible because their grants and grant evaluations are public. (This did require having inside knowledge of the space about who was looking for funding.) Honestly, OpenPhil are already pretty good at this: you can see all their grants and identify gaps (like, I believe, no longtermist team at OpenPhil has ever given to any policy work outside the US) and then direct funds to fill those gaps. It is unclear to me how much more useful the tiers would be, but I expect the lower tiers would highlight areas where OpenPhil is unlikely to fund in the future, and other funders could look at what they think is valuable in that space and fund it.

 

(All views my own not speaking for any org or for Charity Entrepreneurship etc)

Comment by weeatquince on We're no longer "pausing most new longtermist funding commitments" · 2023-01-31T09:32:18.079Z · EA · GW

Thanks for the useful post Holden.

I think it would be great to see the full published tiered list.

In global health and development, funders (i.e. OpenPhil and GiveWell) are very specific about the bar and exactly who they think is under it and who they think is over it. Recently, global development funders (well, GiveWell) have even actively invited open constructive criticism and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grant making.

Is there a plan to publish the full tiered list? If not what's the reason / best case against having it public?

To flag some of the advantages:

  • Those of us who are creating new projects would have a much better understanding of what OpenPhil would fund, and would be able to create better projects, more aligned with OpenPhil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.
  • Other funders could fill gaps that they believe OpenPhil has missed, or otherwise use OpenPhil's tiers in their decision making.
  • It allows OpenPhil to receive useful constructive feedback or critiques.
Comment by weeatquince on Doing EA Better · 2023-01-20T16:41:11.442Z · EA · GW

Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback on EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe trying to pull out a ToC. I do agree there are things that I, the OP authors, and those responding to the OP could all do better.

Comment by weeatquince on Doing EA Better · 2023-01-19T10:54:09.004Z · EA · GW

@Buck – As a hopefully constructive point, I think you could have written a comment that served the same function but was less potentially off-putting, by clearly separating your general critique of critical writing on the EA Forum from your critiques of specific people (me or the OP author).

Comment by weeatquince on Doing EA Better · 2023-01-19T10:40:05.203Z · EA · GW

Thank you Buck that makes sense :-)

 

“the content/framing seems not very useful and I am sad about the effect it has on the discourse”

I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere), and I am happy with the framing of this post and a fair amount (although by no means all) of the content.

I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, human overconfidence especially in the domain of doing good, positive experiences of learning from good-faith criticisms, and academic evidence that more views in decision making lead to better decisions. (I also think there have been some positive changes made as a result of recent criticism contests.)

I think it would be extremely hard to change my mind on this. I can think of a few specific cases (supporting your view) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocate arguments), but this seems to be more the exception than the norm. Maybe if my mind were changed on this it would be through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.

Comment by weeatquince on Doing EA Better · 2023-01-19T01:13:24.481Z · EA · GW

I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant.

I think this is somewhat unfair. I think it is unfair to describe the OP as "unpleasant"; it seems to be clearly and impartially written, and to go out of its way to make clear it is not picking on individuals. Also, I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me to be unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.

Comment by weeatquince on Doing EA Better · 2023-01-18T18:53:30.455Z · EA · GW

Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try-harder, tone-policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open-to-criticism community that I want to see in EA. Hopefully that explains where I'm coming from.

Comment by weeatquince on The REDD+ framework for reducing deforestation and mitigating climate change: overview, evaluation, and cost-effectiveness · 2023-01-18T12:08:08.445Z · EA · GW

Sorry I don’t have the capacity to dig into all the sources but it would be helpful to understand:

  • Are your end results per year or forever? You say "we conclude with our best guess of cost-effectiveness, which ranges from $6 to $62 per tonne of CO2 (tCO2) abated with 80% confidence." But is this $6-62 every year, or a one-off payment of $6-62 after which that land is never deforested? This makes a huge difference to understanding, so it would be good to be explicit.
  • Do you have an estimate of tCO2 per hectare? The costs per hectare (e.g. in the Ugandan study) seem similar to your costs per tCO2, but there are something like 500 tCO2 per hectare, so I am confused about how you are converting one to the other (see the conversion sketch below).
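To spell out the conversion I have in mind (a rough sketch; the ~500 tCO2/hectare figure is my own ballpark, not a number from your post):

$$\text{cost per tCO}_2 = \frac{\text{cost per hectare}}{\text{tCO}_2\text{ per hectare}} \approx \frac{\text{cost per hectare}}{500}$$

so I would have expected the per-tCO2 figures to be a couple of orders of magnitude smaller than the per-hectare figures.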

Thank you so much for any clarity you can give. 

Comment by weeatquince on Doing EA Better · 2023-01-18T10:23:32.745Z · EA · GW

I strongly downvoted this response.

The response says that EA will not change ("people in EA roles [will] ... choose not to"), that making constructive critiques is a waste of time ("[not a] productive ways to channel your energy"), and that the critique should have been better ("I wish that posts like this were clearer", "you should try harder", "[maybe try] politely suggesting").

This response seems to put all the burden of making progress in EA onto those trying to constructively critique the movement – those who are putting their limited spare time into trying to be helpful – and to remove the burden from those who are paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job – your job, my job, as people in EA orgs – to listen, to learn, to consider and, if we can, to do better.

Rather than saying the original post should be better, maybe the response should be that those reading the original post should be better at considering the issues raised.

I cannot think of a more dismissive or disheartening response. I think this response will actively dissuade future critiques of EA (I feel less inclined to try my hand at critical writing seeing this as the top response) and as such will make the community more insular and less epistemically robust. Also, I think this response will make the authors of this post feel like their efforts are wasted and unheard.

Comment by weeatquince on Doing EA Better · 2023-01-18T09:51:06.917Z · EA · GW

I think one of the reasons I loved this post is that my experience of reading it echoed, in an odd way, my own personal journey within EA. I remember thinking, even at the start of EA, that there was a lack of diversity and a struggle to accept "deep critiques". Mostly this did not affect me – until I moved into an EA longtermist role a few years ago. Finding existing longtermist research to be lacking for the kind of work I was doing, I turned to the existing disciplines on risk (risk management, deep uncertainty, futures tools, etc). Next thing I knew, a disproportionately large amount of my time seemed to be being sunk into trying and failing to get EA thinkers and funders to take seriously governance issues and those aforementioned risk disciplines. Ultimately I gave up and ended up partly switching away from that kind of work. Yet despite all this I still find the EA community to be the best place for helping me mend the world.

 

I loved your post but I want to push back on one thing – these problems are not only on the longtermist side of EA. Yes, neartermist EA is epistemically healthier (or at minimum is currently having fewer scandals), but there are still problems, and we should still be self-reflective, looking to learn from posts like this and considering whether there are issues around: diversity of views, limited funding to high-impact areas due to over-centralisation, rejection of deep critiques, bad actors, and so on. As one example, consider the (extremely laudable) criticism contest from GiveWell, which was focused heavily on looking at how their quantitative analyses might be 10% inaccurate, but not on finding ways to highlight where their approach might be fundamentally failing to make good decisions. [section edited]

 

PS. One extra idea for the idea list: run CEA (or other EA orgs) on a cooperative model where every donor/member gets a vote on key issues or leadership decisions.

Comment by weeatquince on StrongMinds should not be a top-rated charity (yet) · 2023-01-02T10:25:12.424Z · EA · GW

Ah. Good point. Replied to the other thread here: https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet?commentId=TMbymn5Cyqdpv5diQ .

Comment by weeatquince on StrongMinds should not be a top-rated charity (yet) · 2023-01-02T10:19:37.849Z · EA · GW

Oh dear, no, my bad. I didn't at all realise "top rated" was a label they applied to StrongMinds but not to GiveDirectly and SCI and other listed charities, and I thought you were suggesting StrongMinds be delisted from the site. I still think it makes sense for GWWC to (so far) be trusting other research orgs, and I do think they have acted sensibly (although they have room to grow in providing checks and balances). But I also seem to have misunderstood your point somewhat, so sorry about that.

Comment by weeatquince on StrongMinds should not be a top-rated charity (yet) · 2023-01-01T22:48:02.564Z · EA · GW

I just want to add my support for GWWC here. I strongly support the way they have made decisions on what to list to date:

  • As a GWWC member who often donates through the GWWC platform, I think it is great that they take a very broad brush and have lots of charities that people might see as top on the platform. I think if their list got too small they would not be able to usefully serve the GWWC donor community (or other donors) as well.
  • I would note (contrary to what some of the comments suggest) that GWWC recommend giving to Funds and do not recommend giving directly to these charities (so they do not explicitly recommend StrongMinds). In this light I see the listing of these charities not as recommendations but as a convenience for donors who are going to be giving there.
  • I find GWWC very transparent. Simon says ideally "GWWC would clarify what their threshold is for Top Charity". On that specific point I don’t see how GWWC could be any clearer: every page explains that a top charity is one that has been listed as top by an evaluator GWWC trusts. Although I do agree with Simon that more description of how GWWC choose certain evaluators could be helpful.

 

That said, I would love it if, going forwards, GWWC could find the time to evaluate the evaluators and the Funds and their recommendations (for example, I have some concerns about the LTFF and know others do too, and I know there have been concerns about ACE in the past, etc).

I would not want GWWC to unlist StrongMinds from their website, but I could imagine them adding a "The GWWC team's view" section on the StrongMinds page that says: "this is listed as it is an FP top-rated charity but our personal views are that ..., meaning this might or might not be a good place to give, especially if you care about ..." etc.

 

(Conflict of interest note: I don’t work at GWWC or FP, but I do work at an FP-recommended charity and at a charity whose recommendations make it into the GWWC criteria, so I might be biased.)

Comment by weeatquince on StrongMinds should not be a top-rated charity (yet) · 2023-01-01T22:33:09.108Z · EA · GW

Simon, I loved your post!

 

But I think this particular point is a bit unfair to GWWC and also just factually inaccurate. 

For a start, GWWC do not "recommend" StrongMinds. They very clearly recommend giving to an expert-managed Fund where an expert grantmaker can distribute the money, and they do not recommend giving to StrongMinds (or to Deworm the World, or AMF, etc). They say that repeatedly across their website, e.g. here. They then also have some charities that they class as "top rated", which they very clearly say are charities that have been "top rated" by another independent organisation that GWWC trusts.

I think this makes sense. Let's consider GWWC's goals here. GWWC exists to serve and grow its community of donors. I expect that maintaining a broad list of charities on their website across cause areas, and providing a convenient donation platform for those charities, is the right call for GWWC to achieve those goals, even if some of those charities are less proven. Personally, as a GWWC member, I very much appreciate that they have such a broad variety of charities (e.g. this year I donated to one of ACE's standout charities and it was great to be able to do so on the GWWC page). Note again this is a listing for donors' convenience and not an active recommendation.

 

My other thought is that GWWC has a tiny and very new research team, so this approach of listing all the FP "top rated" charities makes sense to me. Although I do hope that they can grow their team and take more of a role doing research like your critique and evaluating the evaluators / the Funds.



(Note on conflicts of interest: somewhat tangential, but for transparency, I have a role at a different FP-recommended charity, so this could affect me.)

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-27T09:06:46.144Z · EA · GW

So what? What thought experiment does this lead to that causes a challenge for ethics? If infinite undefined-ness causes a problem for ethics, please specify it; but so far the infinite ethics thought experiments I have seen either:

  1. Are trivially the same as non-infinite thought experiments. For example, undefined-ness is a problem for utilitarianism even without infinity: think of a Pascal's mugger who offers to create "an undefined and unspecified but very large amount of utility, so large as to make TREE(3) appear small".
  2. Make no sense. They require assuming two things that physics says are not true – that we know with 100% certainty that the universe is infinite, and that we can treat those infinities as anything other than limits of a finite series. This makes no more sense than thought experiments about what would happen if time travel were possible, and is little better than asking what if "consciousness is actually cheesy-bread".

Maybe I am missing something and there are, for example, some really good solutions to Pascal's mugging that don’t work in the infinite case but work in the very-large-but-undefined case, or some other kind of thought experiment I have not yet seen, in which case I am happy to retract my scepticism.

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-27T00:52:43.377Z · EA · GW

I am not sure that we disagree here / I expect we are talking about slightly different things. I am not expressing any view on fanaticism issues or how to resolve them.

All I was saying is that infinities are no more of a problem for utilitarianism/ethics than large numbers. (If you want to say "infinite" or "TREE(3)" in a thought experiment, either works.) I am not 100% sure, but based on what you said, I don’t think you disagree on that.

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-27T00:41:20.650Z · EA · GW

I think you are saying that, although utility may exist arbitrarily far away (in time/space), the likelihood of it existing tends to zero...

Hi Vasco, no I am not saying that at all. Sorry, I don’t know how best to express this. I never said utility should approach zero at all; I said your discount could be infinitesimally small if you wish. So utility declines over time, but that does not mean it needs to approach zero – in the limit it can tend to a positive number (even one close to 1) while still allowing a preference ordering.

For example, consider the series where you start with 1, then subtract a quarter, then subtract a 1/8 from that, then a 1/16 from that, and so on, which goes: 1, 3/4, 5/8, 9/16, 17/32, 33/64, ... This does not get closer to zero over time – it gets closer to 0.5. But each point in the series is also smaller than the previous one, so you can put them in order.
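Written symbolically (just restating the same example), the series is

$$x_n = 1 - \sum_{k=2}^{n+1} \frac{1}{2^k} = \frac{1}{2} + \frac{1}{2^{n+1}}, \qquad n = 0, 1, 2, \ldots$$

Each term is strictly smaller than the one before, so the terms admit a strict preference ordering, yet $x_n \to \tfrac{1}{2}$ rather than zero.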

Utility could tend to zero if there was a constant discount rate applied to account for a small but steady chance that the universe might stop existing. But it would make more sense to apply a declining discount rate, so there is no need to say it tends to zero or any other number. 

In short if there is a tiny tiny probability the universe will not last forever then that should be sufficient to apply a preference ordering to infinite sequences and resolve any paradoxes involving infinity and ethics.

I think allowing for infinities is still a problem.

You can only get those weird infinite ethics paradoxes if you say: let's pretend for a minute that, with 100% certainty, we live in an infinite world, and it is "literally infinite ... not just tend to infinity as interpreted in physics". Which is just not the case!

I mean, you could do that thought experiment, but I don’t see how that makes any more sense than saying: let's pretend for a minute that time travel is possible, and then pointing out that utilitarianism doesn’t have a good answer to whether I should go back in time and kill Hitler if doing so would stop me from going back in time and killing Hitler.

In my view such thought experiments are nonsense; all you are doing is pointing out that in impossible cases where there are paradoxes your decision theory breaks down – well, of course it does.

 

Hope that helps.

 
 

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-27T00:15:11.063Z · EA · GW

I don’t see why that is different from saying "But, if you're a risk-neutral expected value maximizing total utilitarian, you should be trying to increase the probability of a [very large number]* aggregate or reduce the probability of a negative [very large number]* aggregate (or both), and at [essentially] any finite cost and fanatically."

I don’t think you need infinities to say that very small probabilities of very big positive (or negative) outcomes mess up utilitarian thinking. (See Pascal's Mugging or the Repugnant Conclusion.)

My claim is that any paradox with infinities is very easily resolvable (e.g. by noting that there is some chance the universe is not infinite, etc) or can be reduced to a well-known existing ethical challenge (e.g. utilitarianism can get fanatical about large numbers).

I hope that explains where I am coming from and why I might say that actually you "can ignore infinite cases".
 

* E.g. TREE(3)

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-26T20:20:43.200Z · EA · GW

I haven't read Amanda's work so I cannot say for certain, but yes, this sounds correct. My view would basically equate to moral time discounting. (If you think in a trillion trillion years just maybe the universe might not exist, you should discount any good done in a trillion trillion years.)

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-26T20:14:33.385Z · EA · GW

Maybe there is some nuance needed. My perhaps out-of-date understanding of physics is that:

  1. The universe is expected to die a heat death, so even if it goes on forever, in some sense utility is finite – so, at least from the point of view of infinite ethics, there is nothing to worry about. Wikipedia describes this as the current leading theory here.
  2. Quantum mechanics many-worlds theories suggest the universe might be very, very big but not infinite. (I don’t have a good source here.)
  3. Physicists only use infinities in terms of limits, and as far as I know never use the kind of set-theoretic infinities that come up in infinite ethics philosophy, which have no basis in the real world as they are inherently paradoxical. See my comment here.

 

I don’t know the types 1-4 of which you talk. 

Either way, even if you still don’t believe 1-3 above, the main point I was making – that even the possibility of the universe being finite is sufficient – remains.

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-26T19:59:28.295Z · EA · GW

I think you missed my point a bit. Nothing I said was meant to challenge impartiality. I am not at all saying that people further away in time and space are any less intrinsically morally relevant, only that if you ascribe some non-zero probability to the universe being finite then you can apply a preference ordering. All else being identical, helping someone now is better than helping someone after the universe might no longer exist because, you know, the universe might not exist then (not because they are less morally relevant). And so, ta-da, all paradoxes to do with infinite ethics go away, as you can no longer shift utility infinitely into the future.

Comment by weeatquince on Why I am happy to reject the possibility of infinite worlds · 2022-12-26T02:35:16.764Z · EA · GW

As far as I can tell, you don't even need to go as far as rejecting all "possibility" of infinite worlds to respond to Amanda's paradox / infinite-worlds paradoxes. All you need to do is put a non-zero probability on the world being finite.

As long as you have a non-zero chance of finiteness (even if that chance limits to zero, even if it's only a chance in a trillion years' time), then you can apply a preference ordering favouring sooner (or closer) events. So the paradoxes I've seen do not apply (or are simply issues to do with using very large non-infinite numbers). See my comments here https://forum.effectivealtruism.org/posts/CeeeLu9PoDzcaSfsC/on-infinite-ethics?commentId=Aw7W2LNxbtqLiMPe7 and other comments on that article. (Maybe I've missed something, but I've yet to see a way in which "infinite worlds wreaks havoc in ethical decisions".)

Sorry, I've not read Amanda's thesis so I don't know her language. My best guess is that, in Amanda's language, it means this: for any action you can consider that shifts the world from w1 to w2, where w1 and w2 appear to be (non-identical) qualitative duplicates to an impartial observer, by applying some chance of finiteness you are saying there is some chance that they are not truly qualitative duplicates. You can then favour one world above the other based on how close to you the positive utility is, so they are not incomparable to you, and there are no problems for any actual decision makers.

That said I also think it is reasonable to think the world is finite based on current physics so that's also a good take if it works for you.

Comment by weeatquince on No Masters/PhD, No Finance Background, No Programming - What are the options? · 2022-12-23T08:45:25.958Z · EA · GW

This group might have useful advice for your background: https://www.highimpactengineers.org/ – you could reach out to them.

Good luck with the career moves

Comment by weeatquince on A Case for Voluntary Abortion Reduction · 2022-12-20T22:51:22.529Z · EA · GW

In case it is helpful, I copy my comment from below here:

I am not the best person to answer this question, but will do my best:

My understanding is that FEM only works through large public radio information campaigns. There is no behind-the-scenes work, that I know of, where they would or could promote abortion. So I think it highly unlikely that they have done any work on abortion.

Maternal Health Initiative is a few months old. They are still at the scoping and research stage so I cannot comment on their plans.

I hope that helps

Comment by weeatquince on A Case for Voluntary Abortion Reduction · 2022-12-20T22:50:21.793Z · EA · GW

I am not the best person to answer this question, but will do my best:

My understanding is that FEM only works through large public radio information campaigns. There is no behind-the-scenes work, that I know of, where they would or could promote abortion. So I think it highly unlikely that they have done any work on abortion.

Maternal Health Initiative is a few months old. They are still at the scoping and research stage so I cannot comment on their plans.

I hope that helps

Comment by weeatquince on Shapley values: Better than counterfactuals · 2022-12-20T14:57:30.644Z · EA · GW

Hi Nuno, Great post. 

I am thinking about how to calculate Shapley values for policy work. So far I am just getting confused, so would love your input.

1.

How should one think about the case where the government is persuaded to take some action? In general, how would you recommend calculating the Shapley value of persuading someone to do something?

If I persuade you to donate out of the goodness of your heart to a charity, then I assume you would say the value is impact-split between me as the persuader and you as the giver (and the charity and other actors). But what if I persuade you to do something for a non-altruistic reason – say, I tell you that donating would be good for your company's image and sales would go up – would you imagine that is the same? My naive reading of your post is that in the second case I get 100% of the value (minus the split with the charity and other actors). (I try to make this concrete with a toy sketch at the end of this comment.)

2.

How should one think about crowd actions? If I organise a ballot initiative on a good thing and then 1.5 million people vote for it and it happens and makes the world better, I assume I claim something like 50% of the value (being 50% responsible for each vote)? What about the case where there is no actual vote, but I use the fact (gathered from survey data) that 1.5 million people say they would vote for good thing x, and persuade policy makers to adopt x? I assume in this case I get 100% of the value of x, as the population did not take action. Is that how you would see it?
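To make question 1 concrete, here is a minimal sketch of the calculation as I understand it (Python; the two-player game, the 100-unit payoff and all the names are hypothetical, purely for illustration):

```python
from itertools import permutations

def shapley_values(players, value):
    # Average each player's marginal contribution over every ordering
    # in which the players could have "arrived".
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += (value(coalition) - before) / len(orderings)
    return totals

# Hypothetical two-player game: a donation produces 100 units of impact
# only if both the persuader and the giver play their part.
def donation_value(coalition):
    return 100.0 if {"persuader", "giver"} <= coalition else 0.0

print(shapley_values(["persuader", "giver"], donation_value))
# -> {'persuader': 50.0, 'giver': 50.0}, i.e. an even split of the impact
```

My question is then whether, when the persuasion works via a non-altruistic motive, the giver's 50% share should instead accrue to the persuader.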

Comment by weeatquince on EA career guide for people from LMICs · 2022-12-20T14:33:34.487Z · EA · GW

This was an amazing post – well done!! :-)

Comment by weeatquince on A Case for Voluntary Abortion Reduction · 2022-12-20T14:29:49.395Z · EA · GW

Thank you for writing this. I had questions about this come up a few times when I was community building, so it is helpful to see an effective altruism discussion of the topic.

 

– – 

One area of your post that confuses me, where (intuitively) I disagree with you, is your pushback against family planning charities.

My understanding is that the charities you mention, Family Empowerment Media and Maternal Health Initiative, are trying to empower women with knowledge about and access to contraception. This supports women's autonomy and right to decide on their family, and is good for maternal health and childhood health (due to more spaced-out births). Neither charity has, that I know of, taken a stance for or against abortion, and they do not work on abortions; but my assumption would be that more deliberate use of contraception would mean fewer unwanted pregnancies and fewer abortions. So, if you care about the moral value of embryos, then supporting access to contraception could be among the most effective places to donate.

So, I would have expected you to advocate that people donate more to these charities, not less.

(I can see a case, for moral uncertainty reasons, for not donating to such charities; but I could also see a case, for the same reasons, for avoiding working on abortion reduction at all, so I am not really sure where to take that line of argument.)

 

Disclaimer: I work for Charity Entrepreneurship, the organisation that incubated both of the above charities. All views are my own and do not represent Charity Entrepreneurship or anyone else.

Comment by weeatquince on Why development aid is a really exciting field · 2022-12-10T22:22:43.080Z · EA · GW

I think this thread might be overestimating the tractability of this problem... Aid effectiveness definitely isn't a new topic ...

As far as I can tell (as the author of the Charity Entrepreneurship report), driving short-term changes to aid budgets and spending patterns is tractable, but driving long-term improvements is much less tractable. It is an inherently non-sticky area of policy, where successive governments tend to rewrite the rulebook.

If this hypothesis is correct, it explains the trend you identify here. It also suggests that work in this space is limited, as improvements to aid quality may only last a few years. Charity Entrepreneurship still considered it high value despite this limitation.

See: https://3394c0c6-1f1a-4f86-a2db-df07ca1e24b2.filesusr.com/ugd/26c75f_9fabb713c4c245f4beab12d0b0ca491d.pdf

Comment by weeatquince on SFF Speculation Grants as an expedited funding source · 2022-12-09T22:38:45.464Z · EA · GW

Hi, Great you are doing this. I have what is hopefully a very quick question:

What is your best guess as to when the next standard SFF grant round will be? Useful to know even if it is a very rough guess – it could be helpful for deciding whether to apply, to suggest that people short of funding apply for these grants, or to wait for the next SFF round.

Thanks!

Comment by weeatquince on Why did CEA buy Wytham Abbey? · 2022-12-07T00:18:39.808Z · EA · GW

I really like that you found a counterargument to your own post and posted it. Go you :-)

Comment by weeatquince on The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives · 2022-11-20T09:09:16.358Z · EA · GW

The Economist, last week: "[EA] doesn’t seriously question whether attempting to quantify an experience might miss something essential about its nature. How many WALYs for a broken heart?" (Nov 2022, Source)

HLI, this week: "...Hence, an overall effect of grief per death prevented is (0.72 x 5 x 0.5) x 4.03 = 7.26 WELLBYs" 

 

Great article – well done!!!

Comment by weeatquince on Ask Charity Entrepreneurship Anything · 2022-11-16T11:35:09.794Z · EA · GW

Choosing cause areas

In general, CE looks at cause areas where we can have a significant, legible and measurable impact.

Traditionally, this has meant focusing on cause areas that within EA are commonly considered near-termist, such as animal welfare, global health and wellbeing, family planning and mental health.

However, we think that there are cause areas that fall outside of this remit, potentially including ones traditionally within the longtermist space, where we could find interventions that may have a significant impact and where there are concrete feedback loops. In fact, this is what prompted our research team to look into health security as a cause area.

 

During our intervention prioritisation research

Within health security, there are probably some differences in the way we have defined and operationalised this as a cause area and prioritised interventions compared to others in the community:

  1. We have taken a broader focus than GCBRs (Global Catastrophic Biological Risks), and thought about things like antimicrobial resistance, zoonotic pandemics and other biothreats which are very unlikely to have GCBR potential but which may, in expectation, be quite high priority to work on.
  2. We are probably less likely to be excited by ideas that are tailored solely towards tail-risk GCBR threats. This might include something like civilizational refuges, which we imagine are only useful for extinction risks. The reason we are less excited about these ideas is not necessarily that we think the risk of such events is low, but firstly that we think there are unlikely to be strong ways to measure the impact of work in this area in the short to medium term, and secondly that such work has less impact in the case of lower-than-extinction-level risks.
  3. To operationalise our research and the cost-effectiveness estimates that we have made, we looked at the impact of our interventions using a time frame of the next 50 years. We do not think this is a perfect operationalisation, but think it is fairly useful. We are very sceptical of our ability to know what the biggest risks to the world would be after 50 years.
     

Credit note: this answer was initially drafted by former CE staff member Akhil Bansal.

Comment by weeatquince on A dozen doubts about GiveWell’s numbers · 2022-11-16T00:03:30.718Z · EA · GW

Thank you Joel. Makes sense. Well done on finding these issues!

Comment by weeatquince on A dozen doubts about GiveWell’s numbers · 2022-11-11T10:30:49.346Z · EA · GW

I like this post.

One key takeaway I took from it was to have more confidence in GiveWell's analysis of its current top recommended charities. The suggested changes here mostly move numbers by 10%-30%, which is significant but not huge. I do CEAs for my job and this seems pretty good to me. After reading this I feel like GiveWell's cost-effectiveness analyses are OK: not perfect, but as rough decision heuristics they probably work fine. CEAs are ultimately rough approximations at best, and this is how CEAs should be used / are used by GiveWell.

My suggestion to GiveWell on how they can improve, after reading this post, would be: maybe it is more valuable for GiveWell to spend their limited person-hours doing rough assessments of more varied and different interventions than perfecting their analyses of these top charities. I would be more excited to see GiveWell committing to develop very rough, speculative, back-of-the-envelope-style analyses of a broader range of interventions (mental health, health policy, economic growth, system change, etc) than to keep improving their current analyses to perfection. (Although maybe more of both is the best plan, if it is achievable.)

I think this is a sentiment that MichaelPlant (one of the post authors) has expressed in the past; see his comment here. I would be curious to hear if the post authors have thoughts, after doing this analysis, on the value of GiveWell investing more in exploration versus more in accuracy.

Comment by weeatquince on Mildly Against Donor Lotteries · 2022-11-05T11:21:31.398Z · EA · GW

If with some research you have a good chance of identifying better donation opportunities than "give to GiveWell or EA Funds", I'd be excited for you to do that and write up your results. 

Interestingly, I recently tried this (here). It led to some money moved, but less than I hoped. I would encourage others to do the same, but also to have low expectations that anyone will listen to them or care or donate differently; I don’t expect the community is that agile / responsive to suggestions.

In fact, the whole experience made me much more likely to enter a donor lottery – I now have a long list of places I expect should be funded and nowhere near enough funds to give, so I might as well enter the lottery and see if that helps solve the problem.

Comment by weeatquince on Warning Shots Probably Wouldn't Change The Picture Much · 2022-11-01T08:30:02.534Z · EA · GW

I don’t follow US pandemic policy closely, but wasn’t some $bn (albeit much less than $30bn) still approved for pandemic preparedness, and isn’t more still being discussed (a very quick Google points to $0.5b here and $2b here, etc, and I expect there is more)? If so, that seems like a really significant win.

Also, your reply was about government, not about EA or adjacent organisations. I am not sure anyone in this post / thread has given any evidence of a "valiant effort" yet, such as listing campaigns run or policy papers written. The only post-COVID policy work I know of (in the UK, see comment below) seemed very successful, and I am not sure it makes sense to update against "making the government sane" without understanding what the unsuccessful campaigns have been. (Maybe also Guarding Against Pandemics – are they doing stuff that people feel ought to have had an impact by now, and has it?)

Comment by weeatquince on Warning Shots Probably Wouldn't Change The Picture Much · 2022-11-01T08:14:28.967Z · EA · GW

I just wanted to share, as my experience was so radically different from yours. Based in the UK during the pandemic, I felt like:

  • No one was really doing anything to try to "make the government sane around biorisk". I published a paper targeted at government on managing risks. I remember at the time (in 2020) it felt like no one else was shifting focus to policy change based on lessons learned from COVID.
  • When I tried doing stuff, it went super well. As mentioned here (and here), this work went much better than expected. The government seemed willing to update and commit to doing better in future.

I came away from the situation feeling that influencing policy was easy and impactful and neglected, and hopeful about what policy work could achieve – but disappointed that more was not being done to "make the government sane around biorisk".
 

This leads me to the question: why are our experiences so different? Some hypotheses I have are:

  • Luck / randomness – maybe I was lucky, or US advocates were unlucky, and we should assume the truth lies somewhere in the middle.
  • Different country – the US is different, harder to influence, or less sane than some (or many) other places.
  • Different methodology – the standard policy advocacy sector really sucks: it is not evidence-based and there is little monitoring and evaluation (M&E). It might be that advocacy run in an impact-focused way (as was happening in the UK) is just much better than funding standard advocacy organisations (which I guess was happening in the US). See discussion on this here.
  • Different amount of work – your post mentions that a "valiant effort" was made, but does not evidence this, which makes it hard to form an opinion on what works and why. It would be great to get an answer on this (see Susan's comment), e.g. links to a few campaigns in this space.

Grateful for your views.

Comment by weeatquince on Female effective altruists (ideally with research/action in development economics) to suggest for a colloquium talk? · 2022-10-03T23:13:35.971Z · EA · GW

I don't know that she'd call herself an effective altruist, but if you just want someone to talk about doing effective development spending then I'm not sure that it matters...

Comment by weeatquince on Some concerns about policy work funding and the Long Term Future Fund · 2022-10-02T07:39:45.206Z · EA · GW

Note to anyone still following this: I have now written up a long list of longtermist policy projects that should be funded, this gives some idea how big the space of opportunities is here: List of donation opportunities (focus: non-US longtermist policy work) 

Comment by weeatquince on Female effective altruists (ideally with research/action in development economics) to suggest for a colloquium talk? · 2022-10-01T18:43:01.176Z · EA · GW

Or Esther Duflo – obviously great, and with sensible views on development.

Comment by weeatquince on Female effective altruists (ideally with research/action in development economics) to suggest for a colloquium talk? · 2022-10-01T18:38:34.408Z · EA · GW

Rachel Glennerster is excellent. Insofar as I know them, I'd trust her views on the effectiveness of development policy. She did great things as Chief Economist of the UK's Department for International Development. She is a GWWC member and has talked at an EA conference. I'm not sure, but I think she might now be in the US.

Comment by weeatquince on Announcing the Future Fund's AI Worldview Prize · 2022-09-28T09:27:11.937Z · EA · GW

Nick, very excited by this and to see what this prize produces. One thing I would find super useful is to know your probability of a bio x-risk by 2100. Thanks.

Comment by weeatquince on How and when should we incentivize people to leave EA bubbles and explore? · 2022-09-25T09:14:51.323Z · EA · GW

I think if we want people to leave EA, build skills and experience, and come back and share those with the community, then the community could do a better job of listening to those skills and experiences. I wanted to share my story in case it is useful:

– –

My experience is of going away, learning a bunch of new things, and coming back and saying "hey, here are some new things" – but mostly people seem to say "that's nice" and keep on doing the old things.

As a concrete example, one thing among many: I ended up going and talking to people who work in corporate risk management, national risk management and counterterrorism. I found out that the non-EA expert community worries about putting too much weight on probability estimates over other ways of judging risks, so I came back and said things like: hey, are we focusing too much on forecasts and using probabilistic risk management tools rather than more up-to-date best-practice risk management tools?

And then what.

I do of course post online and talk to people. But it is hard to tell what this achieves. There are minimal feedback loops, and EA organisations are not sufficiently transparent about their plans for me to tell if my efforts amount to anything. Maybe it was all fine all along and no one was making these kinds of mistakes; or maybe I said "hey, there is a better way here" and everyone changed what they are doing; or maybe the non-EA experts are all wrong and EAs know better, and there is a good reason to think this.

I don’t know but I don’t see much change.

– – 

Now of course this is super hard!!

Identifying useful input is hard. It is hard to tell apart a "hey, I am new to the community, don't understand important cruxes, think thing x is wrong, and am not actually saying anything particularly new" from a "hey, I left the community for 10 years but have a decent grasp of key cruxes and a very good reason why the community gets thing x wrong, and it is super valuable to listen to me".

It can even be hard for the person saying these things to know which category they fall into. I don't know whether my experiences should suggest a radical shift in how EAs think or whether they are already well known.

And communication is hard here. People who have left the community for a while won't be fully up to date with everything, or have deep connections, or know how to speak the EA-speak.

– – 

So if we value people leaving and learning, then we should as a community make an effort to value them on their return. I like your ideas. I think celebrating such people and improving community support structures needs to happen. I am not sure how best to do this. Maybe a red-team org that works with people returning to the community to assess and spread their expertise. Maybe a prize for people bringing back such experience. I also think much more transparency about organisations' theories of change and strategies would help people at least get a sense of how organisations work and what, if anything, is changing.

Comment by weeatquince on Announcing the Future Fund's AI Worldview Prize · 2022-09-25T08:44:42.127Z · EA · GW

Sorry – scrolling down, I realise I am making much the same point as MichaelDickens' comment below. Hopefully I have added some depth or something useful.

Comment by weeatquince on Announcing the Future Fund's AI Worldview Prize · 2022-09-25T08:42:54.330Z · EA · GW

"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [broadened the scope of the prizes beyond just influencing their probabilities]"



Examples of things someone considering entering the competition would presumably consider out of scope are:

  • Making a case that AI misalignment is the wrong level of focus – even if AI risks are high, it could be that AI risks and other risks are very heavily weighted towards specific risk-factor scenarios, such as a global hot or cold war. This view is apparently expressed by Will (see here).
  • Making a case based on tractability – that a focus on AI risk is misguided because the ability to affect such risks is low (not too far away from the views of Yudkowsky here).
  • Making the case that we should not put much decision weight on predictions of future risks – e.g. because long-run predictions of future technology are inevitably unreliable (see here), or because modern risk assessment best practice says that probability estimates should only play a limited role in risk assessments (my view, expressed here), or other reasons.
  • Making the case that some other x-risk is more pressing, more likely, more tractable, etc.
  • Making the case against the FTX Future Fund's underlying philosophical and empirical assumptions – this could be claims about the epistemics of focusing on AI risks (for example, relating to how we should respond to cluelessness about the future), or decision-relevant views about the long-run future (for example, that it might be bad and not worth protecting, that there might be more risks after AI, or that longtermism is false).


It seems like any strong case falling into these categories should be decision-relevant to the FTX Future Fund, but all are (unless I misunderstand the post) currently out of scope.
 

Obviously there is a trade-off. Broadening the scope makes the project harder and less clear, but increases the chance of finding something decision-relevant. I don't have a strong reason to say the scope should be broadened now; I think that depends on the FTX Future Fund's current capacity and plans for other competitions and so on.

I guess I worry that the strongest arguments are out of scope, and that if this competition doesn't significantly update FTX's views then future competitions will not be run and you will not find the arguments you are seeking. So I am flagging this as a potential path to failure for your pre-mortem.