The case of the missing cause prioritisation research

post by weeatquince · 2020-08-16T00:21:02.126Z · EA · GW · 88 comments


  Introduction / summary 
  A. The importance of cause prioritisation research
    In short:
  B. The case of the missing cause prioritisation research
    Community building
    My own values
    In short:
  C. Whodunnit?
      1. The basics – partially happening – 5/10
      2. Different views – not happening – 0/10
      3. Policy and beyond – not happening – 2/10
      4. Beyond RCTs – not happening – 1/10
      5. Beyond speculation (practical longtermism) – partially happening – 6/10
    In short:
  D. Why is this underinvested in and next steps
      1. It is unclear what the theory of change would be for research organisations in this space.
      2. It is difficult to compete with the existing organisations that are just not quite doing this.
      3. This work is not intractable but it is difficult
      4. It is difficult to find cause neutral funding.
    In short 

Introduction / summary 

In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved, and by giving to fund bednets I had already magnified my impact on the world 100 times. 

And this was just the beginning. Obviously bednets were not the most effective charitable intervention, they were just the most effective we had found to date – with just a tiny amount of research. Imagine what topics could be explored next: the long run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities in vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk). I didn’t buy it (back then), but imagine: we could work out what different value sets lead to different causes, and the best charities for each.

As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn’t too surprising – most people don’t seem to care that much about doing charitable giving well, and anyway it was only just coming to light how truly bad our intuitions were at making charitable choices (with the early 2000s aid skepticism movement).

Looking back, I was optimistic. And in some regards my optimism was well-placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable, to shift £m if not £bn of donations to better causes, to help thousands, maybe hundreds of thousands, of people make better career decisions. I am no longer surprised if a colleague, tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity).

However, in terms of the research I was so excited about, of developing the field of how to do good, there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone’s Christmas donations wish list. I think I assumed that someone had got this covered, that GPI or FHI or whoever would have answers, or at least progress on cause research sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don’t think it is happening.

I don’t particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don’t have to).



A. The importance of cause prioritisation research

What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).

It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA aligned foundations give and leads to people seriously considering novel causes such as animal welfare or longtermism.

This post mostly focuses on the current progress and neglectedness of this work over the past few years. But let us start with a quick recap of why cause prioritisation research might be important and tractable. The argument is nicely set out in Paul Christiano’s The Case for Cause Prioritization as the Best Cause (written 2013-14). To give a short summary, Paul says:

1. Some causes are significantly higher impact than others. We theoretically expect and empirically observe impact to be “heavy tailed” with some causes being orders of magnitude more impactful (see also Prospecting for Gold [? · GW]). We should not yet be confident in our top causes and many of our current approaches to improve the world rely on highly speculative assumptions (eg about long term effects). So if we could make progress on prioritisation we should expect to have a large positive impact. 

2. It is reasonable to think that research would make progress, because:

(Also this week 80000 Hours has just written this: Why global priorities research is even more important than I thought)

In short:

Cause prioritisation is hugely valuable to guide how we do good.



B. The case of the missing cause prioritisation research

Let me take you through my story, and set out some of the research gaps as I have experienced them.


Community building

From 2013 until 2017 I ran the EA community in London. I set myself the goal of building a vibrant, welcoming and cohesive community and I like to think I did OK. But occasionally the intellectual framework was just not there. For a while I might say “we are a new community, we don’t yet have the answer to this”, but after a few years the excuse got thin. The research on specific cause areas got deeper, but the cause prioritisation research did not. In particular I struggled to provide materials to people who did not fall close to thinking along classical utilitarian lines.[2]

And it was damaging. It is damaging. More and more, as I look across the EA movement I see that the people who join are not open minded souls keen to understand what it means to do the most good, but people who are already focused on the causes we champion: global development or animal welfare or preventing extinction risk. Now I love my cause committed compatriots, but I do think we are at risk of creating a community that is unwelcoming to the true explorers, a community that is intellectually entrenched and forever doomed to only see those three cause areas.

I think we need to do cause prioritisation from the point of view of different value sets and different cultures. This is important for building a good community, especially for spreading to other countries (as discussed here [EA · GW] and here [EA · GW]). This is also important for reaching truth. Different people with different life experiences will not only ask different questions, but have different hypotheses about what the answers might be.[3] 

I could say more on this but honestly I think most of it is covered in the amazing post Objections to value alignment between EAs [EA · GW] by CarlaZoeC which I recommend you check out.



One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.

After community building I moved back into policy, and most recently have found myself building support for future generations [EA · GW] in the UK Parliament. Not research. Not waiting. But creating change.

From this vantage point it doesn’t feel like the EA community has thought much about policy. For example there is a huge focus on AI policy, but the justification for this is weak. Even if you fully believe the longtermist arguments that top programmers should work on AI alignment, it does not immediately follow that good policy people can have more long term impact in AI policy compared to policy on resilience, macroeconomics, institution design, nuclear non-proliferation, climate change, democracy promotion, political polarisation, etc, etc.

Most of the cause prioritisation research has been focused on how to do good with money. But there is very little on how to do good if you have political capital, public status, media influence and so on. Trying to weigh up and compare all the different policy approaches I list above would be a mighty undertaking and I do not expect answers soon, but it would be nice to see someone trying to take on the task, and not focusing solely on where to shift money. 


My own values

Most recently I have been thinking about what career route to go down next, what my values are, and what has been written on cause prioritisation.

Looking around, it feels like there is a split down the middle of the EA community:[4] 

  1. On the one hand you have the empiricals: those who believe that doing good is difficult, common sense leads you astray and to create change we need hard data, ideally at least a few RCTs.
  2. On the other side are the theorists: those who believe you just need to think really hard and to choose a cause we need expected value calculations and it matters not if calculations are highly uncertain if the numbers tend to infinity.

Personally I find myself somewhat drawn to the uncharted middle ground. Call me indecisive if you like but it appears to me that both ends of this spectrum are making errors in judgement. Certainly neither of the approaches above come close to how well-run government institutions or large successful corporations make decisions.

(I also don’t think these two areas are as far apart as it first seems. If you look at the structural change and policy research GiveWell is interested in it is not too far away from long-termist research suggestions [EA · GW] on institutional change.)

I think this split provides a way of breaking down the work I would love to see:


Beyond RCTs – It would be lovely to see the ‘empiricals’ crew move beyond basic global health, to have them say “great we have shown that you can, despite the challenges, identify interventions that work and compare them. Now let’s get a bit more complicated and do some more research and find other interventions and consider long run effects and so on”. There could be research looking for strong empirical evidence into:

It honestly shocks me that the EA community has had so little progress in this space in a decade.


Beyond speculation – it would be great if the ‘theorists’ looked a bit more at making their claims more credible. From my point of view, I could save a human life for ~£3000. I don’t want to let kids die needlessly if I can stop it. I personally think that the future is really important but before I drop the ball on all the things I know will have an impact it would be nice to have:


In short:

You could categorise this research in a bunch of different ways, but if I had to make a list, the projects I would be super excited to see are:

  1. The basics: I think we could see progress just by doing investigations of a broad range of different potentially top causes and comparisons across causes. (The search for “cause X [? · GW]”).
  2. Consideration of different views and ethics and how this affects what causes might be most important.
  3. Consideration of how to prioritise depending on the type of power you have, be it money or political power or media influence or something else.
  4. Empirical cause selection beyond RCTs. The impact of system change and policy change in international development and more consideration of second order effects.
  5. Theoretical cause selection beyond speculation. Evidence of how to reason well despite uncertainty and more comparisons of different causes.

This research would ensure that we continue to learn how to do good, do not become entrenched in our ways, and take the actions that will have the biggest impact on the world.



C. Whodunnit?

So is anyone doing this? Let’s run through my list.

[Edit: disclaimer, I have looked through organisations’ plans, research agendas and so forth and done the best I can, but I did not invest time in talking to people at all the organisations in this space – so it is possible I may have mischaracterised specific organisations compared to how they would describe themselves – apologies]


1. The basics – partially happening – 5/10

Shallow investigations of how to do good within a few cause areas are being done by Open Philanthropy Project (OpenPhil) and to a lesser extent by Founders Pledge (FP). The main missing part is that there is little written that compares across these different causes or looks at how one might prioritise one cause over another (except for occasional mentions in the FP reports and the OpenPhil spreadsheets here and here). 

More granular, but still high-level, intervention research is being done by Charity Entrepreneurship.


2. Different views – not happening – 0/10

No organisation is doing this. There is no systematic work in this space. The most that is going on is a few individuals or small groups that have taken up specific approaches (still largely hedonistic utilitarianism adjacent) and run with it, such as the Happier Lives Institute (HLI) or the Organisation for the Prevention of Intense Suffering (OPIS).


3. Policy and beyond – not happening – 2/10

No organisation is doing research into how to prioritise if you have political power or media influence or something other than money. 80000 Hours (80K) appeared to do some of this in the past but are now focusing on their priority paths [EA · GW]. They have said that the details of what those paths are may change. It is unclear if such changes indicate that they will do more research themselves or if they expect to change in light of others’ research. Either way the rough direction feels fairly set, so I do not expect much more high level cause prioritisation research from them soon.


4. Beyond RCTs – not happening – 1/10

GiveWell keeps setting out plans to expand the scope of their research (see 2018 plans and 2019 plans) and, in their own words they “failed to achieve this goal” (see 2018 review and 2019 review). When asked they said that “We primarily attribute this to having a limited number of staff who were positioned to conduct this work, and those staff having many competing demands on their time … we are continuing to hire and expect this will enable us to make additional progress in new areas.” I am not super optimistic given their 2020 plan for new research is less ambitious than previously insofar as it focuses solely on public health.

Open Philanthropy are mostly deferring to GiveWell, although they express support for GiveWell’s unmaterialised plans to expand their research and they are funding the Center for Global Development’s policy work. The only useful new research in this space seems to be a small amount of work from Founders Pledge; it is unclear to what extent they plan to do more work in this area.


5. Beyond speculation (practical longtermism) – partially happening – 6/10

The best source of research and experimentation in this space is again OpenPhil. They are experimenting with trying to influence policy related to the far future and doing research on topics relevant to longtermism. However, as already highlighted, it is unclear whether OpenPhil are comparing different causes, rather than simply looking out for giving opportunities across a variety of causes and seeing what they can fund and what the impact of that will be.

The Global Priorities Institute (GPI) are looking to improve the quality of thinking in this space. They have so far produced only philosophy papers. It is useful stuff and valuable for building traction in academia, but personally I am pretty sceptical about humans solving philosophy soon and would rather have some answers within the next few decades.

There are a few others doing small amounts of research on specific topics, such as the Center on Long-Term Risk (CLR) and the Future of Humanity Institute (FHI).

Overall there seems to be a lot of longtermism research, but the amount that is going into what you could plausibly call cause prioritisation is small, and, with the possible but unclear exception of OpenPhil, progress in this space is minimal.


Now this is just one way of thinking through the work I would like to see, based on my subjective experiences of navigating this community for the past decade. I am sure this could be done differently, but overall I give the EA community a whopping 28% for cause prioritisation research. Better than Titanic II (tagline: they said it couldn't happen twice) but not quite as good as The Emoji Movie.


In short:

There is not nearly enough work in this space.



D. Why is this underinvested in and next steps

I think that this space needs new organisations (and/or existing organisations to significantly refocus in this direction). But before you swallow everything I have said hook, line and sinker and head off to start a cause prioritisation organisation, I think we need to examine why this work might be underinvested in and what we can learn.

In the order that I think is important, some of the challenges are:


1. It is unclear what the theory of change would be for research organisations in this space.

Different organisations have different theories of change for research.

But for a new organisation to solely focus on doing the research that they believed would be most useful for improving the world it is unclear what the theory of change would be. Some options are:

These paths are valid but they have a difficult extra step. Any organisation entering this space needs to be doing multiple things at once and needs to convince funders that they can create value from the research. For example, Let’s Fund has done some useful research but has struggled to demonstrate that they can turn research into money moved.

I do not have a magic solution to this. Ideally a new organisation in this space would have enough initial cause neutral funding to allow a reasonable amount of research to be done to demonstrate effectiveness. One idea is to have some level of pre-commitment from a large funder (or from an organisation such as OpenPhil or 80K) that they would use the research. Another idea is to have good influencers on board at the start; for example, for policy research, having an ex-senior politician on board could help make the case that your research would be noticed – the Copenhagen Consensus seemed to start this way.

(Also, I have never worked in academia so there may be theories of change in the academic space that others could identify.)


2. It is difficult to compete with the existing organisations that are just not quite doing this.

I think one of the reasons why not enough has been done in this space is that organisations and individuals reach conclusions about what is most important for themselves (not necessarily in a way that is convincing to others) and then choose to focus on that.

For example 80000 Hours have [edited: focused on specific] priority paths. The Future of Humanity Institute has focused heavily on AI, setting up the Centre for the Governance of AI. Even GiveWell used to have a broader remit before they focused in on global health. (There are of course advantages to focus. For example GiveWell’s focus led to them significantly improving their charity recommendations, they no longer recommend terrible approaches like microfinance, but it has limited exploration.)

I think that people are hesitant to do something new if they think it is already being done, and funders want to know why a new thing is different. So the abundance of organisations that used to do cause prioritisation research, or that do research in a subcategory of it, deters other organisations from starting up.

My solution to this is to write this post to convince others that this work is not being done.


3. This work is not intractable but it is difficult

This work is difficult. It is not like standard academic research, as it needs to pull together a vast variety of different areas and topics, from ethics, to economics, to history, to international relations. Finding polymaths to compare across different interventions of different types is very difficult.

For example, the difficulty of finding good staff has clearly limited GiveWell’s ability to expand their research.

I suggest new organisations in this space might want to consider working differently, for example having a large budget for contracting top quality research across different fields and lower numbers of paid staff.

I also suggest interdisciplinary input into drafting research agendas. (One economics student told me that when reading the GPI research agenda, the economics parts read like it was written by philosophers. Maybe this contributes to the lack of headway on their economics research plans.)

When drafting this post I began to wonder if such research is actually intractable. I think Paul’s arguments counter this somewhat, but the thing that gives me the most hope is that some of the best research in this space appears to be random posts from individuals on the EA forum. For example: Growth and the case against randomista development [EA · GW], Reducing long-term risks from malevolent actors [EA · GW] (part funded by CLR), Does climate change deserve more attention within EA [EA · GW], Increasing Access to Pain Relief in Developing Countries [EA · GW], High Time For Drug Policy Reform [EA · GW]. I am also impressed with new organisations such as the fledgling Happier Lives Institute who are challenging the way we think about wellbeing. This makes me think there is likely a lot of tractable, important cause prioritisation research that could be done, and the problem is a lack of effort, not tractability.


4. It is difficult to find cause neutral funding.

I think funders like to choose their cause and stick with it so there is a lack of cause neutral funding. 

For example, Rethink Priorities looked really exciting when it got started, with their co-founder expressing strong support for practical prioritisation research. But their research has mostly focused on animal welfare interventions, not on comparing between causes. They cite having to follow the funding as the main reason for this.

I think funders who have benefited from cause prioritisation research done to date should apportion a chunk of their future funding to support more such research.


In short 

There are a bunch of barriers to good cause prioritisation research. But I believe they are all surmountable, and they do not make a strong case that such research is intractable.




So there we have it, dear reader: my musings and thoughts on cause prioritisation, mixed in with a broad undercurrent of dissatisfaction with the EA community. Maybe I am just more jaded in my old age (early 30s), but I think I was more optimistic about the intellectual direction of the EA community when it had no power or influence nearly a decade ago. Intellectual progress in the field of doing good has been much slower than I hoped.

But I am an optimistic fellow. I do think we can make progress. There has been just enough traction to give me hope. It just needs a bit more effort, a bit more searching.

So my request to you. Either disagree with me, tell me that sufficient progress is happening, or change how you act in some small way. Be a bit more uncertain, a bit more willing to donate to, fund, or go into cause prioritisation research. And if you work in an EA org, please stop focusing so much on the cause areas you each believe are most important and increase the amount of cause neutral work and funding that you do.

I am considering starting a new organisation in this space with a focus on policy interventions. If you want to be involved or have ideas, or have some reason to think this is not actually a good use of my time, then comment below or message me. 

And do comment. I want your thoughts big or small. Most of my recent posts on this forum had minimal comments.


Did you read the post [EA · GW] by CarlaZoeC that I linked to above? I hope not because they write better than me so I am going to end by stealing their conclusion:

“EA is not your average activist group on the market-place on ideas on how to live. It has announced far greater ambitions: to research humanity’s future, to reduce sentient suffering and to navigate towards a stable world” 

“But if the ambition is great, the intellectual standards must match it. … Humanity lacks clarity on the nature of the Good, what constitutes a mature civilization or how to use technology. In contrast, EA appears to have suspiciously concrete answers.”

“I wish EA would more visibly respect the uncertainty they deal in. Indeed, some EAs are exemplary - some wear uncertainty like a badge of honour.... For them, EA is a quest, an attempt to approach big questions of valuable futures, existential risk and the good life, rather than implementing an answer. I wish this would be the norm. I wish all would enjoy and commit to the search, instead of pledging allegiance to preliminary answers. … [it is like that that we] have the best chance of succeeding in the EA quest.” 





[1] This is based on my experience of diving into a range of activism spaces, charity projects and other assorted communities of people trying to do good. It is very rare for people to think strategically about what to focus on to do the most good. GiveWell also make the case that charitable foundations tend not to think this way in this post.

[2] This experience did lead me to start an EA London charity evaluation giving circle for people who had strong moral intuitions that equality and justice were of value. Write up here [EA · GW].

[3] This sentence is a quote from the discussion about the value of diversity in the most recent 80K podcast. But for more on this I also recommend checking out In Defence of Epistemic Modesty [EA · GW].

[4] I accept this is somewhat caricatured, but I maintain that many people in EA fall close to these archetypes. (Except for the effective animal activism folk who nicely bridge this gap, maybe I should just go join them.)

[5] Look out for my upcoming report with CSER on this topic


Comments (sorted by top scores)

comment by trammell · 2020-08-17T00:02:30.056Z · EA(p) · GW(p)

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.

In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.

I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers between which you’re considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government’s approach to prioritizing between, say, increasing this year’s GDP and decreasing this year’s carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the lowering emissions side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean

  • less consumption this year,
  • less climate damage next year,
  • less accumulated capital next year with which to mitigate climate damage,
  • more of an incentive for people next year to allow more emissions,
  • more predictable weather and therefore easier production next year,
  • …but this might mean more (or less) emissions next year,
  • …and so on.

It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
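The approach described above – write down a stylized model of the world, then solve for the allocation of resources that maximizes your objective function – can be sketched in miniature. The model below is entirely invented for illustration (the functional forms, coefficients, and names are mine, not from any real IAM): a single agent divides a fixed budget between consumption now and emissions abatement, where abatement makes next year's production easier, and we simply grid-search for the optimal split.

```python
import math

def world_value(consumption_now, abatement_now):
    """Toy 'integrated assessment' objective: how good the world is as a
    function of two levers. Each lever has diminishing (log) returns, and
    there is one interaction: abatement raises next year's output.
    All functional forms and coefficients are made up for illustration."""
    next_year_output = 1.0 + 1.5 * abatement_now    # less climate damage -> easier production
    return (math.log(1 + consumption_now)           # value of consuming now
            + 0.9 * math.log(1 + next_year_output)  # discounted value of next year
            - 0.05 * consumption_now ** 2)          # emissions damage from consuming now

def best_allocation(budget, steps=1000):
    """Grid-search the budget split that maximizes the objective."""
    best = None
    for i in range(steps + 1):
        c = budget * i / steps   # spend c on consumption...
        a = budget - c           # ...and the remainder on abatement
        v = world_value(c, a)
        if best is None or v > best[0]:
            best = (v, c, a)
    return best

value, consume, abate = best_allocation(budget=1.0)
```

With these made-up numbers the optimum is interior – some of the budget goes to each lever – which is the point of the exercise: the ranking of "causes" falls out of solving the model, rather than out of separately listing and totting up each lever's consequences.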

If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.

…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.

comment by Jack Malde (jackmalde) · 2020-08-17T06:00:22.208Z · EA(p) · GW(p)

Hey Phil. I'm someone who is very interested in the work of GPI and am impressed by what I have seen so far. I'm looking forward to seeing what the new economists get up to!

I had a look at Leopold's paper a while back, have listened to you on the 80K podcast and have watched a few of GPI's videos including Christian Tarsney's one on the epistemic challenge to longtermism. I notice that in a lot of this research, key results are highly sensitive to the value of certain parameters. My memory is slightly hazy on specifics but I think in Christian's paper the validity of longtermism depends largely on the existence and frequency of exogenous nullifying events (ENEs) that can essentially wipe out any trajectory change efforts that came before (apologies if I'm not being perfectly accurate here).

I am wondering if empirical estimation of key parameters is a gap in current cause prioritisation research. Because the value of these parameters is so important in determining results from the models, it seems very high-value to more accurately estimate these parameters. Do you know if anyone is actually doing this? Is anyone for example looking into the nature of ENEs? Is this something new economists at GPI might engage in? If this type of research isn't suitable for GPI, does GPI need closer links to other research institutions that are interested in carrying out more empirical research?

Replies from: trammell
comment by trammell · 2020-08-17T23:14:35.016Z · EA(p) · GW(p)

Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2020-08-18T09:05:43.551Z · EA(p) · GW(p)

OK that’s good to hear. It probably makes sense to spend some time laying a solid theoretical base to build on. I’m aware of how new GPI still is so I’m looking forward to seeing how things progress!

comment by weeatquince · 2020-08-17T20:20:11.578Z · EA(p) · GW(p)

Hi, Thank you for this really helpful comment. It was really interesting to read about how you work on cause prioritisation research and use IAMs. Glad that GPI will be expanding.

comment by FCCC · 2020-08-23T02:55:55.761Z · EA(p) · GW(p)

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be. 

I don't think that's right. I've written about what it means for a system to do "the optimal thing" [? · GW] and the answer cannot be that a single policy maximizes your objective function:

Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...

Unless by "policy" you mean the entirety of what government does, in which case yes. But given that you're going to consider one area at a time, and you're "only including all the levers between which you’re considering", you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily going to be the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than think about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.

Replies from: evelynciara
comment by BrownHairedEevee (evelynciara) · 2020-08-24T20:23:09.312Z · EA(p) · GW(p)

I wonder if we could create an open source library of IAMs for researchers and EAs to use and audit.

comment by Milan_Griffes · 2020-08-19T18:08:06.773Z · EA(p) · GW(p)

At a glance, Salesforce's AI Economist seems like an attempted implementation of an IAM.

comment by Ozzie Gooen (oagr) · 2020-08-16T14:17:41.285Z · EA(p) · GW(p)

Thanks for the post! Much of it resonated with me.

A few quick thoughts:

1. I could see some reads of this being something like, "EA researchers are doing a bad job and should feel bad." I wouldn't agree with this (mainly the latter bit) and assume the author wouldn't either. Lots of EAs I know seem to be doing about the best that they know of and have a lot of challenges they are working to overcome. 

2. I've had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that's getting relatively little attention. I'm not as confident as you seem to be about this, but agree it seems to be an issue.

3. I would categorize many of the issues as being systemic across different sectors. I think significant effort in these areas would require bold efforts with significant human and financial capital, and such clusters are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.

I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people. 

If you want a job in Academia, it's risky to do things outside the common tracks, and if you want one outside of Academia, it's often riskier. One in-between is making new small nonprofits. This is also a significant undertaking however. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA funds.

4. One reason why funding is messy is because it's thought that groups doing a bad job at these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but this takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired they get managers and can be closely worked with. If a funder funds a totally new research initiative, they will have a vastly lower amount of control (or understanding) over it than organizations do over their employees. Right now we don't have organizations around who can do near hiring-level amounts of funding for small initiatives, perhaps we should though.

5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (this often requires many years of grad school or training) and animals. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though it would be a big change for several individuals.

6. It seems really difficult to convince committed researchers to change fields. They often have taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.

In EA we just don't have many "great generic researchers" who we can reassign from one topic to something very different on short notice. More of this seems great to me, but it's tricky to set up and attract talent for.

7. I think it's possible that older/experienced researchers don't want to change careers, and new ones aren't trusted with funding. Looking back I'm quite happy that Elie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I'm not sure what to do here, but would like to see more bets on smart young people.

8. I think there are several interesting "gaps" in EA and am sure that most others would agree. Solving them is quite challenging, it could require a mix of coordination, effort, networking, and thinking. I'd love to see some senior people try to do work like this full-time. In general I'd love for see more "EA researcher/funding coordination", that seems like the root of a lot of our problems.

9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is funding has been a bottleneck for them. I think Peter may respond to this, so he can speak to it directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I'd suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.

10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I'm working on), but they will take some time, and obviously aren't direct work on the issue.

Replies from: weeatquince, MichaelDickens
comment by weeatquince · 2020-08-16T16:02:09.262Z · EA(p) · GW(p)

Thank you Ozzie. Very, very helpful. To respond:

1. EA researchers are doing a great job. Much kudos to them. Fully agree with you on that. I think this is mostly a coordination issue. 

3. Agree a messy funding situation is a problem. Not so sure there is that big a gap between groups funded by EA Funds and groups funded by OpenPhil.

4. Maybe we should worry less about "groups doing a bad job at these topics could be net negative". I am not a big donor so find this hard to judge well. Also I am all for funding well evidenced projects (see my skepticism below about funding "smart young people"). But I am not convinced that we should be that worried that research on this will lead to harm, except in a few very specific cases. Poor research will likely just be ignored. Also most foundations vet staff more carefully than they vet projects they fund.

5-6. Agree research leaders are rare (hopefully this inspires them). Disagree that junior researchers are rare. You said: "We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding." + "It seems really difficult to convince committed researchers to change fields". Very good points. That said, I think Rethink Priorities have been positively surprised at how many very high quality applicants they have had for research roles. So maybe junior researchers are there. My hope is that this post inspires some people to set up more organisations working in this space.

7. Not so sure about "more bets on smart young people". I tend to prefer giving to or hiring people with experience or evidence of traction. But I don’t have a strong view and would change my mind if there was good evidence on this. There might also be ways to test less experienced people before funding them, like through a "Charity Entrepreneurship" type fellowship scheme.

8. I'd love to have more of your views on what an "EA researcher/funding coordination" looks like as I could maybe make it happen. I am a Trustee of EA London. EA London is already doing a lot of global coordination of EA work (especially under COVID). I have been thinking and talking to David (EA London staff) about scaling this up, hiring a second person etc. If you have a clear vision of what this might look like or what it could add I would consider pushing more on this.

9. Rethink Priorities is OK. I have donated to them in the past but might stop as I am not sure they are making much headway on the issues listed here. Peter said: "I think we definitely do 'Beyond speculation (practical longtermism)' ... So far we've mainly been favoring within-cause intervention prioritization".

10. Good luck with your work on forecasting efforts. 

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-16T17:09:41.081Z · EA(p) · GW(p)

Thanks for the response!

Quick responses:

4. I haven't investigated this much myself, I was relaying what I know from donors (I don't donate myself). I've heard a few times that OpenPhil and some of the donors behind EA Funds are quite worried about negative effects. My impression is that the reason for some of this is simple, but there are some more complicated reasons that go into the thinking here that haven't been written up fully. I think Oliver Habryka has a bunch of views here. 

5-6. I didn't mean to imply that junior researchers are "rare", just that they are limited in number (which is obvious). My impression is that there's currently a bottleneck to give the very junior researchers experience and reputability, which is unfortunate. This is evidenced by Rethink's round. I think there may be a fair amount of variation in these researchers though; that only a few are really the kinds who could pioneer a new area (this requires a lot of skills and special career risks).

7. I'm also really unsure about this. Though to be fair, I'm unsure about a lot of things. To be clear though, I think that there are probably rather few people this would be a good fit for.

I'm really curious just how impressive the original EA founders were compared to all the new EAs. There are way more young EAs now than there were in the early days, so theoretically we should expect that some will be in many ways more competent than the original EA founders, except in experience, of course.

Part of me wonders: if we don't see a few obvious candidates for young EA researchers as influential as the founders were, in the next few years, maybe something is going quite wrong. My guess is that we should aim to resemble other groups that are very meritocratic in terms of general leadership and research. 

8. Happy to discuss in person. They would take a while to organize and write up.

The very simple thing here is that to me, we really could use "funding work" of all types. OpenPhil still employs a very limited headcount given their resources, and EA Funds is mostly made up of volunteers. Distributing money well is a lot of work, and there currently aren't many resources going into this. 

One big challenge is that not many people are trusted to do this work, in part because of the expected negative impacts of funding bad things. So there's a small group trusted to do this work, and a smaller subset of them interested in spending time doing it.

I would love to see more groups help coordinate, especially if they could be accepted by the major donors and community. I think there's a high bar here, but if you can be over it, it can be very valuable.

I'd also recommend talking to the team at EA Funds, which is currently growing.

9. This could be worth discussing further. RP is still quite early and developing. If you have suggestions about how it could improve, I'd be excited to have discussions on that. I could imagine us helping change it in positive directions going forward.

10. Thanks!

comment by MichaelDickens · 2020-08-17T18:53:24.766Z · EA(p) · GW(p)

Excellent comment.

I think that there is a fair bit of obvious cause prioritization research to be done that's getting relatively little attention.

Do you have a list of the top research areas you'd like to see that aren't getting done?

Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I'm working on)

I agree. Forecasting is a common good to many causes, so you'd expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I'd count Tetlock as adjacent). Recently I've had many empirical questions about the future that I thought could use good forecasts, e.g., for this essay I wrote [EA · GW], I made some Metaculus questions and used those to help inform the essay. It would be really helpful if it were easier to get good forecasts.

Replies from: oagr, oagr
comment by Ozzie Gooen (oagr) · 2020-08-17T20:43:13.303Z · EA(p) · GW(p)

Do you have a list of the top research areas you'd like to see that aren't getting done?

Oh boy. I've had a bunch of things in the back of my mind. Some of this is kind of personal (specific to my own high-level beliefs, so it wouldn't apply to many others).

I'm a longtermist and believe that most of the expected value will happen in the far future. Because of that, many of the existing global poverty, animal welfare, and criminal justice reform interventions don't seem particularly exciting to me. I'm unsure what to think of AI Risk, but "unsure" is much, much better than "seems highly unlikely." I think it's safe to have some great people here, but I currently get the impression that a huge number of EAs are getting into this field, and this seems like too many to me on the margin.

What I'm getting to is: when you exclude most of poverty, animal welfare, criminal justice reform, and AI, there's not a huge amount getting worked on in EA at the moment.

I think I don't quite buy the argument that the only long-term interventions worth considering are ones that address X-risks in the next ~30 years, nor the argument that the only interventions worth considering are ones that address X-risks at all. I think it's fairly likely (>20%) that sentient life will survive for at least billions of years; and that there may be a fair amount of lock-in, so changing the trajectory of things could be great.

I like the idea of building "resilience" instead of going after specific causes. For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it's possible that something else weird will cause catastrophe in 15 years. So experimenting with broad interventions that seem "good no matter what" seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.

I like Phil's work (above comment) and think it should get more attention, quickly. Figuring out and implementing an actual plan that optimizes for the long term future seems like a ton of work to me.

I really would like to see more "weird stuff." 10 years ago many of the original EA ideas seemed bizarre; like treating AI risk as highly important. I would hope that with 10-100x as many people, we'd have another few multiples of weird but exciting ideas. I'm seeing a few of them now but would like more.

Better estimation, high-level investigation, prioritization, data infrastructure, etc. seem great to me.

Maybe one way to put it would be something like, imagine clusters of ideas as unique as those of Center on Long-Term Risk, Qualia Computing, the Center for Election Science, etc. I want to see a lot more clusters like these.

Some quick ideas:
- Political action for all long term things still seems very neglected and new to me, as mentioned in this post.
- A lot of the prioritization work, even of the form "Let's just estimate a lot of things to get expected values."
- I'd like to see research in ways AI could make the world much better/safer; the most exciting part to me is how it could help us reason in better ways, pre-AGI, and what that could lead to.
- Most EA organizations wouldn't upset anyone (they are net positives for everyone), but many things we may want would. For instance, political action, or potential action to prevent bio or AI companies from doing specific things. I could imagine groups like "slightly secretive strategic agencies" that go around doing valuable things having a lot of possible benefit (but of course significant downsides if done poorly).
- This is close to me, but I'm curious if open source technologies could be exciting philanthropic investments. I think the donation to Roam may have gone extremely well, and am continually impressed and surprised by how little money there is in incredible but very early or experimental efforts online. Ideally this kind of work would include getting lots of money from non-EAs.
- In general, trying to encourage EA style thinking in non-EA ventures could be great. There's tons of philanthropic money being spent outside EA. The top few tech billionaires just dramatically increased their net worths in the last few months, many will likely spend those eventually. 
- I really care about growing the size and improving the average experience of the EA community. I think there's a ton of work to be done here of many shapes and forms.
- I think many important problems that feel like they should be done in Academia aren't due to various systematic reasons. If we could produce researchers who do "the useful things, very well", either in Academia or outside, that could be valuable, even in seemingly unrelated fields like anthropology, political science, or targeted medicine (fixing RSI, for instance). "Elephant and the Brain" style work comes to mind.
- On that note, having 1-2 community members do nothing but work on RSI, back, and related physical health problems for EAs/rationalists, could be highly worthwhile at this point. We already have a few specific psychologists and a productivity coach. Maybe eventually there could be 10-40+ people doing a mini-industry of services tailored to these communities.
- Unlikely idea: insect farms. Breed and experiment with insects or other small animals in ways that seem to produce the most well-being for the lowest cost. Almost definitely not that productive, but good for diversification, and possibly reasonably cheap to try for a few years.
- Much better EA funding infrastructure, in part for long-term funding.
- Investigation and action to reform/improve the UN and other global leadership structures.
- I'm curious about using extensive Facebook ads, memes, Youtube sponsorship, and similar, to both encourage Effective Altruism, and to encourage ideas we think are net valuable. These things can be highly scalable.

Also, I'd be curious to get the suggestions of yourself and others here.

Replies from: MichaelDickens, Milan_Griffes, Milan_Griffes
comment by MichaelDickens · 2020-08-19T18:46:34.670Z · EA(p) · GW(p)

This is a really good comment.

  • A lot of the prioritization work, even of, "Let's just estimate a lot of things to get expected values."

I would like to see more of this, and I would also like to see people be less uniformly critical of this sort of work. I've written a few things like this, and I inevitably get a few comments along the lines of, "This estimate isn't actually accurate, you can't know the true expected value, this research is a waste of time." IME I get much more strongly negative comments when I write anything quantitative than when I don't. But I might just be noticing that type of criticism more than other types.

  • Much better EA funding infrastructure, in part for long-term funding.

The rate of institutional value drift is something like 0.5% [EA · GW]. Halving this would be extremely beneficial for anyone who wants to invest their money for future generations. It seems likely that if we put more effort into designing stable institutions, we could create EA investment funds that last for much longer.

The rate of individual value drift is even higher, something around 5%. That's really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
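To get a feel for what those drift rates imply, here is a quick back-of-envelope check. It assumes, as a deliberate simplification, that drift acts like an independent annual hazard that compounds:

```python
# Simplifying assumption: "value drift" is an independent annual hazard,
# so the probability of still holding one's values after n years is
# (1 - annual_rate) ** n. Rates are the rough figures cited above.

def survival(annual_drift_rate, years):
    """Probability an institution/individual has not drifted after `years`."""
    return (1 - annual_drift_rate) ** years

print(f"institution, 0.5%/yr, 100 yrs: {survival(0.005, 100):.2f}")  # ~0.61
print(f"individual,  5%/yr,   20 yrs: {survival(0.05, 20):.2f}")     # ~0.36
```

Under this (crude) model, halving the institutional rate to 0.25% lifts century-long survival from roughly 61% to roughly 78%, which is why small improvements in institutional stability look so valuable for patient philanthropy.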

Some other neglected problems (with some shameless references to my own writings):

  • I like GPI's research agenda. Right now there are only about half a dozen people working on these problems.
  • What is the correct "philosophy of priors"? The choice of prior distribution heavily affects how we should behave in areas of high uncertainty. For example, see Will MacAskill's post [EA · GW] and Toby Ord's reply [EA(p) · GW(p)]. (edit: see also this relevant post [EA · GW])
  • With a simple model, I calculated [EA · GW] that improving our estimate of the discount rate could matter more than any particular cause. The rationale is that we should spend our resources at some optimal rate, which is largely determined by the philanthropic discount rate. Moving our spending schedule slightly closer to the optimal rate substantially increases expected utility. This is just based on a simple model, but I'd like to see more work on this.
  • In the conclusion of the same essay [EA(p) · GW(p)], I gave a list of relevant ideas for potential top causes with my rough guesses on their importance/neglectedness/tractability. The ideas not mentioned so far are: improving the ability of individuals to delegate their income to value-stable institutions; and making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
  • IMO there are some relatively straightforward ways that EAs could invest better, which I wrote about here. Improving EAs' investments could be pretty valuable, especially for "give later"-leaning EAs.
  • Reducing the long-term probability of extinction, rather than just the probability over the next few decades. (I'm currently writing something about this.)
  • If you accept that improving the long-term value of the future is more important than reducing x-risk, is there anything you should do now, or should you mainly invest to give later? Does movement building count as investing? [EA · GW] What about cause prioritization research? When is it better to work on movement building/cause prioritization rather than simply investing your money in financial assets?
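As a hedged illustration of why the discount rate estimate matters so much (my own toy numbers, not taken from any of the essays above): with a constant investment return r and a philanthropic discount rate d, the relative value of giving in n years versus giving now is ((1 + r) / (1 + d)) ** n, so modest disagreement about d swings the give-now-vs-later verdict dramatically:

```python
# Toy give-now-vs-later comparison. r = investment return; d = the
# philanthropic discount rate (value drift, expropriation, diminishing
# opportunities); n = years waited. All numbers are illustrative.

def wait_multiplier(r, d, n):
    """Value of giving in n years relative to giving the same money now."""
    return ((1 + r) / (1 + d)) ** n

# With r = 5% over 50 years, shifting the discount rate estimate from
# 2% to 4% changes waiting from looking great to looking marginal.
print(f"d = 2%: {wait_multiplier(0.05, 0.02, 50):.1f}x")  # ~4.3x
print(f"d = 4%: {wait_multiplier(0.05, 0.04, 50):.1f}x")  # ~1.6x
```

This is exactly the sense in which "improving our estimate of the discount rate could matter more than any particular cause": the parameter sits in an exponent, so errors in it compound over the whole spending schedule.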
Replies from: oagr, MichaelA
comment by Ozzie Gooen (oagr) · 2020-08-19T20:46:48.989Z · EA(p) · GW(p)

IME I get much more strongly negative comments when I write anything quantitative than when I don't. But I might just be noticing that type of criticism more than other types.


I haven't seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all. At the extreme end are people who don't even make clear statements, they just speak in vague metaphors or business jargon that are easy to defend but don't actually convey any information. Needless to say, I think this is an anti-pattern. I'd be curious if anyone reading this would argue.

The rate of individual value drift is even higher, something around 5%. That's really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?

It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like, "total expected benefit of one member". Our culture often finds some of these calculations too "cold and calculating." It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.

I find the ideas you presented quite interesting and reasonable, I'd love to see more work along those lines.

Replies from: MichaelA, Milan_Griffes, MichaelA, MichaelDickens
comment by MichaelA · 2020-08-23T08:26:36.030Z · EA(p) · GW(p)

I'd be curious if anyone reading this would argue.

I think it would depend a lot on how we operationalise the stance you're arguing in favour of. 

Overall, at the margin, I'm in favour of: 

  • less use of vague-yet-defensible language
  • EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)

(I'm in favour of these things both in general and when it comes to cause prioritisation work.)

But I'm somewhat tentative/moderate in those views. For the sake of conversation, I'll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).

Essentially, as I outlined in this post [EA · GW] (which I know you already read and left useful comments on), I think making, using, and making public quantitative estimates might sometimes:

  1. Cost more time and effort than alternative approaches (such as more qualitative, "all-things-considered" assessments/discussions)
  2. Exclude some of the estimators' knowledge (which could’ve been leveraged by alternative approaches)
  3. Cause overconfidence and/or cause underestimations of the value of information
  4. Succumb to the optimizer’s curse
  5. Cause anchoring
  6. Cause reputational issues

(These downsides won't always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I'm just focusing on "arguments against".)

As a result:

  • I don't think we should always aim for or require quantitative estimates (including in cause prioritisation work)
  • I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive / all-things-considered / "black-box" approaches (see also)
  • I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
  • It's plausible to me that this overconfidence problem has not merely co-occurred or correlated with use of quantitative estimates, but that it tends to be exacerbated by that
    • But I'm not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people's stances, have reality clearly prove us wrong (well, poorly calibrated), etc.
  • Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain they are and communicate this clearly
    • But I'd say the same for most qualitative work in domains like longtermism
  • It's plausible to me that the anchoring and/or reputational issues of making one's quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
    • But I'm not at all certain of that (as demonstrated by me making this database [EA · GW])
    • And I think this'll depend a lot on how well thought-out one's estimates are, how well one can communicate uncertainty, what one's target audiences are, etc.
    • And it could still be worth making the estimates and not communicating them, or communicating them less publicly

I don't think this position strongly contrasts with your or Michael's positions. And indeed I'm a fan of what I've seen of both your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-23T11:23:11.886Z · EA(p) · GW(p)

Nice post. I think I agree with all of that. 

I'm not advocating for "poorly done quantitative estimates." I think anyone reasonable would admit that it's possible to bungle them. 

I'm definitely not happy with a local optimum of "not having estimates". It's possible that "having a few estimates" can be worse, but I imagine we'll eventually want to get to the point of "having lots of estimates, and being mature enough to handle them", so that's the direction to aim for.

Replies from: MichaelA
comment by MichaelA · 2020-08-23T12:27:45.379Z · EA(p) · GW(p)

I think the "local vs global optima" framing is an interesting way of looking at it. 

That reminds me of some of my thinking when I was trying to work out whether it'd be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues to EA/longtermists, etc.). In particular, a big part of my reasoning was something like:

It's plausible that it's worse for this database to exist than for there to be no public existential risk estimates. But what really matters is whether it's better that this database exist than that there be a small handful of existential risk estimates, scattered in various different places, and with people often referring to only one set in a given instance (e.g., the 2008 FHI survey), sometimes as if it's the 'final word' on the matter. 

That situation seems probably even worse from an anchoring and reputational perspective than there being a database. This is because seeing a larger set of estimates side by side could help people see how much disagreement there is and thus have a more appropriate level of uncertainty and humility.

With your comment in mind, I'd now add:

But all of that is just about how good various different present-day situations would be. We should also consider what position we ultimately want to reach. 

It seems plausible that we could end up with a larger set of more trustworthy and more independently-made existential risk estimates. And it seems likely that this would be better than the situation we're in now. 

Furthermore, it seems plausible that making this database moves us a step towards that destination. This could be a reason to make the database, even if doing so was slightly counterproductive in the short term.

comment by Milan_Griffes · 2020-08-19T20:48:54.972Z · EA(p) · GW(p)
I haven't seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all...

Reminds me of the thing where corporations don't want to implement internal prediction markets because implementing a market isn't in the self-interest of any individual decision-maker.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-23T11:09:31.331Z · EA(p) · GW(p)

Yea, I think there are similar incentives at play in both cases

comment by MichaelA · 2020-08-23T08:32:59.386Z · EA(p) · GW(p)

I imagine many decent attempts would include numbers like, "total expected benefit of one member". Our culture often finds some of these calculations too "cold and calculating."

I think this is a good point. A three-factor model of community building comes to mind as a prior post that had to tackle and communicate about this sort of tricky thing, and that did a good job of that, in my opinion. That post might be useful reading for other people who have to tackle and communicate about this sort of tricky issue in future. (E.g., I quoted it in a recent post of mine [EA · GW].) 

The most relevant parts of that post are the section on "Elitism vs. egalitarianism", and the following paragraph:  

[Variation in the factors this post focuses on] often rests on things outside of people’s control. Luck, life circumstance, and existing skills may make a big difference to how much someone can offer, so that even people who care very much can end up having very different impacts. This is uncomfortable, because it pushes against egalitarian norms that we value. [...] We also do not think that these ideas should be used to devalue or dismiss certain people, or that they should be used to idolize others. The reason we are considering this question is to help us understand how we should prioritize our resources in carrying out our programs, not to judge people.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-23T11:24:44.357Z · EA(p) · GW(p)


comment by MichaelDickens · 2020-08-19T22:12:26.760Z · EA(p) · GW(p)

It seems to me like some modeling here would be highly useful

The basic model is really easy. The total number of community members at time t is N(t) = N(0)·e^((r−v)t), where r is the movement growth rate and v is the value drift rate. So if the value of the EA community is proportional to the number of members, then increasing r by some number of percentage points is exactly as good as decreasing v by the same amount.

It's less obvious how to model the tractability of changing r and v.
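This exponential model can be sketched in a few lines. The starting size and rate values below are hypothetical, chosen only to illustrate the point that the model depends solely on the difference r − v:

```python
import math

def members(t, n0=1000.0, r=0.20, v=0.05):
    """Community size at time t (years), assuming exponential growth
    at rate r offset by value drift at rate v."""
    return n0 * math.exp((r - v) * t)

# Increasing r by one percentage point has exactly the same effect
# as decreasing v by one percentage point, since only r - v appears:
assert math.isclose(members(10, r=0.21, v=0.05),
                    members(10, r=0.20, v=0.04))
```

Modelling the tractability of changing r versus v would then mean attaching a cost to moving each parameter, which this sketch deliberately leaves out.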

comment by MichaelA · 2020-08-23T06:39:57.375Z · EA(p) · GW(p)

I liked this comment.

If you accept that improving the long-term value of the future is more important than reducing x-risk

Do you mean "If you accept that improving the long-term value of the future is more important than reducing extinction risk" (as distinct from existential risk more broadly, which already includes other ways of improving the value of the future)?

 Or "If you accept that improving the long-term value of the future is more important than reducing the risk of existential catastrophe in the relatively near future?" 

Or something else (e.g., about smaller trajectory changes [LW · GW])?

Replies from: MichaelDickens
comment by MichaelDickens · 2020-08-24T20:18:05.637Z · EA(p) · GW(p)

I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list), sorry that was unclear.

comment by Milan_Griffes · 2020-08-19T20:44:21.852Z · EA(p) · GW(p)

Here's a list I came up with from thinking about this for ~30 minutes:

Better ways of measuring what matters

Help EAs see more clearly, unpack + resolve personal traumas, and boost their efficacy + motivation

  • Emotional healing as a prerequisite to rationality [LW · GW]
  • CFAR, OAK, Leverage, etc.
  • Plus building methods to audit which projects are working, which are failing, which are stagnating
  • Perhaps also a data collection project that vacuums up outcomes from the object-level projects?

Strengthen EA community ties / our sense of fellowship

  • More honesty about how weird effective research methods [LW · GW] can be
  • More acknowledgement of the interdependent causal complex that gives rise to good research (e.g. Alex Flint's introduction here [LW · GW])
  • More Ben Franklin-esque Juntos
  • Import more of Silicon Valley's "pay it forward" culture
  • Less reputation management / more psychological safety
  • Less sniping
  • OAK, Bay Area group houses, EA Hotel
  • Again, building out (non-dominating) ways to audit & collect data from the object-level projects

Less scrupulosity

  • Ties into the above but deserves its own bullet given how our collective psychology skews
  • Compassionate fighting against the thought-pattern Scott Alexander describes here

Make EA sexier

  • Market to retail donors / the broader public (e.g. Future Perfect, e.g. 80k, e.g. GiveWell running ads on Vox podcasts)
  • Market to impact investors (e.g. Lionheart) and big philanthropy
  • Cultivating more "I want to be like that" energy
  • Seems easy to walk back if it isn't working because so many interest groups are competing for mindshare

Support EA physical health

  • Propagate effective treatments for RSI & back problems, as above
  • Take the mind-body connection seriously
  • Propagate best practices for nutrition, sleep, exercise; make the case that attending to these is prerequisite to having impact (rather than trading off against having impact)

Advance our frontier of knowledge

  • e.g. GPI's research agenda, e.g. the stuff Michael Dickens laid out in his comment
  • More work on how to solve coordination problems
  • More work on governance (e.g. Vitalik's stuff, e.g. the stuff Palladium is exploring)

Fund many moonshots / speculative projects

  • Fund projects that can be walked back if they aren't working out (which is most projects, though some tech projects may be hard-to-reverse)
  • Worry less about brand management
Replies from: oagr, MichaelA
comment by Ozzie Gooen (oagr) · 2020-08-19T21:24:05.614Z · EA(p) · GW(p)

That's an interesting list, especially for 30 minutes :) (Makes me wonder what you or others could do with more time.)

Much of it focused on EA community stuff. I kind of wonder if funders are extra resistant to some of this because it seems like they're just "giving money to their friends", which in some ways, they are. I could see some of it feeling odd and looking bad, but I think if done well it could be highly effective.

Many religious and ethnic groups spend a lot of attention helping each other, and it seems to have very positive effects. Right now EA (and the subcommunities I know of in EA) seem fairly far from that still.

A semi-related point on that topic; I've noticed that for many intelligent EAs, it feels like EA is a competition, not a collaboration. Individuals at social events will be trying to one-up each other with their cleverness. I'm sure I've contributed to this. I've noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all. I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.

Many companies and open source projects live or die depending on the cultural health. Investments in the cultural health of EA may be difficult to measure, but pay off heavily in the long run.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2020-08-19T21:36:59.173Z · EA(p) · GW(p)


100% agree that cultural health is very important, and that EA is under-investing in it. (The "we don't want to just give money to our friends" point resonates, and other scrupulosity-related stuff is probably at play here as well.)

Individuals at social events will be trying to one-up each other with their cleverness. I'm sure I've contributed to this. I've noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all.

Thank you for talking about this!

I've noticed similar patterns in my own mind, especially around how I engage with this Forum. (I've been stepping back from it more this year because I've noticed that a lot of my engagement wasn't coming from a loving place.)

These dynamics may not make any sense, but there are deep biological & psychological forces giving rise to them. [insert Robin Hanson's "everything you do is signaling" rant here]

... I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.

Right. Last year concerns about status made a lot of heat on the Forum (1 [EA · GW], 2 [EA · GW], 3 [EA · GW]), but as far as I know nothing has really changed since then, perhaps other than more folks acknowledging that status is a thing.

(Status seems closely related to scrupulosity & to EA being vetting-constrained [EA · GW]; I haven't unpacked this yet.)

comment by MichaelA · 2020-08-23T08:46:02.993Z · EA(p) · GW(p)

(A bunch of those ideas seem interesting, but I'll just comment on the one where I have something to say)

Seems easy to walk back if it isn't working because so many interest groups are competing for mindshare

This does seem to me like it makes it easy to walk back efforts to make EA sexier, but it doesn't seem like it makes it easy to do it again later in a different way (without the odds of success being impaired by the first attempt). 


  • I think we could make EA relatively small/non-prominent/whatever again if we wanted to
  • But it also seems plausible to me that EA can only make "one big first impression", and that that'll colour a lot of people's perceptions of EA if it tries to make a splash again later (even perhaps 10-30 years later).

Put another way:

  • They might stop thinking about EA if we stop actively reminding them
  • But then if we start competing for their attention again later they'll be like "Wait, aren't those the people who [whatever impression they got of us the first time]?"

Posts that informed my thinking here:

comment by Milan_Griffes · 2020-08-19T20:13:27.365Z · EA(p) · GW(p)

Your list reminds me of this thread: What EA Forum posts do you want someone to write? [EA · GW]

comment by Ozzie Gooen (oagr) · 2020-08-17T20:50:20.446Z · EA(p) · GW(p)

Forecasting is a common good to many causes, so you'd expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I'd count Tetlock as adjacent)

I think I've become a bit convinced that incentive and coordination problems are so severe that many "common goods" are surprisingly neglected. The history of the slow development and proliferation of Bayesian techniques in general (up to around 20 years ago maybe, but even now I think the foundations can be improved a lot) seems quite awful.

Also, at this point, I feel quite strongly about much of the EA community; it's like we've gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world. As such I think we can compete and do a good job in many areas we choose to focus on. So we could move up from "absolutely, incredibly neglected" to "just somewhat neglected", which could open up a whole bunch of fields.

Replies from: MichaelDickens, oagr
comment by MichaelDickens · 2020-08-17T23:00:38.997Z · EA(p) · GW(p)

like we've gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world

It seems like I routinely learn about some smart and insightful person through non-EA channels and then later find out they're involved in EA or at least subscribe to EA principles—most recent example for me is Gordon Irlam, who I originally learned about through his writings on portfolio selection.

comment by Ozzie Gooen (oagr) · 2020-08-17T21:08:47.600Z · EA(p) · GW(p)

I've been thinking a lot about the lack of non-EA interest or focus on forecasting or related tools. I was very surprised when I made Guesstimate and there was both excitement from several people, but not that much excitement from most businesses or governments. 

I think that forecasting of the GJP sort is still highly niche. Almost no one knows of it or understands the value. You can look at this as similar to specific advances in, say, type theory or information theory. 

The really smart groups that have interests in improving their long term judgement seem to be financial institutions and similar. These are both highly secretive, and not interested in spending extra effort helping outside groups.

So to really advance a field like judgemental forecasting would require a combination of expertise, funding, and interest in helping the broad public, and this is a highly unusual combination. I imagine that if IARPA wasn't around in time to both be interested in and able to fund GJP's efforts, much less would have happened there. I'd also personally point out that I'd expect that IARPA's funding of it was around 1/3rd or maybe 1/20th as efficient as it would have been if OpenPhil would have organized a more directed effort, in terms of global benefit.

This makes me think that there are probably many other very specific technology and research efforts that would also be exciting for us to focus on, but we don't have the expertise to recognize them. We may have gotten lucky with forecasting/estimation tech, as that was something we had to get close to anyway for other reasons.

Replies from: MichaelDickens
comment by MichaelDickens · 2020-08-19T19:00:47.987Z · EA(p) · GW(p)

Also worth noting that the managing director of IARPA's forecasting program was Jason Matheny, who previously founded New Harvest (which does cultured meat research, and was the first such org AFAIK) and did x-risk research at FHI.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-19T20:35:56.293Z · EA(p) · GW(p)

Yep, and a few others at IARPA who worked around the forecasting stuff were also EAs or close. 

comment by Ben Snodin (Ben_Snodin) · 2020-08-16T10:04:16.984Z · EA(p) · GW(p)

Thanks for this, it's pretty interesting to get your perspective as someone who's been (I presume) heavily engaged in the community for some time. I thought your other post on the All-Party Parliamentary Group for Future Generations was awesome, by the way.

You asked for comments including "small" thoughts so here are some from me, for what they're worth. These are my current views which I can easily see changing if I were to think about this more etc.

I think I basically agree that there doesn't seem to have been much progress in cause prioritisation in say the last five years, compared to what you might have hoped for.

(mainly written to clarify my own thoughts:) It seems like you can do cause prioritisation work either by comparing different causes, or by investigating a particular cause (especially a cause that's relatively unknown or poorly investigated), or by doing more "foundational" things like asking "what is moral value anyway?", "how should we compare options under uncertainty", etc.

My impression is that the Effective Altruism community has invested a significant amount of resources into cause prioritisation research, and that the relative lack of progress is because it's hard:

  • The Global Priorities Institute is basically doing cause prioritisation (as far as I know, and by the vague definition of cause prioritisation I have in my head) - maybe it's more on the foundational / academic field building side (i.e. fleshing out and formally writing up existing arguments), but my impression is that it's mostly stuff that seems worth working through to work out how to do the most good
  • I think you could give the cause prioritisation label to some of the work from the Future of Humanity Institute's macrostrategy team(?)
  • Open Philanthropy Project spends a lot of their resources doing some version of this, as you noted
  • Rethink Priorities is basically doing this (though I might agree with you that it would be better if they were able to compare across causes rather than investigating a particular cause)
  • I'd consider work on forecasting / understanding AI progress, as is done by e.g. AI Impacts as cause prioritisation

The above (which is probably far from comprehensive) seems like a decent fraction of the resources of the "longtermist" part of the community (the part I'm familiar with). I suppose I lean towards wanting a larger fraction of resources allocated to cause prioritisation, but I don't think it's that obvious either way. Anyway, regardless of whether the right fraction of resources have been spent on this, I think it's just very hard and that this explains a lot of what you're describing.

Maybe one reason there's not much work comparing causes in particular is that there's so much uncertainty, which makes it very difficult to do well enough that the output is valuable. In particular

  • people don't agree on empirical issues that can radically alter the relative importance of different causes (e.g. AI timelines)
  • people don't agree on "the correct moral theory" / whatever the ultimate objective is / what you ~call "different views"

Edit: reading the above you could probably get the impression that I think you're wrong to "raise the alarm" about the need for more / different cause prioritisation, but I don't think that at all. I think I'm pretty sympathetic to most of what you wrote.

Replies from: Michelle_Hutchinson, weeatquince, weeatquince
comment by Michelle_Hutchinson · 2020-08-18T10:49:37.926Z · EA(p) · GW(p)

I agree that the cause prioritisation work we need to do now is far harder than the work we were doing ten years ago. I think AI Impacts provides an interesting illustration of that: It was initially set up essentially as a cause prioritisation org. But in doing that work it became clear that whereas in comparing between different global development interventions there was a large published literature to build on, when trying to compare work on AI to other areas, and compare interventions within AI safety, there was far less to go on. That led to the conclusion that the work they should do first was get a better grasp on questions like 'how fast will AI likely develop, and how discontinuously?'.

I think another thing going on is that the stakes have become higher. When Giving What We Can first started publishing recommendations eg comparing between donating to education or deworming, we only had ~30 members. That's a lot of money over people's lifetimes, but it's nowhere near the resources the EA movement now commands. The huge increase in resources to allocate makes it more worth doing the foundational work that groups like AI Impacts do, and also the theoretic work GPI does. I think that makes it look like there's less work being done, because there are way fewer actionable results per hour spent.

comment by weeatquince · 2020-08-16T15:19:02.519Z · EA(p) · GW(p)

Hi Ben. Thank you for this. This is exactly what I like, people replying with their impressions of the post, even if rough, so that I get some idea of how people feel and if this resonates. So thank you.

- -

That said I disagree with your claim. 

You say "I think it's just very hard and that this explains a lot of what you're describing".

I think it may well be difficult but it is mostly not happening due to underinvestment and lack of coordination in this space. Hence raising a flag.

I make this case above by comparing what I would see as a good coverage of the space with what is actually happening, so don’t have much to add here except that it is interesting that others see it differently.

I note a few counterexamples to the idea that this work is not done because it is hard (even in the "longtermist" area): 80K's stated reason [EA · GW] for doing less in this space is that they have reached a conclusion (priority paths) they are happy with; GPI was only created recently (its research agenda is from 2019 [EA · GW]); Rethink Priorities is following funding; AI strategy is also difficult but is progressing much quicker; etc.

- -

Overall, I don’t have a strong view on this, and maybe you are correct. But this is something that could be looked into more. In particular I have mostly dug into research on websites but if I (or anyone) had more time it would be great talk to people who have worked on this and see if it is difficult or underinvested in (or both). I also think you could with a bit of time somewhat address this question by writing a research agenda and looking for potential low hanging research fruit in this domain.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2020-08-17T18:42:28.571Z · EA(p) · GW(p)

Hey Sam, just a very quick comment that the post you link to wasn't meant to imply we intend to do less prioritisation research than before.

The 50/30/20 split we mention there was for how we intend to split delivery efforts across different target audiences, rather than on research vs. delivery. And also note that this means ~50% of effort is going into non-priority paths, which will include new potential priorities & career paths (such as the lists we posted recently).

As Rob notes in another comment, we still intend to spend ~10% of team time on research, similar to the past, and more total time because the team is larger. This would include looking into whether we should add new priority paths or problem areas.

Replies from: weeatquince
comment by weeatquince · 2020-08-17T19:11:52.893Z · EA(p) · GW(p)

Hi Ben,

Thank you for flagging – that is super amazing to hear and I am very excited by it.

I looked at a lot of organisations and tried to extrapolate what they will be doing in this space from public information rather than reaching out, so it is great to see comments saying that research along these lines will be happening, and sorry for anything mischaracterised.

comment by omernevo · 2020-08-16T09:34:42.976Z · EA(p) · GW(p)

Thank you for writing this!
I think your analysis can be specifically useful for people who want to contribute and feel like they're not sure where to look for neglected areas in EA.

I'll add a small comment regarding "It is difficult to compete with the existing organisations that are just not quite doing this":

My experience with orgs in the EA community is that pretty much everyone is incredibly cooperative and genuinely happy to see others fill in the gaps that they're leaving.
I've been in talks with 80,000 Hours and a few other orgs about an initiative in the careers space for a while now. Everyone we've talked to was both open about what they're doing (and what they aren't doing) and ridiculously helpful with advice and support.

I think if someone is serious about trying to fill a gap in the EA body of work, it's important to understand from adjacent orgs how big/real the gap is and whether they have comments on your approach to it. And while I can see why someone would be worried, I think if you approach with the right attitude, the 'competition' would have far more benefits than harms.

Replies from: weeatquince
comment by weeatquince · 2020-08-16T14:35:14.020Z · EA(p) · GW(p)

Thank you for this comment. I fully agree, and would say that my experience of the EA community is a very positive one: EA organisations work very well together and are very willing to share ideas, talk, and support one another. I am sure there would be much support for anyone trying to fill these gaps.

comment by JustinShovelain · 2020-08-18T17:36:15.457Z · EA(p) · GW(p)

Thanks for writing the post! I think we need a lot more strategy research, cause prioritization being one of the most important types, and that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus of x-risk reduction we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy [? · GW] gap. We are mostly focused on strategy research as a whole which lays the groundwork for cause prioritization. Here are some of our articles:

We’re a small and relatively new group, and we’d like to see more people and groups do this type of research, and to see this field get more support and grow. There is a vast amount to do and immense opportunity to do good with this type of research.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2020-08-18T17:43:12.686Z · EA(p) · GW(p)

I'll give a +1 for Convergence. I've known the team for a while and worked with Justin a few years back. It's a bit on the theoretical side of prioritization, but that sort of thinking often does lead to more immediate value.

My impression is also that more funding could be quite useful to them, if anyone is reading this considering.

comment by richard_ngo · 2020-08-23T07:57:25.238Z · EA(p) · GW(p)

Thanks for making this post, I think this sort of discussion is very important.

It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development of and promotion of cause prioritisation as a concept.

I disagree with this. Here's an alternative framing:

  • EA's big ethical ideas are 1) reviving strong, active, personal moral duties, 2) longtermism, 3) some practical implications of welfarism that academic philosophy has largely overlooked (e.g. the moral importance of wild animal suffering, mental health, simulated consciousnesses, etc).
  • I don't think EA has had many big empirical ideas (by which I mean ideas about how the world works, not just ideas involving experimentation and observation). We've adopted some views about AI from rationalists (imo without building on them much [EA · GW] so far, although that's changing), some views about futurism from transhumanists, and some views about global development from economists. Of course there's a lot of people in those groups who are also EAs, but it doesn't feel like many of these ideas have been developed "under the banner of EA".

When I think about successes of "traditional" cause prioritisation within EA, I mostly think of things in the former category, e.g. the things I listed above as "practical implications of welfarism". But I think that longtermism in some sense screens off this type of cause prioritisation. For longtermists, surprising applications of ethical principles aren't as valuable, because by default we shouldn't expect them to influence humanity's trajectory, and because we're mainly using a maxipok strategy.

Instead, from a longtermist perspective, I expect that biggest breakthroughs in cause prioritisation will come from understanding the future better, and identifying levers of large-scale influence that others aren't already fighting over. AI safety would be the canonical example; the post on reducing the influence of malevolent actors is another good example. However, we should expect this to be significantly harder than the types of cause prioritisation I discussed above. Finding new ways to be altruistic is very neglected. But lots of people want to understand and control the future of the world, and it's not clear how distinct doing this selfishly is from doing this altruistically. Also, futurism is really hard.

So I think a sufficient solution to the case of the missing cause prioritisation research is: more EAs are longtermists than before, and longtermist cause prioritisation is much harder than other cause prioritisation, and doesn't play to EA's strengths as much. Although I do think it's possible, and I plan to put up a post on this soon.

Replies from: tamgent
comment by tamgent · 2020-08-29T15:13:25.418Z · EA(p) · GW(p)
For longtermists, surprising applications of ethical principles aren't as valuable, because by default we shouldn't expect them to influence humanity's trajectory, and because we're mainly using a maxipok strategy

Aiming for maxipok doesn't mean not influencing the trajectory (if the counterfactual is catastrophe), it's just much harder to measure impact. If measuring impact is hard, de-risking becomes more important, because of path-dependency. If we build out one or two particular longtermist cause areas really strongly with lots of certainty, they'll have a lot of momentum (orgs and stuff) and if we find out later that they are having negative impact or not having impact (or worse, this happens and we just never find out), that will be bad.

I agree longtermist cause prioritisation is harder, though I didn't think your reasons were very well articulated (in particular, I don't understand why you're comparing altruism with understanding and controlling the future; that seems like apples and oranges to me, and surely what matters is the intersection of altruism with the market gap), but I don't think it's less valuable.

comment by Robert_Wiblin · 2020-08-17T15:52:35.148Z · EA(p) · GW(p)

"For example 80000 Hours have stopped cause prioritisation work to focus on their priority paths"

Hey Sam — being a small organisation 80,000 Hours has only ever had fairly limited staff time for cause priorities research.

But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours’ current priorities.

We aim to put ~10% of team time into underlying research, where one topic is trying to figure out which problems and paths go into each priority level. We also have podcast episodes on newer problems from time to time.

All that said, I am sympathetic to the idea that as a community we are underinvesting in cause priorities research.

Replies from: weeatquince
comment by weeatquince · 2020-08-17T20:10:13.955Z · EA(p) · GW(p)

Super great to hear that 10% of 80000 Hours team time will go into underlying research. (Also apologies for getting things wrong, was generalising from what I could find online about what 80K plans to work on – have edited the post). If you have more info on what this research might look into do let me know.

– – 

There is an explore-exploit tradeoff: continuing to do cause prioritisation research needs to be weighed against focusing on specific cause areas.

I imply in my post that EA organisations have jumped too quickly into exploit. (I mention 80K and FHI, but I am judging from an outside view so might be wrong.) I think this is a hard case to make, especially to anyone who is more certain than me about which causes matter (which may be most EA folk). That said, there are other reasons for continuing to explore: to create a diverse community, epistemic humility, game-theoretic reasons (better if everyone explores a bit more), to counter optimism bias, etc.

Not sure I am explaining this well. I guess I am saying that I still think the high level point I was making stands: that EA organisations seem to move towards exploit quicker than I would like. But do let me know if you disagree.

comment by Michael_Wiebe · 2020-08-18T07:12:29.305Z · EA(p) · GW(p)

I don't share your optimistic view of research. You write:

it is reasonable to think that research would make progress because:
Very little research has been done on this so far.

That's because cause prioritization research is extremely difficult, not because no one has thought to do this.

Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.

Survivorship bias: what about all of the difficult subjects where we couldn't make any progress and gave up?

Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.

No, we should try if the expected returns are better than the next alternative. What if we've already hit diminishing returns?

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2020-08-18T07:21:29.719Z · EA(p) · GW(p)

More generally, research isn't magic. Hiring a researcher and having them work 9-5 is no guarantee of solving a problem. You write:

What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past? [...]
I think there needs to be much better research into how to make complex decisions despite high uncertainty.

Isn't it obvious that allocating researcher hours to these questions would be a waste of money? Almost by definition, we can't have good evidence that we can impact the long-run (ie. centuries) trajectory of humanity, because we haven't been collecting data for that long. And making complex decisions under high uncertainty will always be incredibly difficult; in the best case scenario, more research might yield small improvements in decision-making.

Replies from: weeatquince
comment by weeatquince · 2020-08-18T09:15:05.282Z · EA(p) · GW(p)

Hi Michael. Thank you for your points. It is good to hear opposing views. I have never worked in pure research so find it hard to judge and somewhat parroted Paul's post. You may well be correct about the difficulty of research.

Let me try to draw from my own experience to elucidate why I may be jumping to different intuitive conclusions on this question.

My experience of research is from policy development. I think 2/3 of policy development is super easy and 1/3 is super difficult. The super easy stuff is just looking at the world and seeing if there are answers already out there and implementing them. For example on US police reform or UK tax policy or technology regulatory policy. We mostly know how to do these things well, we just need some incentive to implement best practice. The super difficult stuff is the foundational work, where a new problem emerges and no existing solutions abound, eg financial stability policy.

Now when I look at a question such as the one you quote of "much better research into how to make complex decisions despite high uncertainty" it seems to me to be a mix, but with definite areas that fall more towards the easy side. There appear to be a number of fields and domains with best practice that would be highly relevant to EAs making best decisions despite high uncertainty, that rarely seem to make it into EA circles. For example Enterprise Risk Management, economic models of Knightian uncertainty, organisational design, policy development toolkits, Robust Decision Making.
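
As a toy illustration of the sort of tool those fields offer (a hypothetical sketch with invented numbers, not anything any EA org actually uses): minimax regret, a staple of the Robust Decision Making literature, chooses between options using only scenario payoffs, with no probabilities attached.

```python
# Toy minimax-regret choice under Knightian uncertainty: we have payoff
# estimates for each option under several scenarios, but no probabilities.
# All options, scenarios, and numbers are made up for illustration.
payoffs = {
    "fund_bednets":   {"stable_world": 9, "turbulent_world": 7},
    "fund_research":  {"stable_world": 12, "turbulent_world": 2},
    "build_capacity": {"stable_world": 8, "turbulent_world": 8},
}

scenarios = ["stable_world", "turbulent_world"]
# Best achievable payoff in each scenario, used to compute regret.
best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
# Regret = shortfall from the best option in that scenario.
regret = {o: {s: best[s] - p[s] for s in scenarios} for o, p in payoffs.items()}
# Minimax regret: pick the option whose worst-case regret is smallest.
choice = min(payoffs, key=lambda o: max(regret[o].values()))
print(choice)  # fund_bednets
```

Note how the criterion avoids both the option that is best only in one scenario and the purely defensive one, without ever needing a probability estimate.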

Maybe these have all been used and/or considered not relevant (I don’t work at GPI etc, I don’t know). But my life experience to date leaves me with an intuition that there is still low-hanging research fruit just around the next corner. This is not a well-reasoned argument or a strong case; it is simply me sharing where I come from and how I see the challenges and the path forward.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2020-08-18T18:48:13.817Z · EA(p) · GW(p)

Thanks for the reply. I'm a jaded PhD student, but I am open to updating towards research-optimism.

I would distinguish research from implementation of research. I agree that there seems to be low-hanging fruit in implementing best practices, but I think implementation can be a super difficult problem in its own right. (See the state capacity literature.)

comment by rossaokod · 2020-08-24T21:06:30.625Z · EA(p) · GW(p)

This is a great post - thanks a lot for writing it. I work at GPI, so want to add a bit of context on a couple of points, and add some of my own thoughts. Standard disclaimer that these are my personal views and not those of GPI though. 

First, on GPI's research agenda, and our progress in econ:

"(One economics student told me that when reading the GPI research agenda, the economics parts read like it was written by philosophers. Maybe this contributes to the lack of headway on their economics research plans.)"

I think this is accurate and a reflection of how the research agenda was written and has evolved. For what it's worth, we're currently working on refreshing the research agenda to reflect some of the 'exploration research' we've done in economics in the past ~18 months - we should have an updated version in the next few months. More generally, we've had very little econ research capacity to date beyond pre-doctoral researchers (very junior in academic terms). This will improve very shortly -- as Phil notes in a previous comment, we've hired two postdocs to start in the next month -- but as others have noted, high quality academic work is hard and takes quite a lot of time, so this may not result in a step change in actionable econ research coming out of GPI in the short run, which leads on to my second comment... 

Second, on theories of change - your point D1 is really important. We've actively discussed various 'theories of change' internally at GPI and how these should affect our strategy. A decent part of this discussion depends on what others are doing in EA and how we think GPI fits into the overall EA movement portfolio. Even within the (relatively narrow) scope of doing academic GP research in econ and philosophy, possible theories of change for GPI include (but are not limited to!) prioritising building up academic credibility for long-run influence, prioritising research that is more actionable for EAs/philanthropists and policymakers, prioritising influencing policymakers / the general public, or prioritising influencing the next generation through higher education. These are not mutually exclusive, but placing different emphasis on one or the other may imply different strategy. We are still very young, and so far we have mostly been focused on laying foundations for the first of these, and have so far made much more progress on this in philosophy than econ, though I expect things will evolve in the next few years. Personally, I don't think we'll be able to effectively target all of the possible theories of change, and I'd love to see more people and groups working on these. 

comment by MichaelStJules · 2020-08-19T06:19:55.415Z · EA(p) · GW(p)

For example Rethink Priorities looked really exciting when it got started with their co-founder expressing strong support for practical prioritisation research. But their research has mostly focused on animal welfare interventions, not on comparing between causes.

For what it's worth, Rethink Priorities' research on sentience and capacity for welfare can be used to inform us how to prioritize between interventions for nonhuman animals and interventions for humans. Charity Entrepreneurship has also done research comparing animal welfare under different conditions for different species, including humans [EA · GW], and Founders Pledge has done a sensitivity analysis comparing the Humane League and AMF [EA · GW].

comment by MichaelStJules · 2020-08-22T07:33:33.648Z · EA(p) · GW(p)

2. Different views – not happening – 0/10

For what it's worth, Christian Tarsney from GPI has looked at other aggregative views:

  • Average Utilitarianism Implies Solipsistic Egoism. Summary: average utilitarianism and rank-discounted utilitarianism reduce to egoism due to the possibility of solipsism. Might also apply to variable value theories, depending on the factors. See also the earlier The average utilitarian’s solipsism wager by Caspar Oesterheld.
  • Non-additive axiologies in large worlds. Summary: With large background (e.g. unaffected) populations, average utilitarianism, and some kinds of egalitarian and prioritarian theories reduce to additive theories, i.e. basically utilitarianism. Geometric rank-discounted utilitarianism reduces to maximin instead. (That being said, this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion.)

So, if your population axiology is representable by a single (continuous and impartial) real-valued function of utilities for finite populations (so excluding some person-affecting views), it seems hard to avoid totalism.
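
For what it's worth, the large-background-population result is easy to see in a toy calculation (all numbers invented for illustration): average utilitarianism can disagree with totalism when the unaffected background population is small, but agrees once it is large.

```python
# Two options: X creates one person at welfare 5; Y creates two at welfare 3.
# Totalism always prefers Y (6 > 5). Average utilitarianism's verdict depends
# on the size of the unaffected background population (here, at welfare 0).
def average_view(background_size, added):
    # Average welfare after adding `added` to a zero-welfare background.
    return sum(added) / (background_size + len(added))

X, Y = [5], [3, 3]

small = average_view(1, X) > average_view(1, Y)          # avg prefers X: 2.5 > 2.0
large = average_view(10**6, X) > average_view(10**6, Y)  # avg now prefers Y, like totalism
print(small, large)  # True False
```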

Also, I think such views (or utilitarianism) but with deontological constraints are covered by existing interventions; you can just pick among the recommended ones that don't violate any constraints, and I expect that most don't.

Suffering-focused ethics was also already mentioned.

Still, these are only slight variations of total utilitarianism or even special cases.

Replies from: MichaelStJules, Michael_Wiebe
comment by MichaelStJules · 2020-08-22T19:33:11.079Z · EA(p) · GW(p)

Some other works and authors exploring other views and their relationship to EA or EA concepts:

Some less formal writing:

And there are of course critiques of EA, especially by leftists, by animal rights advocates (for our welfarism) and for neglecting large scale systemic change.

Replies from: MichaelA, evelynciara
comment by MichaelA · 2020-08-23T08:57:13.784Z · EA(p) · GW(p)

On how risk- and uncertainty-aversion should arguably affect EA decisions, there was also this talk hosted by GPI, by Lara Buchak.

(I'm mentioning that because it seems relevant, not necessarily because I agreed with the talk or with the basic idea that we should take intrinsic risk- or uncertainty-aversion seriously.)

comment by BrownHairedEevee (evelynciara) · 2020-12-27T05:48:26.181Z · EA(p) · GW(p)

Thanks for this list! I appreciate the Effective Justice paper because it: (1) articulates a deontological version of effective altruism and (2) shows how one could integrate the ideas of EA and justice. I've been trying to do the second thing for a while, although as a pure consequentialist I focus more on distributive justice, so this paper is inspiring for me.

comment by Michael_Wiebe · 2020-08-22T19:57:24.007Z · EA(p) · GW(p)


this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion

What do you mean by this? Isn't risk aversion just a fact about the utility function? You can maximize expected utility no matter how the utility function is shaped.

Replies from: MichaelStJules
comment by MichaelStJules · 2020-08-22T20:28:31.670Z · EA(p) · GW(p)

Ah, we use utility in two ways: the social welfare function whose expected value you maximize, and the welfares of individuals on which your social welfare function depends. You can be a risk-averse utilitarian, for example, with a social welfare function like f(∑ᵢ uᵢ), where the uᵢ are the individual utilities/welfares and f is nondecreasing and concave.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2020-08-24T02:25:08.167Z · EA(p) · GW(p)

Hm, I've never seen the use of f like that. Can you point to an example?

Replies from: MichaelStJules
comment by MichaelStJules · 2020-08-24T04:07:18.254Z · EA(p) · GW(p)

An example function f, or an example where someone actually recommended or used a particular function f?

I don't know of any of the latter, but using an increasing and bounded f has come up in some discussions about infinite ethics (although it couldn't be concave towards −∞). I discuss bounded utility functions here [EA(p) · GW(p)].

An example function is f(x) = 1 − e^(−x). See this link for a graph. It's strictly increasing and strictly concave everywhere, and bounded above, but not below.
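
One function with exactly these properties (strictly increasing, strictly concave everywhere, bounded above but not below) is f(x) = 1 − e^(−x); a quick numerical sanity check:

```python
import math

# A function with the stated properties: strictly increasing, strictly
# concave, bounded above (by 1) but not below.
f = lambda x: 1 - math.exp(-x)

xs = [x / 10 for x in range(-50, 51)]  # grid from -5.0 to 5.0
increasing = all(f(a) < f(b) for a, b in zip(xs, xs[1:]))
# Concavity: the value at each midpoint lies above the chord's midpoint.
concave = all(f((a + b) / 2) > (f(a) + f(b)) / 2 for a, b in zip(xs, xs[2:]))
# Bounded above by 1 (for very large x, floating point rounds f(x) to 1.0).
bounded_above = all(f(x) <= 1 for x in xs + [50, 1000])
print(increasing, concave, bounded_above)  # True True True
```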

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2020-08-24T16:49:54.895Z · EA(p) · GW(p)

Yes, I meant an example of someone using f in this way. It doesn't seem to be standard in welfare economics.

comment by Jan_Kulveit · 2020-09-10T10:31:32.630Z · EA(p) · GW(p)

Quick reaction:

I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).

My experience so far is

  • some of the foundations / low hanging sensible fruits were discovered
  • when moving beyond that, I often run into questions which are some sort of "crucial consideration" for prioritisation research, but the research/understanding is often just not there.
  • often work on these "gaps" seems more interesting and tractable than trying to do some sort of "lets try to ignore this gap and move on" move

A few examples, where in some cases I got around to writing something:

  • Nonlinear perception of happiness [LW · GW] - if you try to add utility across time-person-moments, it's plausible you should log-transform it (or non-linearly transform it). Sums and exponentiation do not commute, so this is plausibly a crucial consideration for the part of utilitarian calculations trying to be based on some sort of empirical observation like "pain is bad".
  • Multi-agent minds and predictive processing [LW · GW] - while this is framed as about AI alignment, the super-short version of why this is relevant for prioritisation is: theories of human values depend on what mathematical structures you use to represent those values. If your prioritisation depends on your values, this is possibly important.
  • Another example could be the style of thought explained in Eliezer's "Inadequate Equilibria". While you may not count it as "prioritisation research", I'm happy to argue the content is crucially important for prioritisation work on institutional change or policy work. I spent some time thinking about "how to overcome inadequate equilibria", which leads to topics from game theory, complex systems, etc.
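
To illustrate the point in the first bullet that summing and (log-)transforming do not commute, here is a short numeric sketch (the populations are invented for illustration): which of two populations counts as better can flip depending on whether you aggregate raw welfare or log-transformed welfare.

```python
import math

# Two lives of happiness 10 each, vs. one life at 1 and one at 50.
A, B = [10, 10], [1, 50]

raw_sum = (sum(A), sum(B))            # B wins on raw sums: 20 < 51
log_sum = (sum(map(math.log, A)),     # A wins after log-transforming first:
           sum(map(math.log, B)))     # log(10)+log(10) > log(1)+log(50)
print(raw_sum[0] < raw_sum[1], log_sum[0] > log_sum[1])  # True True
```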

II. My guess is there are more people who work in a similar mode: trying to basically 'build as good a world model as you can', dive into problems you run into, and at the end prioritise informally based on such a model. Typically I would expect such a model to be partly implicit / some sort of multi-model ensemble / ...

While this may not create visible outcomes labeled as prioritisation, I think it's an important part of what's happening now.

comment by Tobias_Baumann · 2020-08-16T16:42:33.401Z · EA(p) · GW(p)

Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".

However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine [EA(p) · GW(p)] - I'd be curious if you find those examples convincing.

Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suffering-focused perspective, which is one form of "different views" - though perhaps this is not what you had in mind. (I think it might be good to clarify in more detail what sort of work you want to see, because the term "cause prioritisation research" may mean very different things to different people.)

Replies from: weeatquince
comment by weeatquince · 2020-08-18T09:29:34.047Z · EA(p) · GW(p)

Hi Tobias, Thank you for the comment. Yes, very glad for CLR etc. and all the s-risk research.

An interesting thing I noted when reading through your recent comment [EA(p) · GW(p)] is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing on AI risk right now. They suggest that to date the community has perhaps moved too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continuing to explore.

I don’t really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening, or evidence that we as a community under-invest in exploration?

Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.

Replies from: Tobias_Baumann, MichaelA
comment by Tobias_Baumann · 2020-08-19T09:07:41.897Z · EA(p) · GW(p)

Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)

comment by MichaelA · 2020-08-18T18:38:14.042Z · EA(p) · GW(p)

An interesting thing I noted when reading through your recent comment [EA(p) · GW(p)] is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing right now on AI risk now.

The post Tobias was commenting on requested "novel major" insights specifically. This guarantees that the examples provided would be ones that broadened EA, expanded its horizons, and pushed back on whatever priorities EA had before 2015. So I don't think we should read anything into the fact that a high proportion of the examples were of that kind, rather than e.g. refinements of existing ideas or object-level work within particular cause areas (since the question excluded such things).

(That said, I do think that the number and nature of examples we can come up with in answering that question is relevant to how useful further cause prioritisation research would be. In particular, the fact that commenters came up with some examples rather than 0 examples seems to be evidence that some cause prioritisation research occurred and was useful over the last 5 years. And the fact they came up with relatively few examples is evidence that relatively little such research occurred or was useful. And this could perhaps inform our predictions about the future.)

comment by david_reinstein · 2021-06-07T17:11:11.181Z · EA(p) · GW(p)

I'm doing a series of recordings of EA Forum posts on my "found in the struce" podcast, also delving into the links and with my own comments.

  • I've just done an episode on the present post HERE

  • I also did one on Ben Todd's post HERE

  • Next I'll do one on the comments section on this post, I think

Let me know your thoughts, and if it's useful. I think you can also engage directly with the Anchor app by leaving a voice response or something.

comment by BrownHairedEevee (evelynciara) · 2020-08-16T16:54:36.287Z · EA(p) · GW(p)

I agree wholeheartedly with this! Strong upvote from me.

I agree that cause prioritization research in EA focuses almost entirely on utilitarian and longtermist views. There's substantial diversity of ethical theories within this space, but I bet that most of the world's population are not longtermist utilitarians. I'd like to see more research trying to apply cause prioritization to non-utilitarian worldviews such as ones that emphasize distributive justice.

One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.

Fully agree, but I think it's ironic (in a good way) that your proposed solution is "more global priorities research." When I see some of 80K's more recent advice, I think, "Dude, I already sank 4 years of college into studying CS and training to be a software engineer and now you expect me to shift into research or public policy jobs?" Now I know they don't expect everyone to follow their priority paths, and I'm strongly thinking about shifting into AI safety or data science anyway. But I often feel discouraged because my skill set doesn't match what the community thinks it needs most.

I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or knightian uncertainty) used in policy design, military decision making and climate science but rarely discussed in EA.

I wouldn't know how to assess this claim, but this is a very good point. I'm glad you're writing a paper about this.

Finally, I love the style of humor you use in this post.

Replies from: weeatquince
comment by weeatquince · 2020-08-18T09:38:59.150Z · EA(p) · GW(p)

Hi evelynciara, Thank you so much for your positivity and for complimenting my writing.

Also to say, do not feel discouraged. It is super unclear exactly what the community needs, and I think we should each be doing what we can with the skills we have and see what form that takes.

comment by MichaelPlant · 2020-08-24T11:22:15.126Z · EA(p) · GW(p)

Thanks very much for writing this up Sam. Two points from my perspective at the Happier Lives Institute, who you kindly mention and is a new entrant to cause prioritisation work.

First, you say this on theories of change:

But for a new organisation to solely focus on doing the research that they believed would be most useful for improving the world it is unclear what the theory of change would be. Some options are:
Do research → build audience on quality of research → then influence audience
Do research + persuade other organisations to use your research → influence their audiences and money

I think this nails the difficulty for new cause prioritisation research (where 'new' means 'not being done by an existing EA organisation'). The existing organisations are the 'gatekeepers' for resources but doing novel cause prioritisation work requires, of necessity, doing work those organisations themselves consider low-priority (otherwise they would do it themselves). This creates a tension: funders often want potential entrants to show they have 'buy-in' from existing orgs. But the more novel the project, the less 'buy-in' it will have, and so the less chance it gets off the ground. I confess I don't have a solution for this, other than that, if funders want to see new research, they need to be prepared to back it themselves.

Second, you say you'd like to see research on

unexplored areas that could be highly impactful such as access to painkillers or mental health

I'm pleased to say HLI is working on both those areas - see our April update [EA · GW].

Replies from: Michelle_Hutchinson
comment by Michelle_Hutchinson · 2020-08-26T11:21:36.891Z · EA(p) · GW(p)

I agree that setting up new orgs is really challenging. I think this maybe oversells the difficulty of getting buy in from existing orgs in a way that might unduly put people off trying to set up new projects though.

My main experience with this is setting up the Global Priorities Institute. GPI does fairly different work from other EA orgs (though some overlap with FHI), and is much more foundational/theoretic than typical ones. You might expect that to get extra push back from EAs, given that the theory of change is of necessity less direct than for orgs like Open Phil. I was in the fortunate position of already working with CEA, which of course made things easier. And getting funding from Open Phil was definitely a long process. But I actually found it really helpful. The kind of docs etc they asked for were ones that it was useful for us to produce (for example pinning down our vision going forward, including milestones that would indicate we were or weren't on track), and their comments on our strategy and work were helpful for improving them.

I think some things that helped, and that others might find useful, were:

  • Doing a bunch of consultation early on in the process. That improved the idea and the project from the start, and (I expect) meant that others who I hoped would support the project had a better sense of what it was trying to achieve, and that we would be open and responsive to their feedback. This latter seems like it could go some way to allaying people's worries about new projects, by giving people a sense that if they see a project going wrong in a way they think could end up net negative, the people running it will be keen to hear that and to pivot.
  • For docs I sent people asking for input, spending time to make sure they were as concise and clear as possible. I find this pretty challenging, and definitely more time consuming than writing longer docs. But it really increases people's willingness to give comments. I also think it can improve their understanding of the project (because they get a better snapshot for a given amount of reading time) and therefore the usefulness of comments.
  • Linked to the above, asking for help from people in a really targeted way: trying to find the people who would be most helpful for answering specific questions and improving specific aspects of the project, and then making concrete asks which made clear why they in particular would be helpful for answering this. Using that approach, I was surprised how helpful total strangers were (though this may be partly because academics are used to collaborating with strangers, so are particularly helpful). I think that also had useful knock on effects, because others were happy we were getting (and acting on!) advice from experts.

Something I still find hard, but am trying to do more in my current role, is get input and advice from people who are sceptical as well as those who are broadly supportive. It seems useful to try to really flesh out the strongest versions of concerns with your project, and how to mitigate those. It also seems likely to increase buy in for your project because it shows you're keen to consider different worldviews and to act on rather than minimise concerns.

comment by JP Addison (jpaddison) · 2020-08-21T17:56:13.979Z · EA(p) · GW(p)

I think you did a really good job nailing the emotional tenor of this post and I think it's great.

comment by Venkatesh · 2021-04-02T07:32:04.471Z · EA(p) · GW(p)

Sorry for digging up this old post. But it was mentioned in the Jan 2021 EA forum Prize report published today and that is how I got here.

This comment assumes that Cause Prioritization (CP) is a cause area that requires people with width (worked across different cause areas) rather than depth (worked on a single cause area) of knowledge. That is, they need to know something about several cause areas instead of deeply understanding one of them. Would love to hear from CP researchers or others who would disagree.

  1. Maybe CP is an excellent path for some people in mid/late career. I think there could be some people in the middle of their career who have width rather than depth of knowledge. I might be wrong but it feels like the current advice for mid-career folks from 80k hours (See this 80k hours podcast episode discussion for example) seems to focus on people with skill depth alone. Further, I also think 80k hours may actually be creating people who have skill width by encouraging people to experiment with working on different cause areas until they find the best personal fit. What if we could tell them - "Experimented a lot? Have a lot of width? Try CP!"

  2. I also feel like it would be difficult for people in their early career to rationalize working on CP. Personally, as someone in their early career, I feel like I don't fully understand even one of the cause areas of interest to EAs properly. How can I then hope to understand multiple of them, find those not yet known, and on top of that prioritize them all!? Now, there is good reason to believe EA is a relatively young movement (majority age between 25-34 [EA · GW]) and since young people can't rationalize working on CP, we are seeing relatively less research on this.

  3. Maybe as EAs grow older eventually CP research will gain steam. Maybe their depth could also give them some width. At a later stage, current EAs working on a specific cause area could feel, "Having done specialized work all these years, I am beginning to see some ways I can generalize this stuff. Maybe this generalization is the next big impactful thing I can do" and then get into CP. Maybe some EAs already realized this and have even planned their career so that they can do CP at a later stage. So this whole thing could just be a matter of time. But that doesn't mean we should not worry - what if at the stage when EAs want to generalize we don't have the structures in place for them to pursue it?

comment by RayTaylor · 2020-09-10T08:48:18.530Z · EA(p) · GW(p)

>I like the idea of building "resilience" instead of going after specific causes.

That's almost exactly the approach we took in ALLFED, treating the more likely GCR and Xrisk scenarios as a "basket of risks"...
... and then looking at how to build resilience and recovery capacity for all of them, with an initial focus on recovering food supply.
We now have more than 20 EA volunteers at ALLFED, in a range of disciplines from engineering to history, so clearly this makes sense to people.

>For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it's possible that something else weird will cause catastrophe in 15 years.

Most likely a "cascading risk scenario" ... (as covid is, without yet being a GCR) ...
.... or what EA Matthijs Maas calls a "boring apocalypse".

>So experimenting with broad interventions that seem "good no matter what" seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.

Yes, the DRR (disaster risk reduction) discipline gave us structures and processes, and enabled us to bridge across to UNDRR, a profession of disaster people, insights into preparedness-response-recovery which we are scaling up to whole-continent and whole-planet scale, etc.

comment by Magnus Vinding (MagnusVinding) · 2020-08-28T22:07:42.797Z · EA(p) · GW(p)

Thanks for writing this post! :-)

Two points:

i. On how we think about cause prioritization, and what comes before

2. Consideration of different views and ethics and how this affects what causes might be most important.

It’s not quite clear to me what this means. But it seems related to a broader point that I think is generally under-appreciated, or at least rarely acknowledged, namely that cause prioritization is highly value relative.

The causes and interventions that are optimal relative to one value system are unlikely to be optimal relative to another value system (which isn't to say that there aren't some causes and interventions that are robustly good on many different value systems, as there plausibly are [EA · GW], and identifying novel such causes and interventions would be a great win for everyone; but then it is also commensurately difficult to identify new such causes and have much confidence in them given both our great empirical uncertainty and the necessarily tight constraints).

I think it makes sense that people do cause prioritization based on the values, or the rough class of values, that they find most plausible. Provided, of course, that those values have been reflected on quite carefully in the first place, and scrutinized in light of the strongest counterarguments and alternative views on offer.

This is where I see a somewhat mysterious gap in EA, more fundamental and even more gaping than the cause prioritization gap highlighted here: there is surprisingly little reflection on and discussion of values (something I also noted in this post [EA · GW], along with some speculations as to what might explain this gap).

After all, cause prioritization depends crucially on the fundamental values based on which one is trying to prioritize (a crude illustration), so this is, in a sense, the very first step on the path toward thoroughly reasoned cause prioritization.

ii. On the apparent lack of progress

As hinted in Zoe's post [EA · GW], it seems that much (most?) cutting edge cause prioritization research is found in non-public documents these days, which makes it appear like there is much less research than there in fact is.

This is admittedly problematic in that it makes it difficult to get good critiques of the research in question, especially from skeptical outsiders, and it also makes it difficult for outsiders to know what in fact animates the priorities of different EA agents and orgs. It may well be that it is best to keep most research secret, all things considered, but I think it’s worth being transparent about the fact that there is a lot that is non-public, and that this does pose problems, in various ways, including epistemically.

Replies from: MichaelA
comment by MichaelA · 2020-08-29T16:39:41.897Z · EA(p) · GW(p)

This post [EA · GW] - which I found interesting and useful - feels relevant in relation to your first point. A relevant excerpt:

We can approach ‘figuring out what to do’ at three different levels of directness (which are inspired by the same kind of goal hierarchy as the Values-to-Actions Chain [EA · GW]). 

Most indirectly, we can ask ‘what should we value?’ We call that values research, which is roughly the same as ethics. 

From our values, we can derive a high-level goal to strive for. For longtermist values, such a goal could be to minimize existential risk.[1] [EA · GW] For another set of values, such as animal-inclusive neartermism, the high-level goal could be to minimize the aggregate suffering of farm animals.[2] [EA · GW]

More directly, we can ask ‘given our goal, how can we best achieve it?’ We call the research to answer that question strategy research. The result of strategy research is a number of strategic goals embedded in a strategic plan. For example, in existential risk reduction, strategy research could determine how to best allocate resources between reducing various existential risks based on their relative risk levels and timelines.

Most directly, we can ask ‘given our strategic plan, how should we execute it?’ We call the research to answer that question tactics research. Tactics research is similar to strategy research, but is at a more direct level. This makes tactics more specific. For example, in existential risk reduction, tactics research could be taking one of the sub goals from a strategic plan, say ‘reduce the competitive dynamics surrounding human-level AI’, and ask a specific question that deals with part of the issue: ‘How can we foster trust and cooperation between the US and Chinese governments on AI development?’ In general, less direct questions have more widely relevant answers, but they also provide less specific recommendations for actions to take.

Finally, the plans can be implemented based on the insights from the three research levels.

(I added two line breaks and changed where the diagram was, compared to the original text.) 

(That post was written on behalf of my former employer, but not by me, and before I was aware of them.)

comment by MichaelStJules · 2020-08-22T20:10:43.406Z · EA(p) · GW(p)

I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or knightian uncertainty) used in policy design, military decision making and climate science but rarely discussed in EA.

I think GPI is doing research on this, under cluelessness. See, for example:

comment by david_reinstein · 2020-08-21T15:35:52.139Z · EA(p) · GW(p)

Great post! I laid down a variety of comments and suggestions within your post. If you want to check them out, you need to install the browser add-in and get a free account to see them.

I prefer to comment within the text rather than here at the bottom, cutting and pasting quotes. Has anyone else here tried it?

(By the way, I'm an academic economist. I don't have any stake in the tool; I just like it.)

comment by MichaelStJules · 2020-08-19T06:04:49.190Z · EA(p) · GW(p)

I think the EA animal space is going beyond RCTs out of necessity, since RCTs have been hard to come by other than for diet change interventions (and though their quality was previously quite poor, it has improved recently). Humane League Labs is researching the causal effects of corporate campaigns from observational data.

And you've already pointed out OPIS and the Happier Lives Institute, but HLI was incubated by Charity Entrepreneurship, which I think is generally looking beyond RCTs. They just put out their next round of recommended charities to incubate [EA · GW].

comment by Eirik (EirikMofoss) · 2020-11-22T20:03:23.348Z · EA(p) · GW(p)

I fully agree with this!

"it doesn’t feel like the EA community has thought much about policy. For example there is a huge focus on AI policy, but the justification for this is weak. Even if you fully believe the longtermist arguments that top programmers should work on AI alignment, it does not immediately follow that good policy people can have more long term impact in AI policy compared to policy on resilience, macroeconomics, institution design, nuclear non-proliferation, climate change, democracy promotion, political polarisation, etc, etc."

comment by EgilElenius · 2020-08-24T18:07:36.974Z · EA(p) · GW(p)

Some ideas coming to mind, not too well refined:

Reading this post, I came to think of this old joke:

A police officer sees a drunken man intently searching the ground near a lamppost and asks him the goal of his quest. The inebriate replies that he is looking for his car keys, and the officer helps for a few minutes without success; then he asks whether the man is certain that he dropped the keys near the lamppost.
“No,” is the reply, “I lost the keys somewhere across the street.” “Why look here?” asks the surprised and irritated officer. “The light is much better here,” replies the man.

So, how could this be applied to cause prioritisation? For one, I think the area where the keys could be lost is quite large.

My second thought would be that "How do we prioritise what to do, to achieve the most good?" sounds to me partly like an existential question, a bit like "What is the meaning of life?" Perhaps this goes back a bit to the dropped keys, with the GP research being done focusing on the visible area of what can be done concretely. Trying to answer the question of global priorities without a grand narrative of what the globe is to become seems incomplete to me.

Insofar as the EA movement wants to answer the concrete question of how to create change according to one's values instead of discussing values as such, I would expect the different branches to remain interested in their respective agendas and not in how to compare them to one another. That would be counterproductive.

Also, despite EA's philosophical roots, I think perhaps not enough different parts of philosophy are being used. For example, if value and meaning are created by ourselves, what implications does that have for GPR? Has the subconscious been considered when it comes to increasing well-being? To me, the EA movement seems to operate within a humanistic, individualistic or similar worldview, and if a new grand narrative, like that outlined in Homo Deus or Digital Libido, were to come along while the EA movement stayed in the old paradigm, it could very well end up looking to outsiders as though its primary question of concern is akin to how many angels can dance on the point of a needle.

comment by Charlotte (CharlotteSiegmann) · 2020-08-16T07:33:23.387Z · EA(p) · GW(p)

Thank you very much for writing this up. However, I am not sure I understand your point, specifically what you are referring to in:

"3. Policy and beyond – not happening – 2/10". Are you referring to your explanation within the subsection on The Parliament? If so, this would make sense to me.

Replies from: weeatquince
comment by weeatquince · 2020-08-16T07:56:16.334Z · EA(p) · GW(p)

Yes that is correct. I have made some edits to clarify.