What should Founders Pledge research?

post by Halstead · 2019-09-09T17:41:04.073Z · score: 51 (19 votes) · EA · GW · 28 comments

Founders Pledge has recently significantly expanded its research team and is currently considering its research strategy for the next 12 months. This is important, as our pledge value is ~$2bn and counting. I would welcome suggestions on topics that could be promising for us to research going forward. These suggestions could be promising according to various different ethical and empirical premises, catering to:

Topics we are currently considering include:

Thoughts on these topics and suggestions for any others would be appreciated. Meta-thoughts on how to approach this selection task would also be handy.

Cheers!


28 comments

Comments sorted by top scores.

comment by RyanCarey · 2019-09-10T16:08:53.398Z · score: 22 (11 votes) · EA · GW

[My views only]

Thanks for putting up with my follow-up questions.

Out of the areas you mention, I'd be very interested in:

  • Improving science. Things like academia.edu and sci-hub have been interesting. Replacing LaTeX is interesting. Working on publishing incentives is also interesting. In general, there seems to be plenty of room for improvement!

I'd be interested in:

  • Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy intersected with great power relations or pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
  • Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this. Russia is actively trying to do the opposite. It would be interesting if more can be done in this space. Fact-checking websites and investigative journalism (Bellingcat) are interesting in this space too. Another interesting area is counteracting political corruption.
  • Sundry x-risks/GCRs

I'd be a little interested in:

  • Increasing economic growth

I think the others might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.

Out of those you haven't mentioned, but that seem similar, I'd also be interested in:

  • Promotion of effective altruism
  • Scholarships for people working on high-impact research
  • More on AI safety - OpenPhil seems to be funding high-prestige mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige unaligned figures (e.g. their fellows) but has mostly not funded low-mid prestige highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders mostly favor low-mid prestige highly-aligned targets to a greater extent, e.g. Paul's funding for AI safety research, and Paul and Carl argued to OpenPhil that they should fund MIRI more. I think there are residual opportunities to fund other low-mid prestige highly-aligned figures. [edited for clarity]
comment by Milan_Griffes · 2019-09-10T18:41:18.567Z · score: 16 (7 votes) · EA · GW

+1 to doing something with Sci-Hub.

Sci-Hub has had a huge positive impact. Finding ways to support it / make it more legal / defend it from rent-seeking academic publishers would be great.

comment by Halstead · 2019-09-11T09:18:30.403Z · score: 11 (4 votes) · EA · GW

Thanks a lot for this Ryan. Re promoting science, what do you make of the worry that the long-term sign of the effect of improving science is unclear, because it doesn't produce differential technological development and instead broadly accelerates the growth of all knowledge, including potentially harmful knowledge?

comment by RyanCarey · 2019-09-11T10:43:05.654Z · score: 9 (3 votes) · EA · GW

I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA / recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) are greater than the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.

comment by riceissa · 2019-09-11T02:13:51.022Z · score: 2 (2 votes) · EA · GW

various people's pressure on OpenPhil to fund MIRI

I'm curious what this is referring to. Are there specific instances of such pressure being applied on Open Phil that you could point to?

comment by Wei_Dai · 2019-09-11T22:08:27.649Z · score: 18 (6 votes) · EA · GW

Not sure if this counts, but I did make a critique [EA · GW] that Open Phil seemed to have evaluated MIRI in a biased way relative to OpenAI.

comment by RyanCarey · 2019-09-11T10:36:13.563Z · score: 5 (3 votes) · EA · GW

I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:

"Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)"
comment by KarolinaSarek · 2019-09-24T11:01:01.187Z · score: 10 (5 votes) · EA · GW

Thanks for asking this question. I support and follow the approach of asking relevant people in the space for input to a research agenda. I am happy to see that other organizations are also doing it.

Meta-thoughts on how to approach this selection task would also be handy.

Your question inspired me to write a short post on a methodology of systematically integrating stakeholders' and decision-makers' input into the research agenda. [EA · GW] You might find this meta-methodology helpful.

Out of the areas you mention, I'd be very interested in the following:

  • Animal product alternatives 6/10
  • Pain relief in developing countries 6/10
  • Improving science 9/10
Ideas not included on your list:
GiveWell recently published its list of areas they are planning to explore. I think some of them might be of interest to donors focused on improving the welfare of the current generation of humans, and to donors focused on high-income countries' problems.

  • Tobacco, alcohol, and sugar control
  • Air pollution regulation
  • Micronutrient fortification and biofortification
  • Improving government program selection
  • Improving government implementation
  • Immigration reform
  • Mosquito gene drives advocacy and research
  • Mental health (interventions comparison)
  • Sleep quality improvement

As you know, GW’s research is very diligent. Consequently, it takes a long time to finalize. I would be interested in having preliminary research conducted by other organizations.

Regarding donors focused on animal welfare:

  • Producers’ outreach, for example, providing subsidies for farmers interested in higher-welfare farming
  • CRISPR-based gene drives to address wild animals’ suffering
  • WAS intervention comparison
  • Affecting law and law enforcement focused on welfare improvements for chickens and fish [EA · GW] in Asia [EA · GW]
  • Insect welfare intervention comparison, for example, reducing silk production, providing painkillers for insects used in research, etc.

I am currently working on CE’s agenda for the next year in the areas of global poverty/health, animal advocacy, and mental health. At the end of September I will be able to list more areas and research questions worth investigating that CE cannot cover this year. I am narrowing down a list of around 400 research ideas across these three areas. Let me know if you are interested in hearing more about it.

comment by JanBrauner · 2019-09-22T23:40:33.904Z · score: 7 (2 votes) · EA · GW

cognitive enhancement research

comment by Joey · 2019-09-17T08:59:44.610Z · score: 5 (5 votes) · EA · GW

Here are a few different areas that look promising. Some of these are taken from other organizations’ lists of promising areas, but I expect more research on each of them to be high expected value.

  • Donors solely focused on high-income country problems.
    • Mental health research (that could help both high and low income countries).
    • Alcohol control
    • Sugar control
    • Salt control
    • Trans-fats control
    • Air pollution regulation
    • Metascience
    • Medical research
    • Lifestyle changes including "nudges" (e.g. more exercise, shorter commutes, behaviour, education)
    • Mindfulness education
    • Sleep quality improvement
  • Donors focused on animal welfare.
    • Wild animal suffering (non-meta, non-habitat destruction) interventions
    • Animal governmental policy, particularly in locations outside of the USA and EU.
    • Treating diseases that affect wild animals
    • Banning live bait fish
    • Transport and slaughter of turkeys
    • Pre-hatch sexing
    • Brexit related preservation of animal policy
  • Donors focused on improving the welfare of the current generation of humans.
    • Pain relief in poor countries
    • Contraceptives
    • Tobacco control
    • Lead paint regulation
    • Road traffic safety
    • Micronutrient fortification and biofortification
    • Sleep quality improvement
    • Immigration reform
    • Mosquito gene drives advocacy and research
    • Voluntary male circumcision
    • Research to increase crop yields
comment by RyanCarey · 2019-09-09T21:10:29.109Z · score: 4 (5 votes) · EA · GW

I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:

  • when founders are due to donate, you prompt them
  • you ask them what kind of advice they would like
  • you give them some research relevant to that, and do/don't make specific recommendations ???
  • they make donations directly

Is that how it actually happens?

comment by Halstead · 2019-09-09T21:43:08.667Z · score: 8 (5 votes) · EA · GW

Yes, it's something like that, except that we do make specific recommendations suited to their core values, and they typically make donations via our donor-advised fund rather than directly.

comment by RyanCarey · 2019-09-09T21:51:49.558Z · score: 2 (1 votes) · EA · GW

Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others if they're significantly bigger than those), such as the following?

  • Donors focused on the long-term future of sentient life
  • Donors focused on GCRs and existential risk
  • Improving science
  • Sundry x-risks/GCRs
  • Improving political institutions and political wisdom

comment by Halstead · 2019-09-09T22:18:53.272Z · score: 11 (6 votes) · EA · GW

I would expect it to be in the millions/yr, though I don't think I should throw about specific figures on the forum.

comment by RyanCarey · 2019-09-09T22:30:21.532Z · score: 2 (1 votes) · EA · GW

No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.

Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?

comment by Halstead · 2019-09-10T13:55:25.742Z · score: 9 (2 votes) · EA · GW

I think we will be able to convince enough of them to donate to high-impact areas regardless of what they are.

comment by Milan_Griffes · 2019-09-09T18:13:36.815Z · score: 4 (12 votes) · EA · GW

I'd love to see an independent dive into consciousness & moral patienthood.

Luke Muehlhauser did a thorough report on this a couple of years ago. As far as I know, that work is informing a lot of EA prioritization. It's quite opinionated, and I haven't seen much discussion of its conclusions (there's some in the AMA [EA · GW]; the topic definitely warrants more).

Consciousness and its relationship to morality is complicated enough & important enough that an independent pass seems high value.

Potential entry point: Integrated Information Theory is currently pretty prominent in neuroscience; I'd love to see an EA steelman of it. (Luke on IIT, after giving a brief explainer: "let me jump straight to my reservations about IIT.")

Also would be great to see an EA steelman of panpsychism, which is considered plausible by a bunch of philosophers and some scientists.

comment by MichaelStJules · 2019-09-09T19:44:57.775Z · score: 11 (5 votes) · EA · GW

Have you seen Rethink Priorities work on this? https://www.rethinkpriorities.org/invertebrate-sentience-table

While the purpose was to investigate invertebrate sentience, they also covered different species of vertebrates, plants and single-celled organisms for comparison.

comment by Milan_Griffes · 2019-09-09T19:57:14.986Z · score: 5 (3 votes) · EA · GW

I guess I'm desiring more of a common vocabulary here, maybe something like "here are some open questions about consciousness that are cruxy [LW · GW], here's where [our organization] ended up on each of those questions, here are some things that could change our mind."

Luke did a good job of this in his report. From a quick look at Rethink Priorities' consciousness stuff, I'm not sure what they concluded about the important open questions. (e.g. Where do they land on IIT? Where do they land on panpsychism? What premises would I have to hold to agree with their conclusions?)

comment by Peter_Hurford · 2019-09-10T16:30:59.589Z · score: 7 (4 votes) · EA · GW

I should probably only speak for myself and not the entire team, but I think the breakdown is something like:

Where do they land on IIT?

Quite skeptical / lean against

~

Where do they land on panpsychism?

Quite skeptical / lean against

~

What premises would I have to hold to agree with their conclusions?

The key assumptions are:

(1) epiphenomenalism (in the traditional sense) is false

(2) methodological naturalism

(3) "inference to the best explanation" is a worthwhile method in this case

~

here are some open questions about consciousness that are cruxy, here's where [our organization] ended up on each of those questions, here are some things that could change our mind

We largely chose not to do this because we mostly just agree with what Luke wrote and didn't think we would be able to meaningfully improve upon it.

comment by Milan_Griffes · 2019-09-10T18:34:44.825Z · score: 5 (3 votes) · EA · GW

Thanks!


We largely chose not to do this because we mostly just agree with what Luke wrote and didn't think we would be able to meaningfully improve upon it.

fwiw I found your comment really helpful & I think the RP content would benefit from including a sketch like this.

comment by Milan_Griffes · 2019-09-09T19:50:26.430Z · score: 5 (3 votes) · EA · GW

Thanks for highlighting; I had only thought a little about RP's work on consciousness. I'll take a closer look. (This essay [EA · GW] seems especially relevant.)

comment by Peter_Hurford · 2019-09-10T16:25:55.574Z · score: 5 (3 votes) · EA · GW

Yeah, I'd recommend reading that essay, the feature reports, and also the cause profile [EA · GW].

comment by Milan_Griffes · 2019-09-10T18:32:19.459Z · score: 2 (1 votes) · EA · GW

Got it, thanks!

comment by AidanGoth · 2019-09-10T17:00:38.356Z · score: 10 (3 votes) · EA · GW

Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT which goes into the details more than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.

comment by atlasunshrugged · 2019-09-11T17:27:21.234Z · score: 1 (4 votes) · EA · GW

Just wanted to mention that I also think improving political institutions and wisdom (and general capacity building) is quite interesting. Policy in general is a semi-neglected EA area that could be highly valuable: everything from advocating for known high-impact policies where they aren't yet in place (e.g. tobacco taxation) to examining new policies that could be implemented (e.g. novel ways of stopping illicit financial outflows from developing countries). I think GiveWell has also been looking into this field, so I'm sure they have some thoughts here. I've been researching tobacco tax policy, mainly in LMICs (and tobacco policies more broadly as a byproduct of that research), and am happy to chat about that if it's helpful, though I'm a relative novice in the field.

comment by EA-Basti · 2019-09-11T09:18:21.915Z · score: 1 (7 votes) · EA · GW

Mental health, especially in developing countries (e.g. a more thorough look at StrongMinds).

comment by beth · 2019-09-10T14:35:13.935Z · score: -2 (5 votes) · EA · GW

Fighting human rights violations around the globe.