Some research questions that you may want to tackle

post by Aaron Bergman (aaronb50) · 2022-07-09T04:26:05.628Z · EA · GW · 21 comments

Formerly titled "Write up my research ideas for someone else to tackle? Fine - you asked for it!"

Unrelatedly, thanks to Jessica McCurdy [EA · GW] for telling me to write down some of my research ideas and questions in case someone else wants to tackle one (or a few).

The list

1. Cause prio but for earning to give
1. As far as I know, SBF relied on his personal knowledge and intuition when deciding to try building FTX.
2. It doesn’t have to be this way! I can imagine a more systematic effort to identify and describe which earning to give opportunities are most promising. Is there a $100B idea with a 1% chance of working? A $1T idea with a 0.1% chance? I think we can and should find out.
2. Are there cheap and easy ways to kill fish quickly?
1. Right now, I estimate 250 million fish years are spent in agony each year as wild fish are killed by asphyxiation or being gutted alive, which takes a surprisingly long time to cause death. There must be a better way.
1. Related: can we just raise (farm) a ton of fish ourselves, but using humane practices, with donations subsidizing the cost difference relative to standard aquaculture?
3. From my red teaming project [EA · GW] on extinction risk reduction:
1. Unpacking which particular biorisk prevention activities seem robust to a set of plausible empirical and ethical assumptions and which do not; and
2. Seeking to identify any AI alignment research programs that would reduce s-risks by a greater magnitude than "mainstream" x-risk-oriented alignment research.
4. From my “half baked ideas comment” on the Forum:
1. Figure out how to put to good use some greater proportion of the approximately 1 Billion recent college grads who want to work at an "EA org"
1. This might look like a collective of independent-ish researchers?
2. There should be way more all-things-considered, direct comparisons between cause areas.
1. So I guess the research question is: what is the most important cause area to work on and/or donate to, all things considered?
1. No more “agreeing to disagree” - I want an (intellectual) fight to the death. Liberal-spending longtermists should make an affirmative case that this ethos is the best way to spend money on the margin, and objectors should argue that it isn’t.
2. In particular, I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular.
5. [Related to above] Is anyone actually arguing that neartermist, human-centric interventions are the most ethical way to spend time or money?
1. That’s not a rhetorical question! The hundreds of millions of dollars being directed to AMF et al. instead of some other charity or cause area should be more seriously justified or defended, IMO.
2. For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?
6. (As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?
1. I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.
7. What would an animal welfare movement with the ambition, epistemic quality, and enthusiasm (and maybe funding) of the longtermist movement look like?
8. [I might tackle this] What can AI safety learn from human brains’ bilateral asymmetry?
1. The whole “brain hemisphere difference” thing is surrounded by plenty of pop-science myths, but there really are some quite profound differences, as described in Iain McGilchrist’s The Master and His Emissary.
9. What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?
10. S-risk people: what can we actually do, in the real world and the foreseeable future, to decrease s-risks?
1. It seems to me most of this research is quite abstract and theoretical - which may not make sense if transformative AI is only a few years away!
11. It seems like the default view is that some time in the future, the world and/or EA is going to decide that AI systems are sentient. This seems totally implausible.
1. What should we do under radical uncertainty as to whether any given “thing” or process is sentient?
2. What empirical observations, if any, should change our actions, plans, or ethics?

comment by saulius · 2022-07-09T07:53:14.991Z · EA(p) · GW(p)

Hey, thanks for writing this, there are some interesting ideas here. A bit of a nitpick, but I’m not sure that your “estimate 250 million fish years are spent in agony each year as wild fish are killed by asphyxiation or being gutted alive” is quite accurate. You are extrapolating from the length of time it takes for herring, cod, whiting, sole, dab and plaice to suffocate to all wild-caught fish. But I think that all of these are rather big fish and they likely were studied and mentioned by FishCount because it takes so long for them to suffocate. For example, 17%–65% of all wild-caught fishes are anchovies (295–908 billion fishes per year), and this video claims that “anchovies die immediately when they are out of water” (though I don’t know how reliable that video is). I tried to estimate the same things (after reading the same text) here [EA · GW]. I estimated that 0.7–49 million herring, cod, whiting, sole, dab, and plaice are suffocating in the air after being landed at any time (and didn’t make an estimate for other fishes). Also, there’s already some research on humane slaughter of fish, some of it is funded by Open Philanthropy, I don’t know if it is neglected or not.

Replies from: aaronb50
comment by Aaron Bergman (aaronb50) · 2022-07-09T22:37:57.544Z · EA(p) · GW(p)

Thanks for the correction, that is definitely good news for the fish (albeit slightly bad news for my research judgement lol)!

Although another consideration pointing in the other direction is that larger fish probably have larger brains with more neurons, which may render them more morally relevant.

comment by Henry Howard · 2022-07-10T04:12:09.765Z · EA(p) · GW(p)

Re. 5: I think more people are in this camp than you realise, maybe they're just not well-represented on the forum and twitter. I'm a global development nowist because:

1. Affecting the future predictably is hard and most longtermist projects I've seen aren't obviously going to have a positive impact. They might even be negative (e.g. stalling AI might be bad).

2. Beyond freeing pigs and chooks from cages, animal welfare concern leads to absurd conclusions when you start thinking about wild animal suffering or try to quantify insect suffering.

3. Global development probably has positive long-term effects on human wellbeing (development begets development, and accelerates technology by enabling more people to take part in innovation).

4. Global development will probably have positive effects on animal welfare anyway and may even be necessary in a lot of cases (richer countries are generally the ones that adopt better animal welfare rules).

5. Global development is more broadly appealing than fish rights and AI safety. Focus on longtermism and fringe animal welfare issues is part of what makes the EA label and community alienating to people.

Replies from: aaronb50
comment by Aaron Bergman (aaronb50) · 2022-07-10T05:30:45.881Z · EA(p) · GW(p)

Thanks for representing the global dev camp!

2. Beyond freeing pigs and chooks from cages, animal welfare concern leads to absurd conclusions when you start thinking about wild animal suffering or try to quantify insect suffering.

Eh, I agree the conclusions might be counterintuitive and even weird, but disagree pretty strongly that they're absurd.

Even granting that only freeing mammals from cages is good and worthy, I'm (subjectively, not super rigorously) quite confident that indeed getting chickens and/or pigs out of cages is both more robustly good and ethically more important than any of the GiveWell charities.

4. Global development will probably have positive effects on animal welfare anyway and may even be necessary in a lot of cases (richer countries are generally the ones that adopt better animal welfare rules)

Not impossible, but it seems very unlikely, and it would be a suspicious coincidence if helping humans happened to also be the best way to help animals. In fact, I don't think it comes very close, though I'm unsure what the sign is.

5. Global development is more broadly appealing than fish rights and AI safety. Focus on longtermism and fringe animal welfare issues is part of what makes the EA label and community alienating to people.

I agree with the first sentence, which is why I suspect that most of the ethical value from global dev runs through community building/attracting newcomers and optics, and this effect is plausibly pretty big in magnitude. But I think we should have a very high bar for not doing something morally important because some people might think it's weird or silly, even if some amount of activity optimized for broad appeal is warranted

comment by Harrison Durland (Harrison D) · 2022-07-09T22:41:12.454Z · EA(p) · GW(p)

Re your point 4-1: I wrote a relevant post some number of months ago and never really got a great answer: https://forum.effectivealtruism.org/posts/HZacQkvLLeLKT3a6j/how-might-a-herd-of-interns-help-with-ai-or-biosecurity [EA · GW]

And now, here I am going into what may be my ~6th trimester of “not having an existential risk reduction (or relevant) job or internship despite wanting to get one”… 🙃

Replies from: aaronb50
comment by Aaron Bergman (aaronb50) · 2022-07-10T01:45:56.012Z · EA(p) · GW(p)

This seems like a major failure, and FWIW I think you should currently be getting paid to do some sort of longtermism-relevant research (or other knowledge work), and it's a failure that you're not. 🙃 indeed lol

Though I should register that I know Harrison IRL, so infer whatever biases you should.

comment by Zach Stein-Perlman (zsp) · 2022-07-09T05:15:52.371Z · EA(p) · GW(p)

I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular

I think this is wildly overdetermined in favor of longtermism. For example, I think at the current margins, a well-spent dollar has a ~10^-13 chance of making the future go much better, with a value probably more than 10^50 happy human lives (and with a much greater expected value -- arguably infinite, but that's another conversation). So the marginal longtermist dollar is worth much more than 10^37 happy lives in expectation. (That's way more than the number of fish that have ever lived, but for the sake of having a number I think we can safely upper-bound the direct effect of the marginal animal-welfare dollar at 10^0 happy lives.) Given utilitarianism, even if you nudge my numbers quite a bit, I think longtermism blows animal welfare out of the water.

Of course, I don't think a longtermist dollar is actually ~10^40 times more effective than an animal-welfare one, because of miscellaneous side effects of animal welfare spending on the long-term future. But I think those side effects dominate. (I have heard an EA working on animal welfare say that they think the effects of their work are dominated basically by side effects on humans' attitudes.) And presumably the side effects aren't greater than the benefits of funding longtermist projects.
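(For concreteness, the arithmetic in this estimate can be checked in a couple of lines. This is a sketch using only the stated numbers; the variable names are illustrative, not part of the original comment.)

```python
# A sketch of the Fermi estimate above, using the stated made-up numbers.

P_SHIFT = 1e-13       # chance a well-spent marginal dollar makes the future go much better
FUTURE_VALUE = 1e50   # lower bound on the value of a good future, in happy human lives
ANIMAL_DIRECT = 1e0   # assumed upper bound on direct effect of a marginal animal-welfare dollar

longtermist_ev = P_SHIFT * FUTURE_VALUE   # expected happy lives per longtermist dollar, ~1e37
ratio = longtermist_ev / ANIMAL_DIRECT    # implied effectiveness ratio, ~1e37

print(f"longtermist dollar: ~{longtermist_ev:.0e} happy lives in expectation")
print(f"ratio vs. animal welfare: ~{ratio:.0e}")
```

Nudging these inputs by even several orders of magnitude leaves the ratio enormous, which is the point of the "wildly overdetermined" claim.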

Replies from: aaronb50, Lukas_Finnveden
comment by Aaron Bergman (aaronb50) · 2022-07-09T05:37:25.436Z · EA(p) · GW(p)

I tend to think you’re right, but don’t think it’s wildly overdetermined - mostly because animal suffering reduction seems more robustly good than does preventing extinction (which I realize is not the sole or explicit goal of longtermism, but is sometimes an intermediate goal)

Replies from: MichaelStJules, zsp
comment by MichaelStJules · 2022-07-09T09:45:14.955Z · EA(p) · GW(p)

You can also compare s-risk reduction work with animal welfare.

comment by Zach Stein-Perlman (zsp) · 2022-07-11T23:45:53.065Z · EA(p) · GW(p)

You asked for an analysis "even from a total utilitarian, longtermist perspective." From that perspective, I claim that preventing extinction clearly has astronomical (positive) expected value, since variance between possible futures is dominated by what the cosmic endowment is optimized for, and optimizing for utility is much more likely than optimizing for disutility. If you disagree, I'd be interested to hear why, here or on a call.

comment by Lukas_Finnveden · 2022-07-11T23:30:24.607Z · EA(p) · GW(p)

A proper treatment of this should take into account that short-term helping also might have positive effects in lots of simulations to a much greater extent than long-term helping. https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism

Replies from: zsp
comment by Zach Stein-Perlman (zsp) · 2022-07-11T23:40:15.843Z · EA(p) · GW(p)

Sure, want to change the numbers by a factor of, say, 10^12 to account for simulation? The long-term effects still dominate. (Maybe taking actions to influence our simulators is more effective than trying to cause improvements in the long-term of our universe, but that isn't an argument for doing naive short-term interventions.)

Replies from: Lukas_Finnveden
comment by Lukas_Finnveden · 2022-07-12T09:59:26.893Z · EA(p) · GW(p)

10^12 might be too low. Making up some numbers: If future civilizations can create 10^50 lives, and we think there's an 0.1% chance that 0.01% of that will be spent on ancestor simulations, then that's 10^43 expected lives in ancestor simulations. If each such simulation uses 10^12 lives worth of compute, that's a 10^31 multiplier on short-term helping.
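(The made-up numbers above can be multiplied out to verify the 10^43 and 10^31 figures. A sketch, with illustrative variable names:)

```python
# Checking the back-of-the-envelope simulation estimate above.

FUTURE_LIVES = 1e50    # lives future civilizations could create
P_SIMULATES = 1e-3     # 0.1% chance part of the endowment goes to ancestor simulations
SIM_FRACTION = 1e-4    # 0.01% of the endowment spent on them, in that case
LIVES_PER_SIM = 1e12   # lives worth of compute per ancestor simulation

expected_sim_lives = FUTURE_LIVES * P_SIMULATES * SIM_FRACTION   # ~1e43
multiplier = expected_sim_lives / LIVES_PER_SIM                  # ~1e31 simulated copies of "now"

print(f"expected lives in ancestor simulations: ~{expected_sim_lives:.0e}")
print(f"multiplier on short-term helping: ~{multiplier:.0e}")
```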

comment by Derek · 2022-07-11T15:09:46.202Z · EA(p) · GW(p)

On fish, there were several comments here [EA · GW], including this one [EA · GW] from me.

The 2018 Humane Slaughter Association report was probably the best info available at the time; not sure what's happened since.

Replies from: aaronb50
comment by Aaron Bergman (aaronb50) · 2022-07-16T08:14:30.590Z · EA(p) · GW(p)

Wow thanks so much, super valuable info! Too bad I can't give it more than four karma haha

There is a lot of potential in fish welfare/stunning. In addition to what others have mentioned, IIRC from some reading a few years ago:

• The greatest bottleneck in humane slaughter is research, e.g. determining parameters/designing machines for stunning each major species, as they differ so much. There just aren't many experts in this field, and the leading researchers are mostly very busy (and pretty old), but perhaps financial incentives would persuade some people with the right sort of background to go into this area.
• As well as electrical and percussive stunning, anaesthetising with clove oil/eugenol seems a promising and under-researched method of reducing the pain of slaughter. Because it may just involve adding a liquid/powder to a tank containing the fish, it may also require less tailoring to each species than other methods (though it can affect the flavour if "too much" is used). I have some notes on this if anyone is interested.
• Crustastun could be mass-produced and supplied cheaply/freely to places that would otherwise boil crustaceans alive. I seem to recall a French lawyer had invented another machine that was even better (or cheaper) but was too busy to promote it; maybe EAs could buy the patent or something?

One of the reasons it took so long for me to reply is that I kinda fell into a rabbit hole investigating whether buying the Crustastun patent, manufacturing it, and giving it away would be a good intervention. It all looked good until I finally thought to look into lobsters themselves, and it turns out that they have way fewer neurons - ~100,000 according to an OpenPhil report (lost the link) - which is 2 orders of magnitude lower than even very small fish, and roughly a millionth as many as humans. And crabs are very similar.

FWIW, I was not at all expecting to find this, and had no idea crustaceans had such disproportionately small brains. May as well link this Google doc as what I had written before I met some inconvenient statistics.

I know not everyone is convinced that linear neuron comparisons are ideal, but they intuitively seem unlikely to be too far off from what "matters". Given this, I'm gonna conclude that Crustastun isn't worth pursuing unless we get more, different info about lobster sentience.
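(A quick sketch of the neuron-count comparison behind this conclusion. The fish and human figures are ballpark values from the literature - ~10 million neurons for a very small fish like a zebrafish, ~86 billion for a human - not numbers from the OpenPhil report mentioned above.)

```python
# Rough neuron-count comparison; all figures are order-of-magnitude ballparks.

LOBSTER = 1e5       # ~100,000 neurons, per the OpenPhil report cited above
SMALL_FISH = 1e7    # ~10 million neurons, e.g. an adult zebrafish
HUMAN = 8.6e10      # ~86 billion neurons

print(f"lobster vs. small fish: ~{SMALL_FISH / LOBSTER:.0f}x fewer")  # ~100x (2 orders of magnitude)
print(f"lobster vs. human: ~{HUMAN / LOBSTER:.0e}x fewer")            # ~1e6x (roughly a millionth)
```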

On to the other bullet points!

Replies from: Derek
comment by Derek · 2022-07-19T18:08:44.501Z · EA(p) · GW(p)

Glad you found it useful. I am not qualified to comment on the role of neuron count in sentience; you may want to look at work by Jason Schukraft and others at Rethink Priorities on animal sentience and/or get in touch with them.

If you haven't already, you may also want to review the 2018 Humane Slaughter Association report, which was the best I could find in early 2019. While looking for it, I also just came across one from Compassion in World Farming, which I don't think I've read.

comment by levin · 2022-07-09T15:25:18.224Z · EA(p) · GW(p)

Some of these are good enough questions that I am just raising an eyebrow, nodding, and hoping someone writes them up.

A few miscellaneous thoughts on the rest, which seem more tractable:

Are there cheap and easy ways to kill fish quickly?

Maybe you're already aware of ikejime and have concluded that it can't be cheaply scaled, but in case you haven't, check it out.

Figure out how to put to good use some greater proportion of the approximately 1 Billion recent college grads who want to work at an "EA org"

This might look like a collective of independent-ish researchers?

Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.

For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?

Is this different from GiveWell because GiveWell doesn't try to estimate, like, the nth-order effects of AMF? I think I'm convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.

(As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?

I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.

(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)

What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?

An early and low-confidence guess: political careers that begin outside the NEC or California.

Replies from: kei, aaronb50
comment by Kei (kei) · 2022-07-10T16:15:26.106Z · EA(p) · GW(p)

Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.

You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don't have much experience in as long as they are put on teams with the right structure.

Projects at these firms are typically led by a partner and engagement manager who are fairly familiar with the subject area at hand. Actual execution and research is mostly done by lower-level consultants, who typically have little background in the relevant subject area.

Some high-level points on how these teams work:

• The team leads formulate a structure for what specific tasks need to be done to make progress on the project
• There is a lot of hand-holding and specific direction of lower-level consultants, at least until they prove they can do more substantial tasks on their own
• There are regular check-ins and regular deliverables to ensure people are on the right track and to switch course if necessary
Replies from: levin
comment by levin · 2022-07-10T17:04:19.483Z · EA(p) · GW(p)

Good points, thanks!

comment by Aaron Bergman (aaronb50) · 2022-07-10T01:43:57.700Z · EA(p) · GW(p)

Maybe you're already aware of ikejime and have concluded that it can't be cheaply scaled, but in case you haven't, check it out.

Yeah, I consider that the best-case slaughter method and regret that it seems so labor intensive, but it seems like there might be other, less bad methods than the current status quo.

Is this different from GiveWell because GiveWell doesn't try to estimate, like, the nth-order effects of AMF? I think I'm convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.

Sure, I think 4th+ order effects are likely impossible to model, but 2nd and maybe even 3rd not so much. I'd bet (though far from certain) you could get a well-identified study for the causal effect of e.g. malaria nets on total life years lived/population/pop growth in a certain geographic region, at least for some period of time

(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)

Strong +1 on this, would be a super interesting and productive post IMO!

comment by lincolnq · 2022-07-19T18:22:02.500Z · EA(p) · GW(p)

Is there a $100B idea with a 1% chance of working?

Coming from the startup world: it's pretty unlikely you will find great startups by thinking from this angle. Why? First, entrepreneurship appears to work much better when you don't over-index on the "what if it works?" storyline too early, as it causes people to dig a hole that's "broad and shallow" (which causes your feedback loops to suck, which causes you to fail to make progress, get demotivated, and quit). Second, a ton of other people are trying to find ideas with similar chances of success (competitors only matter early on in a huge market, but an idea of this value must be in a huge market).
