Thomas Kwa's Shortform

post by Thomas Kwa (tkwa) · 2020-09-23T19:25:09.159Z · EA · GW · 15 comments


comment by Thomas Kwa (tkwa) · 2020-09-23T19:25:09.517Z · EA(p) · GW(p)

I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:

  • There is no single "conventional morality"; it seems very difficult to compile a list of what every human culture thinks of as good, and it's not obvious how one would form a "weighted average" between these.
  • Most people don't think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. the cost of saving lives in the developing world) or be absurd (e.g. placing higher moral weight on beings that are physically closer to you).
  • Human cultures have gone through millennia of cultural evolution, such that values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says "each age gets the thought it needs".

However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with philosophers to cross-reference between these while fixing inconsistencies and removing values that seem to have an "unfair" competitive edge in the battle between ideas (whatever that means!).

The potential payoff seems huge, as it would expand the basis of EA moral reasoning from the intuitions of a tiny fraction of humanity to that of thousands of human cultures, and allow us to be more confident about our actions. Is there a reason this isn't being done? Is it just too expensive?

comment by David_Moss · 2020-09-25T14:15:03.092Z · EA(p) · GW(p)

Thanks for writing this.

I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post [EA · GW] for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsically or instrumentally, then this sort of research seems entirely vital.

I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, not merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).

That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields, a lot of which is ongoing across multiple disciplines, though much of it is in its early stages. For example, there is much work in moral psychology, which tries to understand what people believe, and why, at different levels (influential paradigms include Haidt's Moral Foundations Theory and Oliver Scott Curry's Morality as Cooperation / Moral Molecules theory); there is a whole new field of sociology of morality (see also here); anthropology of morality is a long-standing field; and experimental philosophy has just started to empirically examine how people think about morality too.

Unfortunately, I think our understanding of folk morality remains exceptionally unclear and in its very early stages. For example, despite a much-touted "new synthesis" between different disciplines and approaches, there remains much distance between them, to the extent that people in psychology, sociology and anthropology are not even investigating the same questions more than 90% of the time. Similarly, experimental philosophy of morality seems utterly crippled by validity issues (see my recent paper with Lance Bush here). There is also, I have argued, a need to gather qualitative data, in part due to the limitations of survey methodology for understanding people's moral views, which is something experimental philosophy and most of psychology have essentially not started to do at all.

I would also note that there is already cross-cultural moral research on various questions, but it is usually limited to fairly narrow paradigms: for example, aside from those I mentioned above, the World Values Survey's focus on Traditional/Secular-Rational and Survival/Self-expressive values, research on the trolley problem (which also dominates the rest of moral psychology), or the Schwartz Values Survey. So these lines of research don't really give us insight into people's moral thinking in different cultures as a whole.

I think the complexity and ambition involved in measuring folk morality becomes even clearer when we consider what is involved in studying specific moral issues. For example, see Jason Schukraft's discussion of how we might investigate how much moral weight [EA · GW] the folk ascribe to the experiences of animals of different species.

There are lots of other possible complications with cross-cultural moral research. For example, there is some anthropological evidence that the Western concept of morality is idiosyncratic and does not overlap particularly neatly with other cultures' concepts (see here).

So I think, given this, the problem is not simply that it's "too expensive", as we might say of a really large survey, but that it would be a huge endeavour where we're not even really clear about much of the relevant theory and categories. Also, training a significant number of EA anthropologists who are competent in ethnography and the relevant moral philosophy would be quite a logistical challenge.

---

That said, I think there are plenty of more tractable research projects one could do roughly within this area. For example, more large-scale representative surveys examining people's views and their predictors across a wider variety of issues relevant to effective altruism/prioritisation would be relatively easy to do with a budget of <$10,000, by existing EA researchers. This would also potentially contribute to understanding influences on the prioritisation of EAs, rather than just what non-EAs think, which would also plausibly be valuable.

comment by Denise_Melchin · 2020-09-30T17:15:13.261Z · EA(p) · GW(p)

Strong upvoted. Thank you so much for providing further resources, extremely helpful, downloading them all on my Kindle now!

comment by Denise_Melchin · 2020-09-23T20:25:23.538Z · EA(p) · GW(p)

I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.

My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.

My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.

If you would like to set up a call sometime to discuss further, please PM!

comment by Ozzie Gooen (oagr) · 2020-09-24T09:06:43.896Z · EA(p) · GW(p)

First, neat idea, and thanks for suggesting it!

"Is there a reason this isn't being done? Is it just too expensive?"

From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:

1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say you won't have to fire your staff in 3 years?

On doing anthropology, I personally think there might be lower-hanging fruit in first engaging with other written moral systems we haven't yet engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.

comment by evelynciara · 2020-09-24T04:30:56.244Z · EA(p) · GW(p)

I agree - I'm especially worried that focusing too much on longtermism will make us seem out of touch with the rest of humanity, relative to other schools of EA thought. I would support conducting a public opinion poll to learn about people's moral beliefs, particularly how important and practical they believe focusing on the long-term future would be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.

comment by Thomas Kwa (tkwa) · 2020-10-10T19:01:57.021Z · EA(p) · GW(p)

Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.

I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.

edit: DAF = donor-advised fund

comment by MichaelDickens · 2020-10-11T03:39:30.602Z · EA(p) · GW(p)

Probably the easiest way to do this is to give to a donor-advised fund, and then instruct the fund to give to the EA Fund. Even for charities that can accept stock, my experience has been that donating through a donor-advised fund is much easier (it requires less paperwork).

comment by Thomas Kwa (tkwa) · 2020-10-11T04:33:25.441Z · EA(p) · GW(p)

To clarify, you mean a donor-advised fund I have an account with (say Fidelity, Vanguard, etc.) which I manage myself?

comment by MichaelDickens · 2020-10-11T18:41:00.324Z · EA(p) · GW(p)

Yes

comment by Thomas Kwa (tkwa) · 2020-10-16T19:46:00.720Z · EA(p) · GW(p)

Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.

comment by Larks · 2020-10-17T03:12:01.398Z · EA(p) · GW(p)

Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.

comment by Thomas Kwa (tkwa) · 2020-10-17T03:28:03.805Z · EA(p) · GW(p)

Hmm, that's what I suspected. Maybe it's possible to estimate anyway, though. A quick-and-dirty method would be to identify the most effective interventions a large charity has, assume the rest follow a power law, take the average, and add error bars upwards for the possibility that we underestimated an intervention's effectiveness?
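To make that concrete, here is a minimal sketch of the arithmetic. All effectiveness figures, the number of interventions, the power-law fit, and the 2x upward allowance are made-up placeholders, not anything a real charity has published:

```python
import numpy as np

# All numbers below are hypothetical placeholders, not real charity data.

# Suppose we have rough cost-effectiveness estimates (say, DALYs averted
# per $1,000) for the charity's three most effective interventions...
top_effectiveness = np.array([30.0, 12.0, 7.0])

# ...and the charity runs n interventions in total.
n_interventions = 20
ranks = np.arange(1, n_interventions + 1)

# Assume effectiveness by rank follows a power law, fitted to the known top ranks:
# log(effectiveness) = log(c) + slope * log(rank), with slope < 0.
known_ranks = np.arange(1, len(top_effectiveness) + 1)
slope, log_c = np.polyfit(np.log(known_ranks), np.log(top_effectiveness), 1)
estimated = np.exp(log_c) * ranks ** slope

# Keep the directly estimated values where we have them.
estimated[: len(top_effectiveness)] = top_effectiveness

# Simple average, plus an upward error bar allowing that each extrapolated
# intervention might be (say) twice as effective as the power law suggests.
optimistic = estimated.copy()
optimistic[len(top_effectiveness):] *= 2.0

print(f"central estimate: {estimated.mean():.1f} per $1,000")
print(f"upper estimate:   {optimistic.mean():.1f} per $1,000")
```

If per-intervention spending were known, a budget-weighted average would be more meaningful than the simple mean used here.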

comment by Jorgen_Ljones · 2020-10-22T16:37:10.291Z · EA(p) · GW(p)

One argument against the effectiveness of mega charities that do a bunch of different, unrelated interventions is that, by the Central Limit Theorem (https://en.m.wikipedia.org/wiki/Central_limit_theorem), the average effectiveness of a large sample of interventions is a priori more likely to be close to the population mean effectiveness, that is, the mean effectiveness of all relevant interventions. In other words, it's hard to be one of the very best if you are doing lots of different stuff. Even if some of the interventions you run are really effective, your average effectiveness will be dragged down by the others.
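A quick simulation makes the point vivid. The lognormal parameters and the 20-intervention figure are arbitrary choices for illustration, not estimates of the real distribution of charity effectiveness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a heavy-tailed "population" of intervention effectiveness values.
# The lognormal parameters are arbitrary, chosen only for illustration.
population = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

# A focused charity's effectiveness is a single draw from this population.
focused = rng.choice(population, size=10_000)

# A diversified charity running 20 unrelated interventions has effectiveness
# equal to the average of 20 draws, which the CLT pulls toward the mean.
diversified = rng.choice(population, size=(10_000, 20)).mean(axis=1)

print(f"population mean:              {population.mean():.1f}")
print(f"99th percentile, focused:     {np.quantile(focused, 0.99):.1f}")
print(f"99th percentile, diversified: {np.quantile(diversified, 0.99):.1f}")
```

Both kinds of charity have the same expected effectiveness in this setup, but the diversified averages cluster near the population mean, so almost none of them land in the far right tail where the very best focused charities sit.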