Posts

Effective Altruism and Free Riding 2020-03-27T19:12:54.844Z
Is Neglectedness a Strong Predictor of Marginal Impact? 2018-11-09T15:34:55.272Z

Comments

Comment by sbehmer on Introducing LEEP: Lead Exposure Elimination Project · 2020-10-22T22:35:46.897Z · EA · GW

This looks great, and thanks for posting! One question: why haven't the other organizations working in this space, which as you note have a track record of success in other countries, expanded to countries like Malawi? In other words, why is lead exposure reduction in Malawi neglected by other actors?

Comment by sbehmer on Effective Altruism and Free Riding · 2020-08-29T16:30:35.179Z · EA · GW

Thanks, this is a very good comment. I mostly cited that article for the literature review, which includes a few papers that argue for a causal connection between learning economics and free-riding. However, I looked into it more today, and it seems like the entire body of work is inconclusive on this question. Here's a more recent literature review on that.

I'll edit that part of the post to be more accurate.

Comment by sbehmer on Common ground for longtermists · 2020-07-30T13:05:52.831Z · EA · GW

Thanks for the post. One question on the background: is there any data (from the EA survey or elsewhere) about the percentage of EAs who lean towards suffering-focused ethics?

Comment by sbehmer on John Halstead: Is impact investing impactful? · 2020-06-30T21:13:53.381Z · EA · GW

Thanks for the talk and the report. I think it's a very interesting topic and an important one to work on, given how many socially-minded people seem to care about impact investing.

I have a few more questions in addition to the one about perfectly elastic demand curves:

1. You note that if public markets are efficient, then it will take nearly the entire population of investors to divest for the divestment movement to impact stock prices. This seems to make sense: it only takes a small group of socially-neutral investors to drastically increase their investments in the bad company in response to divestment from others. However, if we consider a movement to increase investment in a socially-good company, it seems like this idea doesn't apply. Let's say that the good company makes up 0.001% of the total stock market. It seems like if 0.001% of investors are willing to accept lower returns for investing in that company, then they should be able to fund the company all on their own. In equilibrium no socially-neutral investors will hold that company's stock, and the stock would yield lower returns than socially-neutral stocks (see the numerical sketch after question 2). So perhaps movements which promote investment in good companies are more likely to succeed than divestment movements are.

2. From your research it looks like the current ESG ratings are very low-quality. Given how big of a market impact investing is, do you think that there would be value in trying to improve those ratings?
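A numerical sketch of the asymmetry in question 1 (my own illustrative numbers, not from the report): let total market wealth be $W$ and the good company's market cap be $0.001\%$ of it, i.e. $10^{-5}W$. If impact investors controlling at least $10^{-5}W$ of wealth are willing to hold only that stock, their demand alone covers the entire float:

$$\underbrace{10^{-5}W}_{\text{impact-investor demand}} \;\ge\; \underbrace{10^{-5}W}_{\text{company float}},$$

so in equilibrium socially-neutral investors hold none of it, and (absent short-selling) they cannot scale their holdings below zero to arbitrage the lower expected return away. Divestment is different: neutral investors can always scale up their holdings of the bad company to absorb whatever is sold.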

Comment by sbehmer on John Halstead: Is impact investing impactful? · 2020-06-30T20:50:35.353Z · EA · GW

Thanks for the reply.

You're right that the paper I posted doesn't present direct evidence. I just thought it was noteworthy that in their literature review they claim that prior studies show demand curves are not perfectly elastic (at least in theory; they aren't citing empirical papers).

On the empirical side, I'm surprised to hear you say that there seems to be agreement that long-run demand curves are perfectly elastic. On page 18 of the Founders Pledge report, you seem to say that there is expert disagreement on this, and you cite multiple recent studies on both sides of the issue. Has more evidence come out since the report was published?

Comment by sbehmer on John Halstead: Is impact investing impactful? · 2020-06-30T15:22:29.977Z · EA · GW

It seems like you are fairly confident from your research that impact investing will tend to have little impact in publicly traded markets. I briefly looked into the theoretical literature on this, and I'm not seeing why we should be so confident in that idea. For example, this paper from 2019 claims:

"In general, systematic screening of assets based on investors’ preferences leads to a return premium on the screened assets, in equilibrium, and such return differences cannot be arbitraged away by 'neutral' investors".

They then cite four theoretical papers in support of that claim (note: I haven't actually read through these papers. I just glanced at the introductions and the setups of their models. It could be that these are bad papers).

Were you aware of this literature when writing your report? Why should we be so confident in the arbitrage argument?

Comment by sbehmer on Effective Altruism and Free Riding · 2020-05-19T16:01:16.413Z · EA · GW

Thanks for the comment. If differences in careful thinking are the main source of differences in people's altruistic behavior, and those differences can easily be eliminated by informing people about the benefits of thinking carefully, then I agree that the ideas in this post are not very important.

The reason the second part is relevant is that, as long as these differences in careful thinking persist, it's as if people have differences in values (this is the same as what I said in the essay about how there are a lot of differences in beliefs within the EA community which lead to different valuations of causes, even when people's moral values are identical). If these differences in careful thinking were easy to eliminate, then we should be prioritizing informing the entire world about their mistakes ASAP, so that any differences in altruistic priorities would be eliminated. Unfortunately, I don't think these differences are easy to eliminate (I think that's partially why the EA community has moved away from advocacy).

I also would disagree that differences in careful thinking are the main source of disagreements in people's altruistic behavior. Even within the EA community, where I think most people think very carefully, there are large differences in people's valuations of causes, as I mentioned in the post. I expect that the situation would be similar if the entire world started "thinking more carefully".

Comment by sbehmer on Reducing long-term risks from malevolent actors · 2020-05-05T15:29:52.866Z · EA · GW

Thanks, it's a very nice article on an important topic. If you're interested, there's a small literature in political economy called "political selection" (here's an older survey article). As far as I know they don't focus specifically on the extreme lower tail of bad leaders, but they do discuss how different institutional features can lead to different types of people gaining power.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-04-02T15:11:15.319Z · EA · GW

First, the only strong claim that I'm trying to make in the post is that the standard EA advice in this setting is to free-ride. Free-riding is not necessarily irrational or immoral. In the section "Working to not Destroy Cooperation" I argue that it's possible that this sort of free-riding will make the world worse, but that is more speculative.

As far as who the other players are in the climate change example, I was thinking of it as basically everyone else in the world who has some interest in preventing climate change, but the most important players are those who have or could potentially have a large impact on climate change and other important problems. This takes the form of a many-player public goods game, which is similar conceptually to a prisoner's dilemma. While I do think it's unlikely that everyone who has contributed to fighting climate change will collectively decide "let's not help EA with their goals", I think it's possible that if EA has success with its current strategy, some people will choose to adopt the methodology of EA. This could lead them to contribute to causes which are neglected according to their value systems but which most people currently in EA find less important than climate change (causes like philanthropy in their local communities, or near-term conservation work, or spreading their religion, or some bizarre thing that they think is important but no one else does). So, in that way, free-riding by EA could lead others to free-ride, which could make us all worse off.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-31T17:25:54.128Z · EA · GW
"I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?"

The more general model captured the idea that there are almost always gains from cooperation between those looking to do good. It doesn't show, however, that those gains are necessarily large relative to the costs of building cooperation (including opportunity costs). I'm not sure what the answer is to that.

Here's one line of reasoning which makes me think the net gains from cooperation may be large. Setting aside the possibility that everyone has near identical valuations of causes, I think we're left with two likely scenarios:

1. There's enough overlap in valuations of direct work to create significant gains from compromise on direct work (maybe on the order of doubling each person's impact). This is like example A in the post.

2. Valuations of direct work are so far apart (everyone thinks that their cause area is 100x more valuable than others) that we're nearly in the situation from example D, and there will be relatively small gains from building cooperation on direct work. However, this creates opportunities for huge externalities due to advocacy, which means that the actual setting is closer to example B. Intuition: if you think x-risk mitigation is orders of magnitude more important than global poverty, then an intervention which persuades someone to switch from working on global poverty to x-risk will also have massive gains (and massively negative impact from the perspective of the person who strongly prefers global poverty). I don't think this is a minor concern. It seems like a lot of resources get wasted in politics due to people with nearly orthogonal value systems fighting each other through persuasion and other means.

So, in either case, it seems like the gains from cooperation are large.
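To make the advocacy externality in scenario 2 concrete, here's a stylized two-funder game (the payoff numbers are purely illustrative, not from the post). Funder A cares only about cause X and funder B only about cause Y; each has one unit of resources. "Direct" adds 1 to one's own cause; "Advocate" shifts 2 units of third-party funding toward one's own cause and away from the other's, and the two advocacy campaigns cancel out when both advocate. Payoffs are (A, B):

|                 | B: Direct | B: Advocate |
|-----------------|-----------|-------------|
| **A: Direct**   | (1, 1)    | (−1, 2)     |
| **A: Advocate** | (2, −1)   | (0, 0)      |

Advocating strictly dominates for both funders, so the equilibrium is (Advocate, Advocate) with payoffs (0, 0), even though (Direct, Direct) yields (1, 1): the persuasion efforts offset each other and the resources are simply burned.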

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-31T17:20:10.949Z · EA · GW
"I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation."

For now, I don't think any major changes in decisions should be made based on this. We don't know enough about how difficult it would be to build cooperation and what the gains to cooperation would be. I guess the only concrete recommendation may be to more strongly emphasize the "not being a jerk" part of effective altruism (especially because that can often be in major conflict with the "maximize impact" part). Also I would argue that there's a chance that cooperation could be very important and so it's worth researching more.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-30T20:25:11.073Z · EA · GW

One more example to add here of a cause which may be like a "public good" within the EA community: promoting international cooperation. Many important causes are global public goods (that is, causes which benefit the whole world and thus any one nation has an incentive to free-ride on other nations' contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety do the same.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-30T19:58:34.732Z · EA · GW

I agree with your intuition about what a "cooperative" cause prioritization might look like, although I do think a lot more work would need to be done to formalize this. I also think it may not make sense to use cooperative cause prioritization: if everyone else always acts non-cooperatively, you should too.

I'm actually pretty skeptical of the idea that EA tends to fund causes which are widely valued by people as a whole. It could be true, but it seems like it would be a very convenient coincidence. EA seems to be made up of people with pretty unusual value systems (this, I'd expect, is partly what leads EAs to view some causes as being orders of magnitude more important than the causes that other people choose to fund). It would be surprising if optimizing independently for the average EA value system led to the same funding choices as optimizing for some combination of the value systems in the general population. While I agree that global poverty work seems to be pretty broadly valued (many governments and international organizations are devoted to it), I'm unsure about things like x-risk reduction. Have you seen any evidence that it is broadly popular? Does the UN have an initiative on x-risk?

I would imagine that work which improves institutions is one cause area which would look significantly more important in the cooperative framework. As I mention in the post, governments are one of the main ways that groups of people solve collective action problems, so improving their functioning would probably benefit most value systems. This would involve improving both formal institutions (constitutions) and informal institutions (civic social norms). In the cooperative equilibrium, we could all be made better off because people of all different value systems would put a significant amount of resources towards building and maintaining strong institutions.

A (tentative) response to your second to last paragraph: the preferences of animals and future generations would probably not be directly considered when constructing the cooperative world portfolio. Gains from cooperation come from people who have control over resources working together so that they're better off than in the case where they independently spend their resources. Animals do not control any resources, so there are no gains from cooperating with them. Just like in the non-cooperative case, the preferences of animals will only be reflected indirectly due to people who care about animals (just to be clear: I do think that we should care about animals and future people). I expect this is mostly true of future generations as well, but maybe there is some room for inter-temporal cooperation.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-30T15:22:36.902Z · EA · GW

Thanks a lot for the comment. Here are a few points:

1. You're right that the simple climate change example won't always be a prisoner's dilemma. However, I think that's mostly due to the fact that I assumed constant returns to scale for all three causes. At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a Pareto improvement is possible through cooperation (unless I'm making a mistake in the proof, which is possible; see the numerical sketch below). So I think the existence of collective action problems is more general than the climate change example would make it seem.

2. It's a very nice point that the gains from cooperation may be small in magnitude, even if they're positive. That is definitely possible. But I'm a little skeptical that large valuation differences between the 4 'schools' of EA donors mean that the gains from cooperation are likely to be small. I think even within those schools there are significant disagreements about the value of causes. For example, within the long-termist school, disagreements on whether we're living in an extremely influential time or on how to value population increases can lead to very large disagreements in the valuation of causes. Also, when people have very large differences in valuations of direct causes, the opportunity for conflict on the advocacy front seems to increase (see Phil Trammell's post here).
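Here's the minimal numerical sketch promised above — my own toy version of the three-cause, log-returns setup, with made-up weights, not the exact model from the write-up. Two funders each have a unit budget, cause i produces log(total funding to i), and utilities are weighted sums of cause outputs:

```python
import numpy as np

def best_response(w, other, budget=1.0):
    # Water-filling: given the other funder's giving `other`, choose own
    # spending to equalize w_i / total_i across the causes one funds.
    lo, hi = 1e-9, 1e9
    for _ in range(200):  # geometric bisection on the shadow price
        lam = (lo * hi) ** 0.5
        spend = np.maximum(w / lam - other, 0.0)
        if spend.sum() > budget:
            lo = lam
        else:
            hi = lam
    return spend

a = np.array([0.6, 0.3, 0.1])  # funder A's weights on causes 1-3 (made up)
b = np.array([0.1, 0.3, 0.6])  # funder B's weights (made up)

# Non-cooperative outcome: iterate best responses to an approximate Nash.
x = np.full(3, 1/3)
y = np.full(3, 1/3)
for _ in range(100):
    x = best_response(a, y)
    y = best_response(b, x)
nash = x + y

# Cooperative benchmark: pool both budgets, split in proportion to a + b.
coop = a + b

def utility(w, totals):
    # Cause i yields log(total funding); utility is the weighted sum.
    return float(w @ np.log(totals))

print("totals  nash:", nash.round(2), " coop:", coop.round(2))
print("U_A  nash: %.3f  coop: %.3f" % (utility(a, nash), utility(a, coop)))
print("U_B  nash: %.3f  coop: %.3f" % (utility(b, nash), utility(b, coop)))
# Both utilities rise under cooperation: in Nash, the cause both funders
# partially value (cause 2) is underfunded, like an underprovided public good.
```

With these weights, the Nash totals come out to roughly (0.8, 0.4, 0.8) versus (0.7, 0.6, 0.7) under cooperation, and both funders' utilities rise when they cooperate.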


I agree that it would be useful to get more of an idea of when the prisoner's dilemma is likely to be severe. Right now I don't think I have much more to add on that.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-30T14:51:57.054Z · EA · GW

Thanks for the clarification. I apologize for making it sound as if 80k specifically endorsed not cooperating.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-30T14:35:01.445Z · EA · GW

Thanks for the comment. First, I'd like to point out that I think there's a good chance that the collective action problem within EA isn't so bad because, as I mentioned in the post, there has been a fairly large emphasis on cooperating with others within EA. It's when interacting with people outside of EA that I think we're acting non-cooperatively.


However, it's still worth discussing whether there are major unsolved collective action problems within EA. I'll give some possible examples here, but note that I'm very unsure about many of them. First, here are some causes which I think benefit EAs of many different value systems and thus would be underfunded if people were acting non-cooperatively:

1. General infrastructure, including the EA forum, EA Funds, and EA Global. This also would include the mechanisms for cooperation which I mentioned in the post. All of these things are like public goods in that they probably benefit nearly every value system within EA. If true, this also means that the "EA meta fund" may be the most public-good-like of the four EA Funds.

2. The development of informal norms within the community (like being nice, not overstating claims or making misleading arguments, and cooperating with others). The development and maintenance of these norms also seems to be a public good which benefits all value systems.

3. (this is the most speculative one) more long-term oriented approaches to near-term EA cause areas. An example is approaches to global development which involve building better and lasting political institutions (see this forum post). This may represent a kind of compromise between some long-termist EAs (who may normally donate to AI safety) and global development EAs (who would normally donate to short-term development initiatives like AMF).


And here are some causes which I think are viewed as harmful by some value systems and thus would be overfunded if people acted non-cooperatively:

1. Advocacy efforts to convince people to convert from other EA cause areas to your own. As I mentioned in the post, these can be valued negatively by other value systems.

2. Causes which increase (or decrease) the population. Some people disagree on whether creating more lives is on average good or bad (for example, some suffering-focused EAs may think that creating more human lives is on average bad. Conversely, some people may think that creating more farm animal lives is on average good). This means that causes which increase (decrease) the population will be viewed as harmful by those who view population increases (decreases) as bad. Brian Tomasik's example at the end of this post is along those lines.


So, in general, I don't agree that the EA community inherently lacks major collective action problems. It seems more likely that EA has solved most of its internal collective action problems by emphasizing cooperation.

Comment by sbehmer on Effective Altruism and Free Riding · 2020-03-29T17:08:25.997Z · EA · GW

Thanks for that reference! I hadn't come across that before. I think the main difference is that for most of my post I'm considering public goods problems among people who are completely unselfish but have different moral values. But problems also exist when people have identical moral values and some level of selfishness. Paul Christiano's post does a nice job of explaining that case. Milton Friedman also wrote about that problem (specifically, he talked about how poverty alleviation is a public good).

Comment by sbehmer on Differential progress / intellectual progress / technological development · 2020-02-09T18:46:11.629Z · EA · GW

Thanks for the post!

For people especially interested in this topic, it might be useful to know that there's a literature within academic economics that's very similar called "Directed Technical Change". See Acemoglu (2002) for a well-cited reference.

Although that literature has mostly focused on how different technological developments will impact wage inequality, the models used can be applied (I think) to a lot of the topics mentioned in your post.

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-28T15:13:50.592Z · EA · GW

Thanks for the comment. I agree that R&D costs are very important and can lead to increasing marginal returns. The HIV example is a good one, I think.

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-28T15:10:04.375Z · EA · GW

I agree that moving to explicit cost-effectiveness modeling is ideal in many situations. However, the arguments that I gave in the post also apply to the use of neglectedness for initial scoping. If neglectedness is a poor predictor of marginal impact, then it will not be useful for initial scoping.

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-19T00:11:48.210Z · EA · GW

Thanks for the response. I agree that social norms and politics are areas where increasing returns seem likely.

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-19T00:10:06.778Z · EA · GW

Thanks for this comment! The links were helpful. I have a few comments on your points:

" Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation "

After reading the introduction of that article you linked, I'm not sure that it has found evidence of diminishing returns to research, or at least that it has found the kind of diminishing returns that we would care about. They find that the number of researchers required to double GDP (or any other measure of output) has increased over time, but that doesn't mean that the number of researchers required to increase GDP by a fixed amount has increased. In fact, if you take their Moore's law example, we find that the number of transistors added to a computer chip per researcher per year is about 58,000 times larger than it was in the early 70s (it takes 18 times more researchers to double the number of transistors, but that number of transistors is about a million times larger than it was in the 70s). When it comes to research on how to do the most good, I think we would care about research output in levels, rather than in percentage terms (so, I only care how many lives a health intervention would save at time t, rather than how many lives it will save as a percentage of the total number of lives at time t).
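Rough arithmetic behind that 58,000 figure (round numbers, so treat it as approximate): suppose $R$ researchers sufficed to double a transistor count of $N$ in the early 70s, while today it takes about $18R$ researchers to double a count roughly $10^{6}$ times larger. Then transistors added per researcher per doubling changed by a factor of

$$\frac{10^{6}N / (18R)}{N/R} = \frac{10^{6}}{18} \approx 5.6\times 10^{4},$$

so output per researcher in levels rose enormously even as researchers per doubling rose 18-fold.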

" In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns "

I'm struggling to see how the articles you linked find diminishing returns. Is there something I'm missing? The lobbying article says that the effectiveness of lobbying is larger when an issue does not receive much public attention, but that doesn't mean that, for the same issue, the effectiveness of lobbying spending will drop with each dollar spent. Similarly, the campaign finance article mentions studies that find no causal connection between ad spending and winning general elections and others which show a causal connection for primary and local elections. I don't see how this means that my second dollar donated to a campaign will have less expected value than my first dollar.

As antonin_broi mentioned in another comment, political causes seem to have increasing returns built into them. You need a majority to get a law passed or to get someone elected, so under complete certainty there would be zero (marginal) value to convincing people to vote your way until you reach the median voter. After that, there would once again be zero marginal value to buying additional votes.
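As a minimal formalization of that threshold effect (my own stylization): with an electorate of size $2k+1$ and a prize $B$ from winning, the value of securing $n$ votes under complete certainty is a step function,

$$V(n) = B\cdot\mathbf{1}\{n \ge k+1\},$$

so the marginal value of a vote is zero everywhere except at the pivotal $(k+1)$-th vote. Adding uncertainty smooths the step into an S-curve, but the region below the threshold still exhibits increasing returns.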

" In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), and so influencing; this topic has been the subject of a fair amount of EA attention "

I agree that this is important for growing new movements, and I have seen EA articles discuss a sort of "multiplier effect" (if you convince one person to join a group, they will then convince other people). But none of the articles I have seen, including the one that you linked, have mentioned the possibility of increasing returns to scale. Increasing returns would arise if the cost of convincing an additional person to join decreases with the number of people who are already involved. This could arise because of changing social norms or due to increased name recognition.

" historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development "

This brings up one potentially important point: in addition to the scaling effects that you mentioned, another common source of increasing returns is high research and development requirements. High R&D requirements mean that the first units of output are very expensive compared with subsequent units (because in addition to the costs of production, you also have to learn how to produce them). To apply this to an EA topic: if GiveWell didn't exist, then to do a unit of good in global health we would either have to fund less cost-effective charities (because we wouldn't know which one was best) or pay to create GiveWell before donating to its highest-recommended charities. In the second scenario, the cost of producing a unit of good within global health is very high for the first unit and significantly lower for the second. The fact that innovation seems to be one of the more effective forms of philanthropy increases the possibility that we are in a world where increasing returns to scale are relevant to doing good. However, I'm not completely sure of my reasoning here. I may be missing something.

" Experience with successes using neglectedness (which in prioritization practice does involve looking at the reasons for neglect) thus far, at least on dimensions for which feedback has yet arrived "

I think this would be a very important piece of evidence. Can you give me some detail about the successes so far?

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-11T14:36:38.343Z · EA · GW

Yes, PIT_i + u_i is supposed to be the real importance and tractability. If we knew PIT_i + u_i, then we would know a cause area's marginal impact exactly. But instead we only know PIT_i.
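To put that in symbols (my paraphrase of the setup, so treat it as a sketch): marginal impact is some known function $MI_i = g(PIT_i + u_i, N_i)$, and we observe only $(PIT_i, N_i)$. Neglectedness then helps beyond the direct diminishing-returns channel only insofar as $\mathbb{E}[u_i \mid PIT_i, N_i]$ actually varies with $N_i$ — that is, only if the other funders' decisions that determine $N_i$ embed information about $u_i$ that we lack.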

Comment by sbehmer on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-11T05:29:35.239Z · EA · GW

Thanks for the comment. I agree that considering the marginal value of information is important. This may be another source of diminishing marginal total value (where total value = direct impact + value of information). It seems, though, that this is also subject to the same criticism I outline in the post. If other funders also know that neglected causes give more valuable information at the margin, then the link between neglectedness and marginal value will be weakened. The important step, then, is to determine whether other funders are considering the value of information when making decisions. This may vary by context.

Also, could you give me some more justification for why we would expect the value of information to be higher for neglected causes? That doesn't seem obvious to me. I realize that you might learn more by trying new things, but it seems that what you learn would be more valuable if there were a lot of other funders that could act on the new information (so the information would be more valuable in crowded cause areas like climate change).

On your second point: I agree that when you're deciding between causes, and you're confident that other funders of these causes have no significant information that you don't, and you're confident that there are diminishing returns, then we would expect neglectedness to be a good signal of marginal impact. Maybe this is a common situation to be in for EA-type causes, but I'm not so sure. A lot of the causes on 80,000 Hours' page are fairly mainstream (climate change, global development, nuclear security), so a lot of other smart people have thought about them. Alternatively, in cases where we can be confident that other funders are poorly informed or irrational, there's the worry about increasing returns to scale.