Is EA unscalable central planning?

post by Nathan Young (nathan) · 2019-05-07T07:25:07.387Z · score: 9 (6 votes) · EA · GW · 15 comments

Contents

  Effective Altruism must exist due to market failure
  Effective Altruism does not primarily focus on correcting that market failure
  80000 hours is basically central planning and is unsustainable as the movement grows
15 comments

I was discussing EA with a friend the other night and they made some criticisms I couldn’t answer. I think this is usually illustrative. I’ll attempt to structure the discussion:

Effective Altruism must exist due to market failure

The market is the least bad allocator of resources we have created. I agree that the failure to price in x-risks and s-risks (existential and suffering risks) is a market failure. Companies will make no money in a future where we are all dead, and consumers would pay a lot to avoid significant suffering, yet companies aren't investing. While we let this continue, we are subsidising catastrophe.

Effective Altruism does not primarily focus on correcting that market failure

While lobbying government to consider x- and s-risks is an aspect of EA, it is not (as far as I can tell) the primary focus, nor is it what most people seem to spend their time doing. In other ideologies it might be reasonable to say we are doing what we can whilst carrying on, but since we are about finding the most effective way to do things, if this were the most important thing to do, we should all do it. We should found (or convince) a political party and either campaign ourselves or pay others to. Why don't we?

80000 hours is basically central planning and is unsustainable as the movement grows

If people choose work based on 80,000 Hours' advice rather than their own desires/market incentives, this is a break from market allocation and a step back towards central planning, which has always performed worse in the past. Why is it better here?

Likewise, the 80,000 Hours advice isn't scalable. If 10% of the workforce were reading 80,000 Hours, it would make much more sense to change government than to tell each individual which job they ought to be doing. At that point, rather than saying careers should mainly be in AI and biorisk, you'd be thinking about how to shift the whole economy, which the market does more effectively than central planning. Rather than advising individuals, you'd work at the level of legislation (to correct externalities).

So why is there a goldilocks zone where it makes sense to tell individuals to change their lives, when elsewhere they should all lobby government? Why are we working to do something we don't intend to do indefinitely? I can't help thinking that much of EA is a stopgap right now to demonstrate our legitimacy, so that we can convince others to join and eventually move to our real purpose - wholesale legislative change. If that's the case we should be honest about it. If it isn't, where is this argument wrong?

15 comments

Comments sorted by top scores.

comment by Ben_Kuhn · 2019-05-07T12:07:52.083Z · score: 31 (10 votes) · EA · GW

What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from 0.01% of the population!

In general, we should do things that don't scale but are optimal right now, rather than things that do scale but aren't optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.

comment by aarongertler · 2019-05-10T07:06:09.653Z · score: 2 (1 votes) · EA · GW

Also, we're very far from a world where even most people in EA choose careers based on 80K's advice.

I'd guess that among EA community members with "direct work" jobs, many or even most of them mostly used their own judgment to evaluate which career path would optimize their impact. (If "optimizing impact" was even their goal, that is; many of us chose jobs partly or mostly based on things like "personal interest" and "who we'd get to work with" rather than 100% "what will help most".)

And of course, most members don't have "direct work" jobs; they just donate and/or discuss EA while working in positions that 80K doesn't recommend anymore (or never did), because they found the jobs before they found EA or because they don't take 80K recommendations seriously enough to want to switch jobs (or any of a dozen other reasons).

comment by Nathan Young (nathan) · 2019-05-07T12:21:34.984Z · score: 1 (1 votes) · EA · GW

Thanks :)

Do we acknowledge our activities will change as we grow? Are we transparent about our mission?

comment by kbog · 2019-05-08T03:55:23.476Z · score: 10 (3 votes) · EA · GW

EA has always acknowledged that the specific choice of activities, charities, etc is contingent upon social and scientific realities. So it's implicitly clear that our activities can change as we grow.

comment by Milan_Griffes · 2019-05-07T22:28:20.189Z · score: 10 (4 votes) · EA · GW
Do we acknowledge our activities will change as we grow? Are we transparent about our mission?

There aren't monolithic answers to these questions.

EA is a broad coalition of individuals & organizations; different folks have different worldviews, missions, and communication preferences.

comment by anonymous_ea · 2019-05-08T16:19:18.516Z · score: 9 (3 votes) · EA · GW

EA activities have historically changed over time. EA growth itself is much less prioritized now than it was a few years ago. The importance of money, funding, and earning to give has changed over time. There have been several posts about this over the years - "funding constrained" might be a good keyword to search for.

I think the core mission of doing the most good has always stayed the same and probably always will. The cause areas EA focuses on most have changed to some extent over the years. Most importantly, longtermism and far-future concerns have become more prominent over time among EA orgs and prominent EAs, but much less so among more casual EAs.

80000 Hours is perhaps the most prominent example of an organization whose activities, thinking, and priorities have changed over time. Some of this should be visible from reading some of their older content.

comment by Nathan Young (nathan) · 2019-05-08T17:38:13.104Z · score: 3 (2 votes) · EA · GW

I suppose I don't understand why the aim isn't to grow the movement more to eventually influence legislation.

Likewise, if that will one day be the aim, at what point will the switch come?

comment by aarongertler · 2019-05-10T07:00:46.672Z · score: 3 (2 votes) · EA · GW

Is there a particular article or statement from an organization that made you think influencing legislation isn't one of the movement's aims?

In the last year or two, there's been a lot more focus within EA on influencing policy [EA · GW], at least in areas thought to be especially impactful. It's helped that some organizations within the movement have gradually become more experienced and credible, with more connections in the political sphere. I don't see any reason that this focus wouldn't continue to increase as we build our ability to succeed in this area.

As far as "grow the movement more", that's a tough question, and it's been the subject of debate for many years. Growth has many obvious upsides, but also some downsides. For example, if many new people join, it can be hard to transmit ideas in a high-fidelity way, and EA's focus/philosophy may drift as a result. Also, some [EA · GW] people [EA · GW] have argued that EA organizations currently struggle to provide enough resources/opportunities to current community members; adding a lot of new people without being selective might not let us actually give these people very much to do.

(I work for CEA, but these views are my own.)

comment by Nathan Young (nathan) · 2019-05-10T08:37:31.089Z · score: 3 (2 votes) · EA · GW
Is there a particular article or statement from an organization that made you think influencing legislation isn't one of the movement's aims?

I suppose from what I've read I get the sense it's mainly about careers and philanthropy rather than lobbying/activism, though that may be a case of what you describe later. Also, @anonymous_ea's post does suggest this idea:

EA growth itself is much less prioritized now than it was a few years ago.

Thanks for your time, I'll look into the influencing policy stuff.

comment by Larks · 2019-05-08T01:02:17.152Z · score: 10 (5 votes) · EA · GW

Thanks, I thought you (or your friend) had some interesting points.

With regard to the '80k is a central planner' point, I think it's important to bear in mind that economists don't object to planning per se. It is good that individual firms do their own planning; what matters is that:

  • Firms have the right incentives.
  • The price mechanism provides signals between firms, that also aid intra-firm decision making via shadow pricing.
  • Planning occurs at the right scale - firms are under optimization pressure to be neither too small nor too large.

All of which are of course largely absent from the charity sector.

I think 80k is not subject to this critique insofar as they direct a relatively small fraction of total resources. They're more similar to a single firm which has a view on an under-addressed market niche than to a socialist planner trying to solve for everything in general equilibrium. Firms often pursue plans without direct price signals (e.g. developing a new product for which no market or price currently exists), and I would analogize 80k to this in some regards.

Where I do think you could make this criticism is with regard to the 'planning' of the EA movement. To the extent you think that they have over-emphasized applying to EA groups [EA · GW] (or at least failed to communicate their nuance on the issue adequately), this looks like a classic case of central planners massively over-producing one good and under-producing another, with no price mechanism to equilibrate supply and demand.

comment by RobBensinger · 2019-05-08T03:00:45.550Z · score: 6 (4 votes) · EA · GW

Small terminology note: an "existential risk" is anything that would drastically reduce the future's value, so s-risk is a special case of x-risk.

comment by RomeoStevens · 2019-05-08T01:44:02.901Z · score: 5 (3 votes) · EA · GW

10% or even 100% of the population having better data-driven methods for weighing career decisions is very much in everyone's interests, even if the low-hanging fruit lies more in the early days.

comment by kbog · 2019-05-08T03:53:53.413Z · score: 4 (2 votes) · EA · GW

While it is an aspect of EA to lobby government to consider x and s-risk, it is not (as far as I can tell) the primary focus, nor is it what most people seem to spend their time doing. In other ideologies it might be reasonable to say we are doing what we can whilst carrying on, but since we are about finding the most effective way to do things, if this were the most important thing to do, we should all do it. We should found or convince a political party and either campaign or pay others to. Why don’t we?

Well in America, perhaps the main reason is that the implicit two-party system makes this intractable. Even in multiparty countries, I suspect EA is not yet large enough to get any power. Eventually, however, this could be a nice thing to add to the EA ecosystem.

If people choose work based on 80k hours advice rather than their own desires/market incentives, this makes a break from market allocation and backtracks towards central planning which has always been worse in the past. Why is it better here?
  • EA is acting on the margin with a small number of actors, not the whole economy.
  • EAs are individually deciding which jobs will have the highest social impact; they are not being commanded.
  • When comparing different jobs in a free market, the disparities in social impact are greater than the disparities in contributions to economic growth. Therefore it is easier to improve upon the market's social impact than it is to improve upon the market's economic growth.
Likewise the 80k hours advice isn’t scalable. If 10% of the workforce was reading 80k hours, it would make much more sense to change government than to tell each individual which job they ought to be doing.

The same thing can be said about career advice that is given by other websites. If 10% of the workforce read Mergers and Inquisitions and tried to become an investment banker, then trying to become an investment banker wouldn't make sense anymore. But it would be silly to complain about Mergers and Inquisitions that way.

I can’t help but think it seems as if much of EA is a stopgap right now to demonstrate our legitimacy so that we can convince others to join and eventually move to our real purpose - wholescale legislative change. If that’s the case we should be honest about it.

EA doesn't have a "real purpose", it just does whatever works best. Legislative change is something we may add to our toolkit; that doesn't mean we would abandon other things like charitable donations.

comment by ishi · 2019-05-08T15:59:15.180Z · score: 3 (2 votes) · EA · GW

I think in a sense choosing a name or 'brand' - such as EA or 80,000 hours - is a form of central planning, just as a business usually involves central planning within its 'microenvironment'. But names, brands, religions, and businesses all exist in a 'sea', or larger environment, of others.

In a theoretical sense I think everyone on earth could join EA no matter their situation. Biologists sort of see the world this way--but recognize it may be more like IEA (ineffective altruism) or SEI (sub- or semi-optimal altruism).

I wonder if 80,000 hours has any high-impact careers for 'nonconformists' - e.g. comedians like Dave Chappelle or writers like Mark Twain. I heard Ukraine just elected a TV comedian as president - seems like a high-impact career, though I doubt he identifies as an EA.

I can imagine a high-impact career aligned with EA as a 'central planner'. (I think Stalin, Mao, and maybe Hitler tried that, with mixed results. The German and Chinese economies I hear are doing pretty well, though they started on an uneven playing field; Russia seems to be a mixed bag, especially outside of Moscow.)

comment by Nathan Young (nathan) · 2019-05-10T10:26:20.701Z · score: 1 (1 votes) · EA · GW

A core question for me is still: "Is EA's main aim to grow to affect government policy?" This would let us deal with the problems EA organisations work on at the level of incentives, such that non-EAs would be properly motivated to solve problems that affect all our wellbeing.

In that sense, correcting an externality is better than lobbying firms/consumers to ignore it (which is roughly what we currently do). Am I wrong here? If growth isn't EA's main aim, why not? Something doesn't add up.

I suppose the best answer I can expect is "we don't know that's more effective" - thanks to Aaron, who showed me how GiveWell is starting to look at [EA · GW] this [EA · GW]. But at some scale this will stop being true: if EA had 51% support then we could just vote through the measures we wanted (with some ethical nuances).

So the secondary question is: do we have any idea when this shift from lobbying individuals to lobbying/participating in government ought to take place? How many EAs should there be in a country before they make a concerted effort to lobby directly? That seems a fairly crucial detail.