EA and the current funding situation

post by William_MacAskill · 2022-05-10T02:26:06.446Z · EA · GW · 193 comments

Contents

  Summary
  Intro
  The current situation
  Risks of commission: causing harm
  Risks of omission: squandering the opportunity 
  An incredible opportunity
  Appendix: how fast should we be scaling funding?

This post gives an overview of how I’m thinking about the “funding in EA” issue, building on many conversations. Although I’m involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk which has a related theme, though with different emphases. As a warning, I’m particularly stretched for time at the moment and might not have capacity to respond to comments. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.
 

Summary

 

Intro

Well, things have gotten weird, haven’t they?

Recently, I went on a walk with a writer, and it gave me a chance to reflect on the earlier days of EA. I showed him the first office that CEA rented, back in 2013. It looks like this:

To be clear: the office didn’t get converted into an estate agent — it was in the estate agent, in a poorly-lit room in the basement. Here’s a photo from that time:

 


 Normally, about a dozen people worked in that room. When one early donor visited, his first reaction was to ask: “Is this legal?” 

At the time, there was very little funding available in EA. Lunch was the same, every day: budget baguettes and plain hummus. The initial salaries offered by CEA were £15,000/yr pre-tax. When it started off, CEA was only able to pay its staff at all because I loaned them £7,000 — my entire life savings at the time. One of our first major donations was from Julia Wise, for $10,000, which was a significant fraction of the annual salary she received from being a mental health social worker at a prison. Every new GWWC pledge we got was a cause for celebration: Toby Ord estimated the expected present value of donations from a GWWC pledge at around $70,000, which was a truly huge sum at the time.[2]

Now the funding situation is… a little different. This post is about taking stock, and reflecting on how we should respond to that. It builds on thinking I’ve done over the last 14 months, many conversations I’ve had, and the many recent Forum posts and comments. I’m not trying to be prescriptive — you should figure out your own takes — but hopefully I can be a little helpful. I’m aiming for this post to convey what I hope is the right attitude to the situation as a whole, rather than merely discussing one aspect or another of the issue, as most of the recent posts have done.

In a nutshell: our current situation is both an enormous responsibility and an incredible opportunity. If we’re going to respond appropriately, we need to act with judicious ambition, holding both of these frames in mind. 
 

The current situation

Effective altruism has done very well at raising potential funding[3] for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that. The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total.[4]

There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FTX) has a net worth of over a billion; a number of others are on track to give hundreds of millions in their lifetime. Among Giving Pledge signatories, there are around ten who are at least somewhat sympathetic to either effective altruism or longtermism. And there are a number of other successful entrepreneurs who take EA or longtermism seriously, and who could increase the total aligned funding by a lot. So, while FTX’s rapid growth is obviously unusual, it doesn’t seem like a several-orders-of-magnitude sort of fluke to me, and I think it would be a mistake to think of it as a ‘black swan’ sort of event, in terms of EA-aligned funding.  

So the update I’ve made isn’t just about the level of funding we have, but also the growth rate. Previously, it wasn’t obvious to me whether Dustin and Cari were flukes or not; if they were, all it would take is for their interests to move elsewhere, or for Facebook stock to tank, for the amount of EA-aligned potential funding to decline considerably. 

Now I think the amount of EA-aligned funding is, in expectation, considerably bigger in the future than it is today. Of course, over the next five years, total potential funding could still decrease by a lot, especially if FTX crashes. But it also could increase by many tens of billions more, if FTX does very well, or if new very large donors get on board. So we should at least be prepared for a world where there’s even more EA-aligned potential funding than there is today.

There’s a tricky question about how fast we should be spending this down. Compared to others in EA, I think I’m unusually sympathetic to patient philanthropy: I don’t think the chance of a hinge moment in the next decade is dramatically higher than the chance of a hinge moment in 2072–82 (say); and I think our understanding of how to do good is improving every year, which gives a reason for delay. 

But even I think that we should greatly increase our giving compared to now. One reason in favour of spending quickly is that, even if you endorse patient philanthropy, you should probably still distribute some significant proportion of your funding over time (perhaps in the low single-digit percentage points[5]) because philanthropic resources have diminishing returns. And if you think that we’re at a very influential time, perhaps because you think we’ll probably see transformative AI in our lifetimes, then it should be larger still. (I lay out some more reasons for and against faster spending in an appendix.) 

A second reason is that we can fund community-building, which is a form of investment, and which seems to have very high returns. Indeed, the success of FTX, and of EA in general, should give us a major update in this direction. So far, we've generated more than $30 bn for something like $200 mn, at a benefit:cost ratio of 150 to 1;[6] and even excluding the success of FTX, the benefit–cost ratio looks very good (especially if we consider that funding raised is not all, or maybe even most, of the impact that outreach to date has generated). Further investment in outreach is likely to continue to raise much more money for the most pressing problems than it costs.[7]
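As a rough sanity check on that ratio (a sketch using the post’s own estimates of ~$30 bn generated and ~$200 mn spent on outreach — both figures are rough):

```python
# Benefit:cost of EA outreach to date, using the rough estimates above.
raised_mn = 30_000   # ~$30bn generated for EA causes, in $mn (estimate)
spent_mn = 200       # ~$200mn spent on outreach to date, in $mn (estimate)
ratio = raised_mn // spent_mn
print(ratio)  # 150, i.e. a benefit:cost ratio of 150 to 1
```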

A third reason is option value: if we build the infrastructure to productively and scalably absorb funding, then we can choose not to use it if it turns out to not be the right decision; whereas if we don’t build the infrastructure now, then it will take time to do so if in a few years’ time it does turn out to be needed. 

A final consideration that weighs on me for spending faster isn’t based on impact grounds. Rather, it simply feels wrong to have such financial assets when there’s such suffering in the world, and such grave risks that we face. Now, of course what we ought to do is whatever is impact-maximising over the long run; but at least in terms of my moral aesthetics, it really feels that the appropriate thing is for this money to get used, soon.

For the time being, let’s suppose that we aim just to spend the return on EA-aligned funding: about $2 billion per year (at 5% real rate of return). Spending even this amount of funding effectively will be a huge challenge. It’s a big challenge even within global health and wellbeing, where the biggest scale-up in giving is currently happening. GiveWell now aims to move $1 billion per year by 2025, including (as a tentative plan) an annual allocation from Open Phil of about $500 million. But even now, and even if they lower their funding bar from 8x the cost-effectiveness of GiveDirectly to 5x the cost-effectiveness of GiveDirectly, they still have more funding available than funding gaps to fill.

Spending this funding will be a truly enormous challenge within cause areas such as AI risk and governance, worst-case biorisk, and community-building, that have fewer or no existing organisations that could productively use such sums of money. The Future Fund expressed a bold aim of giving between $100mn and $1bn this year. Let’s say that this ends up at $300mn in grants (which might be about 30% of total EA money moved this year). That’s a rapid scale-up from a standing start, but it’s a shortfall of $1.2 billion compared to what it would need to spend just to distribute the rate of return on the financial assets of Sam Bankman-Fried and Gary Wang. 
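The arithmetic behind that shortfall can be checked directly. This is just a sketch, using the publicly estimated net worths mentioned in this post (~$24 bn for Sam Bankman-Fried, ~$5.9 bn for Gary Wang) and the 5% real rate of return assumed above:

```python
# Annual return on the combined assets, and the shortfall versus
# a supposed $300mn of Future Fund grants this year (figures in $bn).
assets_bn = 24 + 5.9                  # publicly estimated net worths (rough)
annual_return_bn = 0.05 * assets_bn   # 5% real rate of return
grants_bn = 0.3                       # supposed grants distributed this year
shortfall_bn = annual_return_bn - grants_bn
print(round(annual_return_bn, 1))  # 1.5
print(round(shortfall_bn, 1))      # 1.2
```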

If the total potential funding grows, or if the right thing to do is to be spending down our assets, then that number increases, perhaps considerably. And we should be particularly prepared for scenarios where our potential funding increases by a lot, because we have more impact in those scenarios than in scenarios where our potential funding decreases considerably.

Meeting this challenge means that EA’s culture and norms will need to adapt. In 2013, it made sense for us to work in a poorly-lit basement, eating baguettes and hummus. Now it doesn’t. Frugality is now comparatively less valuable, and saving time and boosting productivity in order to make more progress on the most pressing problems is comparatively more valuable. Creating projects that are maximally cost-effective is now comparatively less valuable; creating projects that are highly scalable with respect to funding, and can thereby create greater total impact even at lower cost-effectiveness, is comparatively more valuable. Extensive desk research to evaluate a small or medium-sized funding opportunity is comparatively less valuable; just spending money to actually try something and find out empirically if it works is comparatively more worthwhile. 

Now, I miss the basement days. It feels morally appropriate to be eating baguettes and hummus every day. But solving the big problems in the world isn’t about acting in a way that feels appropriate; it’s about doing the highest-impact thing. 

Nonetheless, it’s natural to worry: is this really EA adapting to a new situation, or is it value-drift? Maybe we’re fooling ourselves! I think it’s good we’re being vigilant about this. So let’s discuss how we should adapt; how to respond appropriately to the situation we’re in, while not losing the mission-driven focus that was present in the basement of an Oxford estate agent.

There are two ways in which we could fail in response to this challenge. We could cause harm by commission: doing dumb things that end up net-negative overall. Or we could cause harm by omission: failing to do things that would have enormous positive impact. Let’s take each of these risks in turn.  
 

Risks of commission: causing harm

As has been noted in a number of recent Forum posts, there are ways that scaling up our giving could cause harm, such as by damaging EA’s culture — destroying what makes EA distinctive and great — or by funding net-negative projects. There are a number of risks here; I’ll discuss a few, but this list is still incomplete. 
 

Appearances of extravagance

Other things being equal, EA wants to appeal to morally-motivated people. It’s more valuable to have someone who intrinsically wants to make the world better than someone who’s just doing it for a paycheck: they’ll be in it for longer, and are more likely to make better decisions even in cases where their income doesn’t depend on them making the right decision. But morally-motivated people, especially on college campuses, often find seemingly-extravagant spending distasteful.

What’s more, this is a perfectly rational response. Living on rice and beans is a costly signal: it’s easier for genuinely morally motivated people to do it than people who are faking. So if I meet someone living on rice and beans and giving a chunk of their income away, I take seriously their claims to be morally motivated. If I meet someone who’s flying business class — they might be doing that because they’re morally serious and trying to maximise their output, but it’s much harder for an outsider to tell. [8]

So we don’t want to turn off morally dedicated people. This is a major worry for me; I think that EA developing a bad reputation is one of the leading existential risks to the community, and a reputation for extravagance would not at all be helpful for that. 

But there’s a balancing act. Very few people will want to live on rice and beans forever. Initially, we disproportionately appealed to people who were willing to be very frugal, and turned off those who weren’t, so the current community is disproportionately constituted of such people. Now, we have the chance to appeal to people who are less willing to be ultra-frugal, but have many other great qualities and can contribute enormously to the world.[9] And this is a great thing.

My impression is that most of the issues that have gotten people worried have been unforced errors: people getting carried away, or not thinking about how what they did would be perceived, or talking about spending in a way that seems flippant. (And sometimes the alleged situations have been misrepresented, distorted through a game of Telephone.) This, naturally, feels alienating to many people, especially those who are new to EA. The opportunity cost of spending is very real and very major, in absolute terms; not recognising that can seem suspicious. 

Given that the most egregious errors seem unforced, I think there are some easy wins, such as: 

Harming quality of thought

Another worry is that funding will negatively impact EA’s ability to think; that, insidiously, people will be incentivised to believe whatever will help them get funding, or that particular worldviews will get artificially inflated in virtue of receiving more funding than they should receive.

My guess is that culture is an even bigger worry (where people, including funders, go too far in the direction of deferring to those regarded as particularly smart, or are too worried about deviating from what they regard as consensus views within the community). But either way, having incorrect beliefs or focusing on the wrong things is an easy way for the EA community to lose almost all its value. And we want to reduce that as much as possible. 

Again, I think there are actions we can take to mitigate this risk:

Resentment

There’s a tough messaging challenge around the funding situation. On the one hand, we want to convey that people should be developing big, ambitious plans, and convey how much we need to scale up the community’s giving. Given that, it’s natural to feel disappointed or even resentful if you create such plans but then don’t receive funding.

But there’s always an opportunity cost, and the bar for receiving funding is still extremely high. We could easily use up the entirety of our potential funding on cash transfers, or clean tech funding, or pandemic preparedness technology, or compute for the most safety-conscious AI labs. Given this opportunity cost, even despite the scale-up in funding, most well-meaning projects still won’t get funded. For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications. 

As a proportion of the world’s resources, EA-aligned financial resources are still tiny: the return on EA-aligned financial assets is less than a hundredth of Alphabet’s yearly revenue and about one four-hundredth of the US defence budget. And they’re also tiny compared to the problems we face: despite about $160bn in annual official development assistance, and $630bn in annual (public and private) climate spending, extreme poverty and climate change are still serious problems. 

To address this issue, how we talk about the situation is important: 

Losing evolutionary forces towards greater impact

One worry I’ve had is that availability of funding could mean we lose incentives towards excellence. If it’s too easy to get funding, then a mediocre project could just keep limping on, rather than improving itself; or a substandard project could continue, even though it would be better if it shut down and the people involved worked elsewhere.  

Within the nonprofit world, there’s a general problem where, unlike unprofitable companies, bad nonprofits don’t die. We should worry that the same problem will affect us.

That said, this is something that I think donors are generally keeping in mind; many seed grants won’t be renewed, and if a project doesn’t seem like a good use of the people running it, then it’s not likely to get funded. 

One way we as a community can mitigate this concern further is to celebrate failures. For example, No Lean Season was an impressive-looking global development non-profit; it was incubated at Evidence Action, and went through Y Combinator (in the same batch as CEA). But, after an RCT found that its impact was lower than they’d hoped, and after they had to terminate their relationship with a partner organisation, they shut down and published their reasons for doing so.

This is a socially weird thing to do, and very unusual within the nonprofit world. But it was awesome, and should be praised as such.
 

Risks of harm 

There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.

This is a common worry in EA, and it's extremely important as far as it goes. The standard solution is to communicate and cooperate with others with shared goals; if there’s a range of opinions on whether something is a good idea, then following the majority view is the right strategy. And, in practice, all the major funders closely communicate and coordinate, and behave cautiously; similarly, when someone is starting a new project, in my experience they tend to get extensive feedback from the community on the risks and benefits of that project. 

Indeed, my honest take is that EAs are generally on the too-cautious end. As well as the unilateralist’s curse (where the most optimistic decision-maker determines what happens), there’s a risk of falling into what we could call the bureaucrat’s curse,[10] where everyone has a veto over the actions of others; in such a situation, if everyone follows their own best-guesses, then the most pessimistic decision-maker determines what happens. I’ve certainly seen something closer to the bureaucrat’s curse in play: if you’re getting feedback on your plans, and one person voices strong objections, it feels irresponsible to go ahead anyway, even in cases where you should. At its worst, I’ve seen the idea of unilateralism taken as a reason against competition within the EA ecosystem, as if all EA organisations should be monopolies. 

The suggested examples of harmful projects I’ve heard tend, in my view, not to come from people who took the unilateralist’s curse seriously, stayed in close communication with the community, and then went ahead anyway. Instead, they were power-grabs all along, or they came from people who just didn’t care what others thought of their plans. In contrast, I’ve found that, for those who are highly concerned about unilateralism, if they end up doing something that might be harmful, they quickly receive feedback and course-correct. 

Overall, risks of harm are something I think we’re actually managing pretty well as a community, and can keep managing well if we:

Risks of omission: squandering the opportunity 

There are a number of ways in which the influx of funding could cause real harm. Despite this, I don’t think causing harm is the most likely way we’ll fail. 

It seems to me to be more likely that we’ll fail by not being ambitious enough; by failing to take advantage of the situation we’re in, and simply not being able to use the resources we have for good ends. 

It’s hard to internalise, intuitively, the loss from failing to do good things; the loss of value if, say, EA continued at its current giving levels, even though it ought to have scaled up more. For global health and development, the loss is clear and visceral: every year, people suffer and lives are lost. It’s harder to imagine for those concerned by existential risks. But one way to make the situation more vivid is to imagine you were in an “end of the world” movie with a clear and visible threat, like the incoming asteroid in Don’t Look Up. How would you act? For sure, you’d worry about doing the wrong thing. But the risk of failure by being unresponsive and simply not doing enough would probably weigh on you even harder.

There are a couple of reasons why I’m particularly worried about risks of omission. First, it’s just very hard to seriously scale up giving while spending the money effectively. It’ll involve enormous amounts of work, from hundreds or thousands of people. Often, it’ll involve people doing things that just aren’t that enjoyable: management and scaling organisations to large sizes are rarely people’s favourite activities, and it will be challenging to incentivise enough people to do these things effectively.

To see how hard this is, we can look at existing foundations. The foundation that has most successfully scaled up its giving is the Gates Foundation: it gives out about $6 bn per year, which is extremely impressive — far more than any other foundation.[11] But it seems to me they are falling far short of their goals. Bill Gates and Melinda French Gates have said they want the foundation to spend all its assets within 20 years of their deaths, and commented that: “The decision to use all of the foundation’s resources in this century underscores our optimism for progress and determination to do as much as possible, as soon as possible, to address the comparatively narrow set of issues we’ve chosen to focus on.” 

Even though the Gates Foundation spends far more than any other foundation, since 2000 their total assets have increased from $100 billion[12] to $316 billion[13] — over a factor of 3. They’re distributing close to $6 bn per year, but that’s less than half the return they get on their total financial assets (at 5% real return per year). Given the ages of Bill Gates and Melinda French Gates, they should expect to live approximately another 30 years. But in order to spend down their assets within 50 years, if the foundation distributed a fixed amount every year, it would need to give out over $17 bn per year. 
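That $17 bn figure follows from the standard fixed-payment annuity formula. A sketch, assuming a 5% real return and a constant annual payout that exhausts $316 bn in 50 years:

```python
# Fixed annual payout P that spends down assets A over n years at return r:
#   P = A * r / (1 - (1 + r) ** -n)
assets_bn = 316   # Gates Foundation total assets, $bn (figure from the post)
r = 0.05          # assumed real rate of return
n = 50            # ~30 years of life expectancy + 20 years after their deaths
payout_bn = assets_bn * r / (1 - (1 + r) ** -n)
print(round(payout_bn, 1))  # 17.3, i.e. over $17bn per year
```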

I don’t want to make any claims about the tricky question of the optimal rate of giving over time. But we should at least feel the potential loss, here, if scaling up too slowly means that less good is done. 

A second reason why I’m worried about scaling too slowly, or plateauing at too low a level, is that there are asymmetric costs to trying to do big things versus being cautious. Compare: (i) How many times can you think of an organisation being criticised for not being effective enough? and (ii) How many times can you think of someone being criticised for not-founding an organisation that should have existed? (Or, suppose I hadn’t given a talk on earning to give at MIT in 2012;[14] would anyone be berating me?) In general, you get public criticism for doing things and making mistakes, not for failing to do anything at all.

The asymmetric costs are especially worrying when salaries represent only a tiny fraction of the value you create, which is especially true for non-profit projects. VCs struggle to get entrepreneurs to be ambitious and risk-taking enough: the solution that has emerged is to pay successful entrepreneurs huge amounts of money. A successful EA megaproject might generate far more value for the world than Uber (for example), but, even if EA salaries were to increase enormously, the founders and early employees will still get paid much less than the founders and early employees of Uber.[15]

The importance of finding ways to scale our giving also changes how we should think about grantmaking. Early EA culture was built on a highly skeptical mindset. This is still important in many ways (this post by Holden on ‘minimal trust investigations’ is one of my favourite blog posts of the last year). But it can cause us to go awry if it means we don’t take chances of upside seriously, or when we focus our concern on false positives rather than false negatives. 

I worry we’ve made some errors in the past by not taking the chance of best-case scenarios seriously, out of a desire to be rigorous and skeptical. For example, I mentioned that Toby initially estimated the value of a Giving What We Can Pledge at $70,000 (as one example of quantifying the benefits of outreach and community-building more generally). I remember having arguments with people who claimed that estimate was too optimistic. But take the 7000 Giving What We Can members, and assume that none of them give anything apart from Sam Bankman-Fried, who gives his net worth. Then a Pledge was actually worth $2 million — 30 times higher than Toby’s “optimistic” estimate at the time.[16] In general, if our successes are sampling from a heavy-tailed distribution, the historical average value of our impact will very likely be lower than the true mean. 

And when we look to future community-building efforts, the asymmetry of upside and downside suggests to me that, if we put the risk of harm to one side, we should be much more concerned about missing opportunities for impact than about spending money in ways that don’t have impact.

This is easiest to quantify when looking at earning to give (though it is in no way limited to that). We’ve seen, now, that EA outreach can inspire people to earn to give in ways that put them on track to donate hundreds of millions of dollars or more. That means the worry about missing out on opportunities to change people’s careers should, for the time being, loom larger than the worry about overspending. 

(Quantitatively: suppose $200 is spent on an intro to EA retreat for someone. If that has more than a one-in-five-hundred-thousand chance of inspiring the attendee to earn to give and successfully donate $100 million over their lifetime, then the expected financial benefit is positive. Given the successes we’ve seen, both from FTX and outside of that, the real probability is orders of magnitude larger. That’s not to say $200 on a retreat is how much should be spent — if you can have the same impact at cheaper cost, you should. And excessive spending can even become counterproductive if it sends the wrong message. But it indicates just how small community-building spending is in comparison to the potential benefits from changing people’s careers for the better.)  
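The break-even probability in that example is just cost divided by benefit (a sketch, using the hypothetical $200 retreat cost and $100 million of lifetime donations from the parenthetical above):

```python
# A $200 retreat breaks even in expectation if the probability of
# inspiring $100m of lifetime donations exceeds cost / benefit.
cost = 200
lifetime_donations = 100_000_000
break_even_p = cost / lifetime_donations
print(break_even_p)             # 2e-06
print(round(1 / break_even_p))  # 500000, i.e. one in five hundred thousand
```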

The need to scale changes the optimal approach to grantmaking in another way, too: it also means that making many small grants (small relative to the tens of billions of dollars per year we might need to spend) in order to find out, empirically, what things seem cost-effective, becomes well worth it.

Here’s a toy example. Suppose you give out 100 grants of $100,000 each. They all do nothing apart from one, which demonstrates a scalable way of absorbing $100 million at 120% of the cost-effectiveness of last-dollar spending. You’ve spent $10 million in order to gain impact equivalent to $20 million at last-dollar spending.[17] It’s a good use of money, even though 99 of the grants achieved nothing. 
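The toy example’s arithmetic, spelled out (all the figures are the hypothetical ones from the paragraph above):

```python
# 100 grants of $100k; 99 achieve nothing, but one demonstrates a way to
# absorb $100m at 120% of the cost-effectiveness of last-dollar spending.
total_spent = 100 * 100_000   # $10m in seed grants
unlocked = 100_000_000        # scalable opportunity discovered by one grant
ce_percent = 120              # % of last-dollar cost-effectiveness
extra_impact = unlocked * (ce_percent - 100) // 100
print(total_spent)   # 10000000: $10m spent...
print(extra_impact)  # 20000000: ...for $20m of last-dollar-equivalent impact
```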

I think this toy example often reflects reality. It’s much easier, and more reliable, to assess a project once it's already been tried. If you need to scale giving dramatically, then often it makes sense to fund something and find out empirically how good it is, so that in two years’ time you can decide whether to stop funding altogether, or scale donations considerably. If the cost to fund and get the information is a small proportion of the giving you hope to scale up to, then it can be well worth just making the grant and figuring out how cost-effective it is later on if it seems potentially promising as something to scale. (A similar thought lies in part behind Future Fund’s 2022 goal of doing “bold and decisive tests of highly scalable funding models.”)
 

An incredible opportunity

It’s easy to feel stressed about the current situation. But paralysing anxiety or insomnia-inducing stress probably aren’t the attitudes that will help you have the most long-term impact.

So let’s reframe things, for a moment at least.

A classic reason why people feel unmotivated to do good things is that their contribution will be just “a drop in the bucket.” A helpful psychological response is to think of the impact that a community you’re part of, working together on that problem, will have. 

When we think of the impact EA has had so far, it’s pretty inspiring. Let’s just take one organisation: Against Malaria Foundation. Since its founding, it has raised $460 million, in large part because of GiveWell’s recommendation. Because of that funding, 400 million people have been protected against malaria for two years each; that’s a third of the population of sub-Saharan Africa. It’s saved on the order of 100,000 lives — the population of a small city.

We did that. 

And the current funding situation means that’s just the beginning.

The amount of potential funding is still very small from the perspective of the world as a whole. But we’re now at a stage where we can plausibly have a very significant impact on some of the world’s biggest problems. Could we reduce existential risk from AI or pandemics by more than 10%, eradicate a global disease, or bring forward the end of factory farming by a year? Probably.

We should be judiciously ambitious. Achieving the sort of impact we’re now capable of means being sensitive to the risks of ambition, and the negatives of spending funds, for sure. But it also means we need to use the opportunities we have available to us. We should think big and be willing to take bold actions, while mitigating the risks. If we can manage to do both of these at once, we as a community can achieve some amazing things.
 

Appendix: how fast should we be scaling funding?

It’s non-obvious to me what the ideal rate of distributing funding should be, although my fairly strong best guess is that we should be scaling up to giving much more than we are at the moment.

Briefly, the main reasons I see in favour of giving more are:

In my mind, the strongest cases against dramatically scaling up our giving are:

  1. ^

    I considered suggesting the slogan “Move fast and don’t break things” to encapsulate this, but I thought that “move fast” isn’t really the right framing for ambition: setting up a massively scalable project might mean being small and testing things out for years in order to put yourself in a position to grow enormously.

  2. ^

    I don’t know exactly what the situation was like for GiveWell in New York or for the LessWrong / SingInst crowd in the Bay, but it wasn’t radically different: salaries were low, and funding was scarce.

  3. ^

    By “potential funding” I mean financial assets from people who plan to give those financial assets away to EA-aligned causes.

  4. ^

    For example, Gary Wang has a publicly estimated net worth of $5.9 billion, and plans to use the majority of that for EA-aligned goals.

  5. ^

    For example, in a very simple model [? · GW] by Phil Trammell, if you find yourself at a moderately influential time, you should spend 2.4% of your resources per year. On this simple model, if you think that “influentialness” (or “hingeyness”) will be roughly flat over the next fifty years, but will permanently fall to 1/10 of its current level soon after that, then you should give out 0.83% each year. (These numbers shouldn’t be relied on, though; they are intended only to be illustrative.)

  6. ^

     If you were sceptical of replicating FTX-esque success, then that number might drop - but I think even so it should be much higher than 10:1.

  7. ^

    Note that the ‘community-building’ argument also justifies significant funding of direct work, simply because a movement that never does anything of actual value is not very compelling.

  8. ^

    What’s weird but important to bear in mind is that the perception of extravagance often has little to do with the amount of money actually being spent. One organisation I know hosted a conference at a very fancy venue: in reality, the venue is owned by a charitable foundation, so was a comparatively cheap option. But it looked extremely grand, and there were a number of complaints.

  9. ^

    In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances. I know of some people who have attended luxurious parties, met major philanthropists there, and gotten them involved in EA. It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics. (Needless to say, if you find yourself in this unusual position, you should probably take special care to make sure that attending luxurious parties really is the way you can do the most good.)

  10. ^

    H/T Nick Beckstead

  11. ^

    I believe the foundation that distributes the second-largest amount per year is Wellcome, which in 2020/21 gave out £1.2 billion.

  12. ^

    Inflation-adjusted to today’s money.

  13. ^

    Including the pledge from Warren Buffett to give almost all his wealth, which is currently $120 billion.

  14. ^

    https://80000hours.org/stories/sam-bankman-fried/

  15. ^

    Of course, relative to Uber there is the additional “incentive” of the impact that the project will create, which mitigates this issue.

  16. ^

    Of course, there’s plenty to argue with in the estimate, but I don’t think it changes the core point.

  17. ^

    In this toy example, I’m ignoring inflation and investment returns, and assuming that you can’t in advance identify which project is a “hit”. See also https://www.openphilanthropy.org/blog/hits-based-giving


comment by Luke Freeman (lukefreeman) · 2022-05-10T04:38:32.804Z · EA(p) · GW(p)

Thanks so much for writing this Will! I can't emphasise enough how much I appreciate it.

 if a project doesn’t seem like a good use of the people running it, then it’s not likely to get funded. 

Two norms that I'd really like to see (that I haven't seen enough of) are:
1. Funders being much more explicit to applicants about why things aren't funded (or why they get less funding than asked for). Even a simple tagging system like "out of our funding scope", "seemed too expensive", "not targeted enough", or "promising (review and resubmit)" (with a short line about why) is explicit yet simple.

2. More funder diversity while maintaining close communications (e.g. multiple funders with different focus areas/approaches/epistemics, but single application form to apply to multiple funders and those funders sharing private information such as fraud allegation etc).

I know feedback is extremely difficult to do well (and there are risks in giving feedback), but I think that lack of feedback creates a lot of problems, e.g.:

  • resentment and uneasiness towards funders within the community;
  • the unilateralist's curse is exacerbated (in cases where something is not funded because it's seen as bad, applicants keep seeking out other funders because they assume it just wasn't a good fit for that particular funder);
  • funding applications and ideas don't get better (e.g. a process for 'review and resubmit' could be great); and
  • it wastes lots of time for all the grantees and funders.

Whereas providing quality feedback (or at least some minimal feedback) can create a lot of good outcomes, e.g.:

  • people update their plans appropriately;
  • 'bad' projects get weeded out sooner;
  • people learn how to write better proposals; and
  • more impactful projects get funded.
Replies from: Sanjay, Tee, Arepo, Linch, Jonas Vollmer
comment by Sanjay · 2022-05-10T09:40:30.986Z · EA(p) · GW(p)

Heartily agree with this.

For the pilot of SoGive Grants [EA · GW], we plan to:

(1) Provide feedback as much as we can (the only reason we haven't promised to give feedback to everyone is that this is a pilot and we don't know whether that's feasible for us)

(2) Use an application form that is almost a copy-and-paste of the EA Funds application form, to make life easier for those applying to both

(BTW applications are still open and close on 22nd May)

I also want to echo pretty much every bullet point that Luke made about the value of feedback, which I think are excellent points.

Replies from: Lorenzo Buonanno
comment by Lorenzo (Lorenzo Buonanno) · 2022-05-10T15:04:47.286Z · EA(p) · GW(p)

I'm really curious as to why this is being downvoted (was at -2 when I originally wrote this comment, now it's at 0 with 7 votes), I find SoGive Grants interesting and relevant to the discussion.

Especially since "more funder diversity" is a main point of Luke's comment.

Replies from: Max_Daniel
comment by Max_Daniel · 2022-05-10T15:26:05.102Z · EA(p) · GW(p)

I don't know but FWIW my guess is some people might have perceived it as self-promotion of a kind they don't like.

(I upvoted Sanjay's comment because I think it's relevant to know about his agreement and about the plans for SoGive Grants given the context.)

comment by Arepo · 2022-05-13T07:11:44.974Z · EA(p) · GW(p)

I regret that I have but one strong upvote to give this. Lack of feedback on why some of the projects I've been involved in didn't get funding has been incredibly frustrating.

One further benefit of getting it would have been that it can help across the ecosystem when you get turned down by Funder A and apply to Funder B - if you can pass on the feedback you got from Funder A (and how you've responded to it), that can save a lot of Funder B's time.

As a meta-point, the lack of feedback on why there's a lack of feedback also seems very counterproductive. 

comment by Linch · 2022-05-23T20:28:03.568Z · EA(p) · GW(p)

Obviously I do not speak for Will or anybody else, but I wrote "Some unfun lessons I learned as a junior grantmaker [EA · GW]" partially as a response to what I perceived as misconceptions by some of the comments in this thread and elsewhere on the Forum.

comment by Habryka · 2022-05-10T20:05:30.156Z · EA(p) · GW(p)

I feel like this post mostly doesn't talk about what feels to me like the most substantial downside of trying to scale up spending in EA and the increased availability of funding.

I think the biggest risk of the increased availability of funding, and the general increase in scale, is that it will create a culture where people are incentivized to act more deceptively towards others, and that it will attract many people who are much more open to deceptive action in order to capture the resources we currently have.

Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this: 

========

I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:

  1. Pre Product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don't yet really have anything that solves a really crucial problem. This period is characterized by small teams working on their inside-view, and a shared, tentative, malleable vision that is often hard to explain to outsiders.
  2. Post Product-market fit: At some point you find a product that works for people. The transition here can take a while, but by the end of it, you have customers and users banging on your door relentlessly to get more of what you have. This is the time of scaling. You don't need to hold a tentative vision anymore, and your value proposition is clear to both you and your customers. Now is the time to hire people and scale up and make sure that you don't let the product-market fit you've discovered go to waste.

I think it was Paul Graham or someone else close to YC (or maybe Ray Dalio) who said something like the following (NOT A QUOTE, since I currently can't find the direct source):

> The early stages of an organization are characterized by building trust. If your company is successful, and reaches product-market fit, these early founders and employees usually go on to lead whole departments. Use these early years to build trust and stay in sync, because when you are a thousand-person company, you won't have the time for long 10-hour conversations when you hang out in the evening.

> As you scale, you spend down that trust that you built in the early days. As you succeed, it's hard to know who is here because they really believe in your vision, and who just wants to make sure they get a big enough cut of the pie. That early trust is what keeps you agile and capable, and frequently as we see founders leave an organization, and with them those crucial trust relationships, we see the organization ossify, internal tensions increase, and the ability to respond effectively to crises and changing environments worsen.

It's hard to say how well this model actually applies to startups or young organizations (it matches some of my observations, though definitely far from perfectly), and even more dubious how well it applies to systems like our community, but my current model is that it captures something pretty important.

I think, whether we want it or not, we are now likely in the post-product-market fit part of the lifecycle of our community, at least when it comes to building trust relationships and onboarding new people. I think we have become high-profile enough, and have enough visible resources (especially with FTX's latest funding announcements), and have gotten involved in enough high-stakes politics, that if someone shows up next year at EA Global, you can no longer confidently know whether they are there because they have a deeply shared vision of the future with you, or because they want to get a big share of the pie that seems to be up for the taking around here.

I think in some sense that is good. When I see all the talk about megaprojects and increasing people's salaries and government interventions, I feel excited and hopeful that maybe if we play our cards right, we could actually bring any measurable fraction of humanity's ingenuity and energy to bear on preventing humanity's extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.

But I am also afraid that with all of these resources around, we are transforming our ecosystem into a market for lemons. That we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them, and that nuance and complexity will have to get left at the wayside in order to successfully maintain any sense of order and coherence.

I think it is not implausible that for a substantial fraction of the leadership of EA, within 5 years, there will be someone in the world whose full-time job and top-priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status. For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?", and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.

I think almost every publicly visible billionaire has whole ecosystems spring up around them that try to do this. I know some of the details here for Peter Thiel and the "Thielosphere", which seems to have a lot of these dynamics. Almost any academic at a big lab will openly tell you that among the most crucial pieces of knowledge any new student learns when they join is how to write grant proposals that actually get accepted. When I ask academics in competitive fields about the content of the lunch conversations in their labs, the fraction of their cognition and conversations that goes specifically to "how do I impress tenure review committees and grant committees" and "how do I network myself into an academic position that allows me to do what I want" ranges from 25% to 75% (with the median around 50%).

I think there will still be real opportunities to build new and flourishing trust relationships, and I don't think that it will be impossible for us to really come to trust someone who joins our efforts after we have become 'cool,' but I do think it will be harder. I also think we should cherish and value the trust relationships we do have between the people who got involved with things earlier, because I do think that lack of doubt of why someone's here is a really valuable resource, and one that I expect is more and more likely to be a bottleneck in the coming years.

Replies from: Habryka, ThomasWoodside, gruban, GMcGowan, MaxRa, Ulrik Horn, Rob Mitchell, RedStateBlueState
comment by Habryka · 2022-05-11T02:01:41.236Z · EA(p) · GW(p)

Reading this, I guess I'll just post the second half of this memo that I wrote here as well, since it has some additional points that seem valuable to the discussion: 

When I play forward the future, I can imagine a few different outcomes, assuming that my basic hunches about the dynamics here are correct at all:

  1. I think it would not surprise me that much if many of us do fall prey to the temptation to use the wealth and resources around us for personal gain, or as a tool towards building our own empire, or come to equate "big" with "good". I think the world's smartest people will generally pick up on us not really aiming for the common good, but I do think we have a lot of trust to spend down, and could potentially keep this up for a few years. I expect eventually this will cause the decline of our reputation and ability to really attract resources and talent, and hopefully something new and good will form from our ashes before the story of humanity ends.
  2. But I think in many, possibly most, of the worlds where we start spending resources aggressively, whether for personal gain, or because we do really have a bold vision for how to change the future, the relationships of the central benefactors to the community will change. I think it's easy to forget that for most of us, the reputation and wealth of the community is ultimately borrowed, and when Dustin, or Cari or Sam or Jaan or Eliezer or Nick Bostrom see how their reputation or resources get used, they will already be on high-alert for people trying to take their name and their resources, and be ready to take them away when it seems like they are no longer obviously used for public benefit. I think in many of those worlds we will be forced to run projects in a legible way; or we will choose to run them illegibly, and be surprised by how few of the "pledged" resources were ultimately available for them.
  3. And of course in many other worlds, we learn to handle the pressures of an ecosystem where trust is harder to come by, and we scale, and find new ways of building trust, and take advantage of the resources at our fingertips.
  4. Or maybe we split up into different factions and groups, and let many of the resources that we could reach go to waste, as they ultimately get used by people who don't seem very aligned to us, but some of us think this loss is worth it to maintain an environment where we can think more freely and with less pressure.

Of course, all of this is likely to be far too detailed to be an accurate prediction of what will happen. I expect reality will successfully surprise me, and I am not at all confident I am reading the dynamics of the situation correctly. But the above is where my current thinking is at, and is the closest to a single expectation I can form, at least when trying to forecast what will happen to people currently in EA leadership.

To also take a bit more of an object-level stance, I currently very tentatively believe that I don't think this shift is worth it. I don't actually really have any plans that seem hopeful or exciting to me that really scale with a lot more money or a lot more resources, and I would really prefer to spend more time without needing to be worried about full-time people trying to scheme how to get specifically me to like them.

However, I do see the hope and potential in actually going out and spending the money and reputation we have to maybe get much larger fractions of the world's talent to dedicate themselves to ensuring a flourishing future and preventing humanity's extinction. I have inklings and plans that could maybe scale. But I am worried that I've already started trying to primarily answer the question "but what plans can meaningfully absorb all this money?" instead of the question of "but what plans actually have the highest chance of success?", and that this substitution has made me worse, not better, at actually solving the problem.

I think historically we've lacked important forms of ambition. And I am excited about us actually thinking big. But I currently don't know how to do it well. Hopefully this memo will make the conversations about this better, and maybe will help us orient towards this situation more healthily.

Replies from: Charles He, M. Y. Zuo
comment by Charles He · 2022-05-11T16:24:33.340Z · EA(p) · GW(p)

To onlookers: There’s often a low amount of resolution and expertise in some comments and concerns on LW and the EA Forum, and this creates “bycatch” and reduces clarity. With uncertainty, I'll lay out one story that seems to match the concerns in the parent comment.
 

Strong Spending

I'm not entirely sure this is correct, but for large EA spending, I usually think of the following:

  • 30%-70% growth in head count in established institutions, sustained for multiple years
  • Near six figure salaries for junior talent, and well over six figure salaries for very good talent and management who can scale and build an organization (people who can earn multiple times that in the private sector and cause an organization to exist and have impact)
  • Seven figure salaries for extreme talent (world's best applied math, CS, top lawyers)
  • Discretionary spending
  • Buying operations, consulting and other services

So all the above is manageable, even sort of fundamental for a good leader or ED or CEO. This is why quality CEO-level leadership is so important: to hire and integrate this talent well and manage this spending. This is OK.

This is considered “high”, but it's not really high by real-world standards.

Replies from: Charles He
comment by Charles He · 2022-05-11T16:46:43.587Z · EA(p) · GW(p)

Next-level Next-level

Now distinct from the above comment, there’s a whole other reference class of spending where:

  1. People can get an amount of cash that is a large fraction of all spending in an existing EA cause area in one raise.
  2. The internal environment is largely "deep tech" or not related to customers or operations

So I'm thinking about valuations in the 2010s tech sector for trendy companies.

I'm not sure, but my model of organizations that can raise 8 figures per person in a Series B, for spending that is pretty much purely CapEx (as opposed to capital to support operations or lower-margin activity, e.g. inventory, logistics), is that internal activity is really, really different from the "high" spending in the above comment.

 

There are issues here that are hard to appreciate.

So Facebook's raises were really hot and oversubscribed. But building the company was a drama fest for the founders, and there was also a nuclear-reactor-hot business with viral growth. So that's epic fires to put out every week, customers and partners, actual scaling issues of hockey-stick growth (not this meta-business-advice discussion on the forum). It's a mess. So the CEO and even junior people have to deal.

But once you're just raising that amount in deep-tech mode, I have guesses for how people think, feel, and behave inside a company with valuations in the 8-9 figures per person. My guess is that the attractiveness, incentives, and beliefs in that environment are really different from even the hottest startups, even beyond those where junior people exit with 7 figures of income.

 

To be concrete, the issues on the rest of EA might be that:

  • Even strong EA CEOs won't be able to hire much EA talent like software developers (but they should be worried about hiring pretty much anyone, really). If they do hire, they won't be able to keep people at comfortable, above-EA salaries without worrying about attrition.
  • Every person who can convincingly claim or signal interest in a cause area is inherently going to be treated very differently in any discussion or interaction, in a deep way that I don't think EA has seen.
  • Dynamics emerge where good people won't feel comfortable adding this to their cause area anymore.

Again, this is not "strong spending" but the "next level, next level" world of funding that is hard to match in human history for any for-profit, plus a nature of work that is different from any other.

Replies from: Charles He
comment by Charles He · 2022-05-11T17:03:44.709Z · EA(p) · GW(p)

I'm not sure, but in situations where this sort of dynamic or resource gradient appears, it isn't resolved by the gradient stopping (people don't stop funding or founding institutions), because the original money is driven by underlying forces that are really strong. My guess is that a lot of this would be counterproductive.

Typically in those situations, I think the best path is moderation and focusing on development and culture in other cause areas.

comment by M. Y. Zuo · 2022-05-11T12:38:37.719Z · EA(p) · GW(p)

These are some very important points, thanks for taking the time to write them out. 

I just made an account here, though I've only ever commented on LW before, just to stress how important and vital it is to soberly assess the change in incentives. Because even the best have strengths and weaknesses that need to be adapted to.

"Show me the incentives and I will show you the outcome" - Charlie Munger

comment by ThomasWoodside · 2022-05-10T21:41:44.038Z · EA(p) · GW(p)

I thought this comment was valuable and it's also a concern I have.

It makes me wonder if some of the "original EA norms", like donating a substantial proportion of income or becoming vegan, might still be quite important to build trust, even as they seem less important in the grand scheme of things (mostly, the increase in the proportion of people believing in longtermism).  This post makes a case for signalling.

It also seems to increase the importance of vetting people in somewhat creative ways. For instance, did they demonstrate altruistic things before they knew there was lots of money in EA? I know EAs who spent a lot of their childhoods volunteering, told their families to stop giving them birthday presents and instead donate to charities, became vegan at a young age at their own initiative, were interested in utilitarianism very young, adopted certain prosocial beliefs their communities didn't have, etc. When somebody did such things long before it was "cool" or they knew there was anything in it for them, this demonstrates something, even if they didn't become involved with EA until it might help their self-interest. At least until we have Silicon Valley parents making sure their children do all the maximally effective things starting at age 8.

It's kind of useful to consider an example, and the only example I can really give on the EA forum is myself. I went to one of my first EA events partially because I wanted a job, but I didn't know that there was so much money in EA until I was somewhat involved (also this was Fall 2019, so there was somewhat less money). I did some of the things I mentioned above when I was a kid (or at least, so I claim on the EA forum)! Would I trust me immediately if I met me? Eh, a bit but not a lot, partially because I'm one of the hundreds of undergrads somewhere near AI safety technical research and not (e.g.) an animal welfare person. It would be significantly easier if I'd gotten involved in 2015 and harder if I'd gotten involved in 2021.

Part of what this means is that we can't rely on trust so much anymore. We have to rely on cold, hard, accomplishments. It's harder, it's more work, it feels less warm and fuzzy, but it seems necessary in this second phase. This means we have to be better about evaluating accomplishments in ways that don't rely on social proof. I think this is easier in some fields (e.g. earning to give, distributing bednets) than others (e.g. policy), but we should try in all fields.

Replies from: Yitz
comment by Yitz · 2022-05-11T09:03:57.593Z · EA(p) · GW(p)

How bad is it to fund someone untrustworthy? Obviously if they take the money and run, that would be a total loss, but I doubt that's a particularly common occurrence (you can only do it once, and it would completely shatter your social reputation, so even unethical people tend not to do that). A more common failure mode would seem to be apathy, where once funded not much gets done, because the person doesn't really care about the problem. However, if something gets done instead of nothing at all, then that would probably be a (fairly weak) net positive. The reason that's normally negative is that the money is then not being used in a more cost-effective manner, but if our primary problem is spending enough money in the first place, that may not be much of an issue at all.

Replies from: ThomasWoodside
comment by ThomasWoodside · 2022-05-11T12:21:13.401Z · EA(p) · GW(p)

I think it's easier than it might seem to do something net negative even ignoring opportunity cost. For example, actively compete with some other better project, interfere with politics or policy incorrectly, create a negative culture shift in the overall ecosystem, etc.

Besides, I don't think the attitude that our primary problem is spending down the money is prudent. This puts the cart before the horse, and, as Habryka said, might lead to people asking "how can I spend money quickly?" rather than "how can I ambitiously do good?" EA certainly has a lot of money, but I think people underestimate how fast $50 billion can disappear if it's mismanaged (see, for an extreme example, Enron).

Replies from: Yitz
comment by Yitz · 2022-05-11T13:16:20.078Z · EA(p) · GW(p)

That’s a fair point, thank you for bringing that up :)

comment by Patrick Gruban (gruban) · 2022-05-11T04:51:48.262Z · EA(p) · GW(p)

I share your worries about the effects on culture. At the same time I don't see this vision as bad:

For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?", and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.

Imagine a global health charity that wants to get on the GiveWell Top Charities list. Wouldn't we want it to spend much time thinking about how to get there, ultimately changing the way it works in order to come up with the evidence needed to get included? For example, Helen Keller International was founded more than 100 years ago and its vitamin A supplementation program is recommended by GiveWell. I would love to see more external organisations change in order to get EA grants instead of us trying to reinvent the wheel where others might already be good.

Organisations getting started or changing based on the available funding of the EA community seems like a win to me. As long as they have a mission that is aligned with what EA funders want and they are internally mission-aligned we should be fine. I don't know enough about Anthropic for example but they just raised $580M mainly from EAs while not intending to make a profit. This could be a good signal to more organisations out there trying to set up a model where they are interesting to EA funders.

In the end, it comes down to the research and decision-making of the grantmaker. GiveWell has a process where they evaluate charities based on effectiveness. In the longtermism and meta space, we often don't have such evidence, so we may sometimes rely more on the value alignment of people. Ideally, we would want to reduce this dependence and see more ways to independently evaluate grants regardless of the people getting them.

Replies from: Charles He
comment by Charles He · 2022-05-11T07:57:43.406Z · EA(p) · GW(p)

I was going to write an elaborate rebuttal of the parent comment. 

In that rebuttal, I was going to say there's a striking lack of confidence here. The concerns seem like a pretty broad argument against building any business or non-profit organization with a virtuous culture. There are many counterexamples against this argument, and most have the additional burden of balancing that growth while tackling existential issues like funding.

It's also curious that corruption and unwieldy growth has to set in exactly now, versus, say, with the $8B in 2019.

 

I don't know enough about Anthropic for example but they just raised $580M mainly from EAs while not intending to make a profit. This could be a good signal to more organisations out there trying to set up a model where they are interesting to EA funders.

Now I sort of see how, combined with several other factors, maintaining culture and dealing with adverse selection ("lemons") might be an issue.

comment by GMcGowan · 2022-05-11T16:11:46.931Z · EA(p) · GW(p)

there will be someone in the world whose full-time job and top-priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status

 

IMO, a reasonable analogy here is to the relationship between startups and VCs.

What do VCs do to weed out the lemons here? Market forces help in the long run (which we won't have to the same degree) but surely they must be able to do this to some degree initially.

comment by MaxRa · 2022-05-11T13:42:09.066Z · EA(p) · GW(p)

I think I'm less worried about the risk of increased deception.

you won't have the time for long 10-hour conversations when you hang out in the evening.

The analogy breaks down somewhat because the number of 10-hour conversations also scales with the size of the movement, right? And I think it's relatively discernible whether somebody actually cares about doing good when you talk to them a lot. I don't think you need to be a particularly senior EA to notice altruistic and impact-driven intentions.

we could actually bring any measurable fraction of humanity's ingenuity and energy to bear on preventing humanity's extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.

Additionally, I'm less worried because I think most people actually also care about doing good and doing things efficiently. EA will still select for people who are less motivated to work in industry, where I expect wages to still be higher for somebody capable enough to scheme up a great grant proposal.

comment by Ulrik Horn · 2022-05-11T09:38:45.626Z · EA(p) · GW(p)

Very good point on culture. Culture eats strategy for breakfast as they say. EA is definitely strategy heavy and I think your comment brings up a very important issue to investigate.

comment by Rob Mitchell · 2022-05-11T06:31:50.783Z · EA(p) · GW(p)

For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?"

I would flip this and say, it's inevitable that this will happen, so what do we do about it? There are areas we can learn from:

  • Academia, as you mention - what do we want to avoid here? Which bits actually work well?
  • Organisations that have grown very rapidly and/or grown in a way that changes their nature. On a for-profit basis - Facebook as a cautionary tale of what happens when personal control and association isn't matched with institutional development? On a not-for-profit basis - I work for Greenpeace and we're certainly very different to what we were decades ago, with a mix of 'true believers' and people semi-aligned, generally in more support roles. Some would say we've sold out, and indeed some people have abandoned us for other groups that are more similar to our early days, but we certainly have a lot more political influence than we did when we were primarily a direct action / protest group.
  • Corruption studies at a national level. What can we learn of the institutions of very low corruption countries e.g. in Scandinavia that we might adapt? 
Replies from: nathan
comment by Nathan Young (nathan) · 2022-05-12T09:06:58.542Z · EA(p) · GW(p)

I think the question is predictivity. How can you run the most predictive systems possible for selecting good grants/employing suitable people? 

I guess over time, networks will be worse predictors and the average trustworthiness of applicants will fall slightly, to which we should respond accordingly.

Though I guess we have to acknowledge that some grants will be misspent and that the optimal number of bad grants may not be 0.

Replies from: Rob Mitchell
comment by Rob Mitchell · 2022-05-12T11:50:02.887Z · EA(p) · GW(p)

Definitely agree that networks will become worse predictors and ultimately grants, job offers etc. will become more impersonal. This isn't entirely a bad thing. For example personal and network-oriented approaches have significant issues around inclusivity that well-designed systems can avoid, especially if the original network is pretty concentrated and similar (see: the pic in the original post...)

As this happens this may also mean that over time people who have been in EA for a while may feel that 'over time the average person in the movement feels less similar to them'. This is a good thing!... if recognised, and well-managed, and people are willing to make the cognitive effort to make it work. 

comment by RedStateBlueState · 2022-05-11T06:03:09.606Z · EA(p) · GW(p)

If you want to get a lot of money for your project, EA grants are not the way to do it. Because of the strong philosophical principles of the EA community, we are more skeptical and rigorous than just about any funding source out there. Granted, I don't actually know much about the nonprofit grant space as a whole: if it comes to the point that EA grants are basically the only game in town for nonprofit funding, then maybe it could become an issue. But if that becomes the case I think we are in a very good position and I believe we could come up with some solutions.

Replies from: Habryka
comment by Habryka · 2022-05-11T06:41:31.821Z · EA(p) · GW(p)

Almost all nonprofit grants require everyone to take very low salaries. There are very few well-paying nonprofit projects. My guess is EA is the most widely-known community that might pay high salaries for relatively illegible nonprofit projects (and maybe the only widely-known funder/community that pays high salaries for nonprofit projects in general).

comment by Nathan Young (nathan) · 2022-05-10T13:11:01.313Z · EA(p) · GW(p)

Thanks for writing this, Will. I appreciate the honesty and ambition. Thank you for all you do and I hope you have people around you who love and support you.

I like the framing of judicious ambition. My key question around this and the related longtermism discussion is something like, What is the EA community for?

  • A democratic funding body?
  • A talent pool?
  • Community support?
  • Error checkers?

Are we the democratic body that makes funding decisions? No and I don't want us to be. Doing the most good likely involves decisions that the median EA will disagree with. I would like to trial forecasting funding outcomes and voting systems, but I don't assume that EA should be democratic. The question is what actually does the most good.

Are we a body of talented professionals who work on lower wages than they otherwise would? Yes, but I think we are more than that. Fundamentally it's our work that is undervalued, rather than us. Animals, the global poor and future generations cannot pay to save their own lives, so we won't be properly remunerated, except by the joy we take from doing it.

Are we community support for one another? Yes, and I think in regard to this dramatic shift in EA's fortunes that's particularly important. It is weird to move from scarcity to abundance. I wasn't around in the EArly days, but I was working on cash-strapped community projects and it is good to take that discussion seriously.

Are we error checkers for community decisions? Maayybee. For me, this is one where we could do better. Currently, you either get sent the important google docs or you don't. While I don't think the community should make decisions, there is a lot of value in letting it look them over. Largely, though, I don't think the technology required to synthesise and correct huge amounts of information as an informal community exists yet. Still, given the potential value, it is worth looking into.

What is the EA community for? And what infrastructure is worth building to support that function as we scale in the way this post describes?

Replies from: Khorton, PhilippaE
comment by Khorton · 2022-05-10T16:52:58.780Z · EA(p) · GW(p)

I think this could be a standalone post

Replies from: vaidehi_agarwalla, nathan, captainjc
comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2022-05-10T18:22:51.505Z · EA(p) · GW(p)

(fwiw strongly agree! Even in its current form I think it would start a really interesting / valuable discussion)

comment by Nathan Young (nathan) · 2022-05-10T17:08:54.885Z · EA(p) · GW(p)

Yeah, I thought I'd test the waters then write it up if people thought it was valuable.

Do you have any other frames for what the community might be?

Replies from: vaidehi_agarwalla, Khorton
comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2022-05-10T18:20:35.612Z · EA(p) · GW(p)

I think talent pool doesn't /quite/ capture things like entrepreneurs and field builders and people who are building things.

Error checkers is good, but also something like finding cause x isn't quite error checking. It's more like avoiding the risk of omission kind of stuff / rethinking fundamental assumptions.

I think my problem with this question, which I've been thinking about in maybe different ways for many years, is that there isn't a binary between community member and [core ea decision maker / etc.] - it's very much a continuum and people occupy multiple roles at once. This becomes complicated because you can't quite set boundaries the same way.

For example, if a community member starts their own grantmaking foundation, are they still a community member or a decision maker? Does it only count if they know the right people or are in a certain cause area? How do their responsibilities change (or not change)?

comment by Khorton · 2022-05-10T17:12:21.571Z · EA(p) · GW(p)

Researchers? People who develop and test different approaches to doing good at a rapid pace

comment by Jeremy (captainjc) · 2022-05-18T15:24:46.677Z · EA(p) · GW(p)

Strongly agree as well!

comment by PhilippaE · 2022-05-12T08:48:53.302Z · EA(p) · GW(p)

I see a big role of the community as being the facilitator of coordination. Tight coordination might enable more and better projects to be done, with less redundancy. It might also reduce beneficial competition & diversity of projects and ideas if people don't pursue good projects because someone else is already in that space. Career advising seems like a good example of where different approaches can usefully coexist.

comment by Abby Hoskin (AbbyBabby) · 2022-05-10T03:13:38.577Z · EA(p) · GW(p)

Thanks for this write up, Will! I hope it changes the minds of people who are skeptical/unhappy about our massive funding influx. 

I think a lot of EAs are not motivated to seek personal financial rewards; instead, they find themselves seeking truth in graduate school/academia or trying to improve the world via non-profits. They see their similarly intelligent, well-educated peers go into industry, optimizing for "make as much money as possible", and they just fundamentally do not relate to that value function. I wonder if this kind of personality type (if you can call it that) lies at the root of a lot of people's discomfort with EA non-profit jobs suddenly paying really well.

Maybe we could offer special community building grants with the option that you will work in a basement, subsisting only on baguettes and hummus? ;)

Replies from: Rob Mitchell
comment by Rob Mitchell · 2022-05-10T09:07:34.378Z · EA(p) · GW(p)

I agree (and have formerly resembled this type...)  This is quite embedded in a lot of nonprofit culture. Part of it is what motivates the individual and their personality, part of it is the concept of supporters' money. 'Would the person who gave you £5 a month want you to be spending your money on that?' In practice this leads to counterproductive underspending. I remember waiting weeks to get maybe £100 worth of extra memory so I could crunch numbers at a reasonable speed without crashing the computer. The concept of taxpayers' money works similarly. 

There's probably a good forum post in there somewhere about how the psychology of charity affects perceptions of EA...

comment by JoelMcGuire · 2022-05-12T20:48:42.083Z · EA(p) · GW(p)

Creating projects that are maximally cost-effective is now comparatively less valuable; creating projects that are highly scalable with respect to funding, and can thereby create greater total impact even at lower cost-effectiveness, is comparatively more valuable. 

I think this framing is wrong, or at best unhelpful because we shouldn’t avoid prioritizing cost-effectiveness. When you stop prioritizing cost-effectiveness, it stops being effective altruism. Resources are still finite. The effectiveness of solutions to dire problems still differs dramatically. And we have only scratched the surface of understanding which solutions are gold and which are duds. I think it’s cost-effectiveness all the way down. 

I hope Will doesn’t mean “creating maximally cost-effective projects is now less valuable” when he says “creating maximally cost-effective projects is now less valuable”. I hope Will means “We should use average cost-effectiveness instead of marginal cost-effectiveness because cost-effectiveness often decreases with more funding. This means that some projects which were more cost-effective at small levels of funding will become less cost-effective at larger levels of funding, which will shift our priorities.” I hope he means that, because I think that’s the correct take.

To illustrate, imagine there are two projects, A and B, and we have to allocate all of our funds to one or the other. Project A is more cost-effective if our funding supply is limited. This could be treating an incredibly painful but rare disease, where the cost to find potential patients quickly rises as you run out of people to treat. Then there's project B, which is about as cost-effective regardless of how much money you spend. A classic example of a "project B" type project is cash transfers.

The figures below depict this. The first is a figure lent to me by Michael Plant. I also attach my less clear hand-drawn figure.


As EA acquires more funds, creating and funding maximally cost-effective projects is just as valuable. But our heuristics for cost-effectiveness will change. Instead of asking, “what’s the average cost-effectiveness of spending $10,000 on project A and B”, which would favor A,  we should ask “what’s the cost-effectiveness of spending $10,000,000 on project A and B”, which would favor project B.

Cost-effectiveness changes with the supply of funding
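The marginal-vs-average point above can be made concrete with a toy model (all numbers here are illustrative assumptions, not figures from the comment): project A has strongly diminishing returns, project B scales linearly, so which one wins on total impact depends on the size of the budget.

```python
import math

def impact_a(dollars):
    """Toy project A: highly cost-effective at small scale, with strongly
    diminishing returns (e.g. treating a painful but rare disease)."""
    return 200 * math.sqrt(dollars)

def impact_b(dollars):
    """Toy project B: constant cost-effectiveness at any scale
    (e.g. cash transfers) -- 0.5 units of impact per dollar."""
    return 0.5 * dollars

def best_project(budget):
    """Allocate the whole budget to whichever project yields more total impact."""
    return "A" if impact_a(budget) > impact_b(budget) else "B"

# At a small budget, A's average cost-effectiveness dominates...
print(best_project(10_000))       # -> A
# ...but at a large budget, the linearly scaling project wins.
print(best_project(10_000_000))   # -> B
```

In this toy model the crossover sits at a $160,000 budget (where 200·√x = 0.5·x): below it project A is the better home for the whole budget, above it project B is, which is exactly the shift in heuristics the comment describes.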
comment by Rob Mitchell · 2022-05-10T08:45:19.577Z · EA(p) · GW(p)

Really interesting, and something I'll need to come back to. Just to pick out one bit:

Often, it’ll involve people doing things that just aren’t that enjoyable: management and scaling organisations to large sizes are rarely people’s favourite activities; and, it will be challenging to incentivise enough people to do these things effectively.

I've seen variations on this theme in a few posts, and it doesn't resonate with my own experience. In a genuinely influential management/ops role, there's a great deal of satisfaction to be had in seeing your organisation become more effective - if what that organisation is doing is highly worthwhile. I worry a bit that the tone of 'yeah this isn't glamorous, but someone has to do it' is putting off talent in the area. If an attraction to EA is around doing the most good, and this area is a bottleneck, there seem much more positive framings to be used. 

One other question - I've seen quite a few posts trying to set what to do with EA's increased resources through inductive reasoning. I've seen less around examining what others have done successfully or unsuccessfully in terms of embedding sustainable growth and development (e.g. Singapore), managing  very large amounts of money effectively (e.g. Norway's sovereign wealth funds), or increasing the ability to spend money quickly and well (e.g. getting 'shovel ready' engineering plans ready to go), and seeing what lessons can be drawn. None of those map on perfectly to EA's situation, but they should be instructive. Is this research happening, and if so, how are the conclusions being brought together and acted on?

Replies from: gruban, Alex Catalán Flores, Charles He
comment by Patrick Gruban (gruban) · 2022-05-10T10:25:43.608Z · EA(p) · GW(p)

I was also surprised to be seeing management and scaling organisations described as "rarely people’s favourite activities", this seems to be a strong claim. For me, it's the most motivating activity and I'm trying to find an organisation where I can contribute in this area.

Replies from: Khorton
comment by Khorton · 2022-05-10T16:51:16.320Z · EA(p) · GW(p)

Agreed, I love management and improving organisational systems, and was really surprised by this comment!

comment by Alex Catalán Flores · 2022-05-10T13:44:16.466Z · EA(p) · GW(p)

Couldn't agree more, Rob. Perhaps my perception is coloured by my own experience and circle of friends, but there certainly seems to be a subset of people out there who genuinely enjoy scaling organisations. I think this is particularly the case in the for-profit sphere, where feedback loops are sometimes instantaneous thus leading to increased satisfaction among the scale-up types.  

comment by Charles He · 2022-05-10T17:14:43.756Z · EA(p) · GW(p)

As a caution, onlookers should know that there tends to be a large supply of would-be management advice or scaling advice whose quality is often mixed. This is because:

  • It is attractive to supply because this advice is literally executive or senior managerial work, so it appears high status/impact/compensation.
  • It is attractive to supply because it slots into organizations where the hard operational work and important niches have often already been developed successfully. In reality, it is often this object-level activity that is hard and in "short supply".
    • Even in successful organizations, staff are often working around management/the CEO or succeeding at their tasks despite leadership (it's not that leadership is bad; it's that it provides many things in complicated ways).
  • Like other meta work, it can be (extremely) difficult to understand if you're good or bad. In particular, for scaling, the feedback loops can be very long.
  • Like other meta work, there can only be so many "cooks in the kitchen". In general, it is normal to scale or add more object-level work, but for meta or managerial work the slots are limited (think of the reasons orgs only have 1-2 CEOs) and more management can be negative.

To see this, look at the general opinion of "management consulting" and how rarely these services are actually used by small, highly effective companies and organizations that I think are similar in profile to EA orgs. 

I suspect that when they are used, it's because of great respect and trust for specific principals, and not because "management" can be easily sprinkled onto an existing organization.

 

Another issue is that "management" is a word that means many different things. As a very positive thing, and to an unusual degree, even junior EAs perform major management roles in EA organizations.

Maybe the source of more management talent or activity would be to promote or "tap on the shoulder" EAs already inside organisations. There should be caution about creating external processes.

 

(The following sentence risks its own misreading,) but it's possible the OP meant that, given the necessary quality/culture/approach/familiarity, there is a shortage of specific, action-ready and trusted talent whose deployment has high upsides and low downsides.

Replies from: Rob Mitchell
comment by Rob Mitchell · 2022-05-10T18:16:51.826Z · EA(p) · GW(p)

It's useful to separate out consultancy/advice-giving versus the actual doing. I would say though that a successful management/operations setup should be able to at least ameliorate the feedback issue you mention (e.g. by identifying leading and/or more quickly changing metrics that are aligned and gaining value from these). 

Replies from: Charles He
comment by Charles He · 2022-05-10T18:40:02.114Z · EA(p) · GW(p)

I think your comment and sentiment is great. My response wasn't directly related.

I guess I'm more concerned about "bycatch" or overindexing. For example, activity and discussions that are wobbly about getting into management and scaling, in a "Great Leap Forward" sort of style.

 

Honestly, the root issue here is that I have some distrust related to the causes and processes behind this post and the NB post, all of which seem to be related to discussion and concerns that originated on, or closely involve, the EA Forum. I don't think these have the best relationship to reality[1]. It seems healthy for the issues to settle down.

  1. ^

    I think the discourse on funding/optics has been slightly defective or tinged. This caused Will and SBF to pop onto the forum. This presence is fantastic, great and should continue, but maybe in this instance different processes or events could have occurred, so they could have used their valuable time and public presence to communicate to EA about something else.

comment by Linch · 2022-05-13T19:33:33.386Z · EA(p) · GW(p)

I know that they very much are not doing it for the gratitude, but I still want to express my thanks to Sam and the other core members of Alameda and FTX, as well as others earning-to-give, for making the current funding situation possible. I think earning-to-give has been downplayed in recent years in EA, and it seems like a thankless job to work crazy hours and take crazy risks for many years, doing frequently weird, uninteresting, or frustrating work, just to donate it in service of making the future better.

My own organization and I'm sure that of many others in this movement have and will benefit greatly from this influx of funding. But as the post says, this is a huge responsibility and we should not bear it lightly.

I hope that the large donors in EA will continue to be responsible stewards of the resources they've gathered. I hope too,  that the rest of us in the EA community will become strong enough to execute well on the most important priorities, to build a safe and flourishing future for our grandchildren's grandchildren.

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-10T17:37:00.022Z · EA(p) · GW(p)

Quantitatively: suppose $200 is spent on an intro to EA retreat for someone. If that has a more than one in five hundred thousand chance of inspiring the attendee to earn to give and successfully donate $100 million over their lifetime, then the expected financial benefit is positive. Given the successes we’ve seen, both from FTX and outside of that, the real probability is orders of magnitude larger. That’s not to say $200 on a retreat is how much should be spent — if you can have the same impact at cheaper cost, you should. And excessive spending can even become counterproductive if it sends the wrong message. But it indicates just how small community-building spending is in comparison to the potential benefits from changing people’s careers for the better.

This is a bit of a nit, but $200 is very low here. You're likely to spend that much per attendee on housing alone, before considering other costs like organizer time or covering transportation.
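The break-even arithmetic in the quoted passage is straightforward to check (the $200, one-in-500,000, and $100M figures are all from the quote; this just carries out the expected-value multiplication):

```python
cost_per_attendee = 200            # retreat spend per attendee, from the quote
odds = 500_000                     # one-in-500,000 chance, from the quote
lifetime_donations = 100_000_000   # $100M donated over a lifetime, from the quote

# Expected financial benefit of the retreat spend on one attendee
expected_benefit = lifetime_donations / odds
print(expected_benefit)            # -> 200.0, exactly break-even with the cost
```

At any probability above one in 500,000, the expected benefit exceeds the $200 cost, which is the quoted passage's point; the nit above is that the true per-attendee cost is higher than $200, which raises the break-even probability proportionally.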

Replies from: AbbyBabby
comment by Abby Hoskin (AbbyBabby) · 2022-05-12T02:24:44.182Z · EA(p) · GW(p)

I had the same exact reaction! "Only $200 for one attendee? In this economy? What is that, 20 bananas?" 
 

Replies from: BenStewart
comment by Benjamin Stewart (BenStewart) · 2022-05-13T12:10:23.553Z · EA(p) · GW(p)

Haha what a crossover!

comment by lukeprog · 2022-05-11T02:26:44.195Z · EA(p) · GW(p)

a Christian EA I heard about recently who lives in a van on the campus of the tech company he works for, giving away everything above $3000 per year

Will this person please give an in-depth interview on some podcast? Could be anonymous if desired.

Replies from: GMcGowan
comment by GMcGowan · 2022-05-12T16:46:32.702Z · EA(p) · GW(p)

That person is Oliver Yeung and he has done a two part talk where he discusses this - main talk, Q&A.

(I spoke to him to okay sharing these, if any interviewer wants to speak to him then DM me and I can put you in touch)

comment by keith_wynroe · 2022-05-10T20:29:06.596Z · EA(p) · GW(p)

Thanks for writing this, really great post. 

I don't think this is super important, but when it comes to things like FTX I think it's also worth keeping in mind that besides the crypto volatility and stuff there's also the fact that a lot of what we're marking EA funding to aren't publicly-traded assets, and so numbers should probably be taken with an even bigger pinch of salt than usual. 

For example, the numbers for FTX here are presumably backed out of the implied valuation from its last equity raise, but AFAIK this was at the end of January this year. Since then Coinbase (probably the best publicly traded comparator) stock has fallen ~62% in value, whereas FTX's nominal valuation hasn't changed in the interim since there hasn't been a capital raise. But presumably, were FTX to raise money today, the implied valuation would reflect a somewhat similar move.

Not a huge point, and in any case these kinds of numbers are always very rough proxies anyway since things aren't liquid, but I think maybe worth keeping in mind when doing BOTECs for EA funding

comment by Tessa (tessa) · 2022-05-11T17:51:14.785Z · EA(p) · GW(p)

I logically acknowledge that: "In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances... It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics."

I know that, but... I care about my aesthetics.

For nearly everyone, I think there exists a level of extravagance that disgusts their moral aesthetics. I'm sure I sit above that level for some, with my international flights and two $80 keyboards. My personal aesthetic disgust triggers somewhere around "how dare you spend $1000 on a watch when people die of dehydration". Giving a blog $100,000 isn't quite disgusting, yet, ew?

The post I've read that had the least missing mood around speculative philanthropy was probably the So You Want To Run A Microgrants Program retrospective on Astral Codex Ten, which included the following:

If your thesis is “Instead of saving 300 lives, which I could totally do right now, I’m gonna do this other thing, because if I do a good job it’ll save even more than 300 lives”, then man, you had really better do a good job with the other thing.

I like the scenario this post gives for risks of omission: a giant Don't Look Up asteroid hurtling towards the earth. I wouldn't be mad if people misspent some money, trying to stop it, because the problem was so urgent. Problems are urgent!

...yet, ew? So many other things look kind of extravagant, and they're competing against lives. I feel unsure about whether to treat my aesthetically-driven moral impulses as useful information about my motivations vs. obviously-biased intuitions to correct against.

(For example, I started looking into donating a kidney a few years ago and was like... man, I could easily save an equal number of years of life without accruing 70+ micromorts, but that's not nearly as rad? Still on the fence about this one.)

[crosspost from my twitter]

comment by MichaelPlant · 2022-05-12T16:47:50.625Z · EA(p) · GW(p)

Will, thanks very much for writing this. It's great to be having this discussion and to see the major players are thinking hard about this. I wanted to raise a couple of issues that merit reflection but haven't (AFAIT) been made so far.

You note that EA has gone from a few guys in a basement to commanding serious funding. But, what might the future of EA be? Where could it be in another 10 years? There could be 10x, or even 100x, of relevant funding. In line with the idea of judicious ambition, how should we be planning for it? Who should be planning for it?

Related to this, how much, and what type, of centralisation and governance are optimal across the various bits of the movement? One thing that strikes me is that 'EA resources' are very centralised: there are only a few major donors, advisors to those donors, and leaders of key organisations, and all those people know each other. What's more, lots of decision-making happens privately. All of this clearly has some major advantages, such as speed and coordination; it's appropriate, given it's about private individuals spending their money; it's also pretty unsurprising that this has happened because EA started so recently.

But, as EA 'grows up', should it transition to operating in some different ways? Some of the risks you flag - reduction in quality of thought, resentment, and the loss of evolutionary forces - seem to stem, at least in part, from this dynamic.

What would the ideal structure be? If I do a Bostromian-Ordian 'reversal test', I wouldn't want to see all 'EA resources' and decision-making concentrated in the hands of one person, no matter who it was. I'm not sure how far the other way would be best, but it seems worth reflecting on.

comment by Ivy_Mazzola · 2022-05-12T00:47:06.924Z · EA(p) · GW(p)

[EDIT 2: Based on the responses I received, I am probably wrong here. I will probably delete my portion eventually to not deter future readers from applying for grants. Leaving it up for a while longer for epistemic reasons. FYI that reading this thread might be a poor use of time.]

Side note: I see you mention multiple times that community building is a good use of money, and I agree, but that hasn't been what I have been seeing EAIF, the primary funder for CB work, go by. It is possible you are not using the term community building in the way I think of it, but:

Embarrassing context: I was refused by EAIF for FTE salary to do CB (in Austin, TX), and in response I: talked to many community builders, looked at past grant reports, dug through all the EA groups resources and relevant forum posts I could find, and spent most of my EAG London in 1:1s trying to understand. Result: It seems that funding for full-time community-building salary is not really a thing that happens (at least in America, outside of priority cities: NYC, DC, Boston, and the bay).

This to me says that funders (EAIF?) simply don't believe in CB. Personally, I think that if something is worth doing at all, it is:
1. worth doing full-time (if the person leading the project thinks so)
2. bad to limit candidates only to those who would work part-time
3. worth doing sooner than later (and information would be yielded sooner from full-time work)

I'm not complaining (really, I got another opportunity). But honestly, I am semi-replaceable in the coming role (better than the second-best hire), whereas community building would have been a pure counterfactual. No one will be doing CB in Austin when I go, and it doesn't seem that EAIF's funders thought that was a loss worth preventing.

TLDR; I agree that CB could absorb so much funding, and I also agree that, except for large risks, the lean should be toward funding CB projects, to gain information at least. But it doesn't look like EAIF thinks so.

[EDIT 1: This comment is getting some downvotes, and maybe that's appropriate so it doesn't rise too high, as it is off-topic. But if the downvotes are over differences in perspective, feel free to DM. 
Also: Maybe EAIF is right to be very conservative or unexcited about fulltime CB work. Or maybe I'm wrong that they feel that way.
But I like the judiciousness and ambition framework, and I think it is valuable for people to speak up where they may have noticed a pervasive imbalance. 
I'll also add that I probably never would have spoken up, if I weren't culturally allowed to drop it as a comment, so yeah thanks for reading.] 

Replies from: Michelle_Hutchinson, Charles He, casebash
comment by Michelle_Hutchinson · 2022-05-14T09:44:17.892Z · EA(p) · GW(p)

[I’m an EAIF grant manager, but I wasn’t involved in this particular grant.]

I’m sorry you’ve been having a frustrating time in your community building work. As you say, rejections sting even in the best of circumstances, particularly when it feels counter to the narrative being portrayed of there being funding available. Working hard to help others is difficult enough without feeling that others are refusing to support you in it.

It seems very difficult to me to accurately represent in advance what kinds of community building EAIF is and isn’t keen to fund, because it depends on a lot of details about the place/person/description of activities planned. Having said that, I’m keen to avoid people getting a false impression of our priorities. I wanted to clarify that we are in fact keen to fund full-time community builders outside of existing EA hubs.

It happens that the majority of past requests we’ve had for full time positions in the US have come from Boston/NY/SF/Berkeley. We’ve received a number of applications for full time community builders in non-hub cities in the rest of the world though. For example we’ve funded full time community builders in Italy, the Philippines, Denmark and the Czech Republic. 

I think a reason it looks like we prefer funding people part time is that we fund quite a bit of university community building. Doing that is often most suited to students at that university, who are therefore only able to do community building part time. 

We’ve tried to keep our application form short to make it feel low cost to apply. I’d be keen for people to put in speculative quick draft applications to see whether you might be a good fit for an EAIF grant!

[Edited for clarity]

Replies from: Ivy_Mazzola
comment by Ivy_Mazzola · 2022-05-14T17:27:09.326Z · EA(p) · GW(p)

Thanks for saying that. I understand that grantmaking is complex and that some CB plans simply won't be right to encourage. But I still don't really feel this changes my expectation around community building being funded full-time. Some questions that would go a long way toward correcting this impression if answered:

(FWIW I feel weird posting this publicly, [EDIT: and I don't necessarily think you/EAIF should be expected to respond here] but I think it is important to ask these questions)

[EDIT: Also reading all this is probably not a productive activity for people who don't work in CB or grantmaking]

  1. Can you share how many of the organisers in those non-American areas (Italy, Denmark, Czech Republic, Philippines) were funded for FTE only after doing PTE first? I know at least that in the Philippines, some organizers were funded for part-time work first. I also remember reading one post by a part-time organizer (I think in the Philippines) reflecting on that dilemma. They were lucky that their full-time job let them reduce to part-time, but they wondered whether they should just quit their part-time job and work the remaining hours as EA Philippines hours for free because there was so much to be done. This was very shocking to me. So, I'm glad to hear these areas have full-time organizer(s), but I wonder if that would have happened if a volunteer organizer came to you and said, "I want to make this my career, but I need to do it full-time and here are reasons I think it is impactful and relatively easy to prove out."
  2. Those full-time community builders you mentioned are still not in American cities. Can you by chance estimate how many applications you have gotten for full-time community building in American cities (excluding the 4 priority cities that are now funded by CEA's Community Building Grants program rather than EAIF)? My assumption is that you denied all of them, but I'd definitely be interested to hear if you approved any where the applicant didn't take the role after all!
  3. (Not really a question) I would really appreciate it if you would eventually publish the types of regional community organizing and programming you are excited to fund. You can still encourage submitting innovative ideas you may not have even thought of yet, but if you list established activities you like, the assumption can be that a lot of default community organizing activities don't make the cut (fair, and I agree with this). I think clarifying this would have a big impact by helping people create more impactful CB plans and making CB seem more skill-based and less vague, and therefore a more desirable career path. The approval would still be contingent on the area and the applicant's strength, but this would go so far. And if it is just an applicant's plans that aren't quite right, you can advise them to revise and resubmit very easily.

Other grantees with good ideas are encouraged to work on them full-time. And while I appreciate your response, I still don't get this vibe around CB. I think this is bad because I think that 1 FTE is usually worth much more than 2x0.5 FTEs. It is hard to do good work working part-time, and it is especially hard when you know that the people who are supposed to be better at evaluating good work than you don't believe in you or your work enough to encourage you to do it full-time.

Some more details (all events in the past 1-3 months):
-I realize that students can only do PTE and didn't lump University organizing in when forming this opinion. 
-one community organizer in DC did tell me that my few months of doing unpaid CB part-time was nowhere near enough to go straight into paid full-time work, although I wasn't working at that time and was spending about 20 hours a week on CB and getting up to speed studying CB from EA and non-EA sources. I also had pre-covid CB experience.
-I shared my city plans at EAG and others in the field thought they were good. I do now think they could have been better, as I kept learning in the last month, but I also would have come to that conclusion while doing the work as well. 
-I know personally of at least one big US city (much bigger than Austin) which was denied for FTE and a month later approved for PTE, though I think their plans may have improved in the interim.

Replies from: Max_Daniel
comment by Max_Daniel · 2022-05-15T20:03:11.308Z · EA(p) · GW(p)

Hi, EAIF chair here. I agree with Michelle's comment above, but wanted to reply as well to hopefully help shed more light on our thinking and priorities.

As a preamble, I think all of your requests for information are super reasonable, and that in an ideal world we'd provide such information proactively. The main reason we're not doing so is capacity constraints.

I also agree it would be helpful if we shared more about community building activities we'd especially like to see, as Buck did here [EA · GW] and as some AMA questions [EA · GW] may have touched upon. Again, the reason we're not doing so is that we need to focus our limited capacity on other priorities, such as getting back to applicants in a reasonable timeframe.

I should also add that I generally think that most of the strategizing about what kind of community building models are most valuable is best done by organizations and people who (unlike the fund managers) focus on the space full time – such as the Groups team at CEA [EA · GW], Open Phil's Longtermist EA Movement Building Team [EA · GW], and the Global Challenges Project. Given the current setup of EA Funds, I think the EAIF will more often be in a role of enabling more such work. E.g., we funded the Global Challenges Project multiple times. Another thing we do is complementing such work by providing an additional source of funding for 'known' models. Us providing funding for university and city groups outside of the priority locations that are covered by higher-touch programs by other funders is an example of the latter.

(I do think we can help feed information into strategy conversation by evaluating how well the community building efforts funded by us have worked. This is one reason why we require progress reports, and we’re also doing more frequent check-ins with some grantees.)

To be clear, if someone has an innovative idea for how to do community building, we are excited and able to evaluate it. It’s just that I don’t currently anticipate us doing much in the vein of coming up with innovative models ourselves.


A few thoughts on your questions:

I wonder if that would have happened if a volunteer organizer came to you and said, "I want to make this my career, but I need to do it full-time and here are reasons I think it is impactful and relatively easy to prove out."

We would be excited to receive such applications.

We would then evaluate the applicant's fit for community building and their plans, based on their track record (while keeping in mind that they could devote only limited time and attention to community building so far), our sense of which kinds of activities have worked well in other comparable locations (while remaining open to experiments), and usually an interview with the applicant.

Can you by chance estimate how many applications you have gotten for full-time community building in American cities?

Unfortunately our grant database is not set up in a way that would allow me to easily access this information, so all I can do for now is give a rough estimate based on my memory, which is that we have received very few such applications.

In fact, apart from your application, I can only remember three applications for US-based non-uni city group organizing at all. Two were from the same applicants and for the same city (the second application was an updated version of a previous application; the first one had been unsuccessful, while we funded the second one). The other applicant wants to split their time between uni group (70%) and city group community building (30%). We funded the first of these; the second one is currently under evaluation.

(And in addition there was a very small grant to a Chicago-based rationality group, but here the applicant only asked for expenses such as food and beverages at meetings.)

It's possible I fail to remember some relevant applications, but I feel 90% confident that there were at most 10 applications for US-based full-time non-uni community building since March 2021, and 60% confident that there were at most 3.

(I do think that in an ideal world we'd be able to break down the summary statistics we include in our payout reports [EA(p) · GW(p)] – number of applications, acceptance rate, etc. – by grant type. And so e.g. report these numbers for uni group community building, city group community building, and other coherent categories, separately. But given limited capacity we weren't able to prioritize this so far, and I'm afraid I'm skeptical that we will be able to any time soon.)

I know personally of at least one big US city (much bigger than Austin) which was denied for FTE and a month later approved for PTE, though I think their plans may have improved in the interim.

Was this at the EAIF? I only recall the case I mentioned above: One city group who originally applied for part-time work (30h/week spread across multiple people), was unsuccessful, updated their plans and resubmitted an application (still for part-time work), which then got funded.

It's very possible that I fail to remember another case though.

I think that 1 FTE is usually worth much more than 2x0.5 FTEs.

I generally agree with this.

FWIW a community organizer in DC did tell me that my few months of doing unpaid CB part-time was nowhere near enough to go straight into paid full-time work, although I wasn't working at that time and was spending about 20 hours a week on CB and getting up to speed studying CB from EA and non-EA sources.

I can't speak for that DC organizer (or even other EAIF managers), but FWIW for me the length of someone's history with community building work is not usually a consideration when deciding whether to fund them for more community building work – and if so, whether to provide funding for part-time or full-time work.

I think someone's history with community building mostly influences how I'm evaluating an application. When there is a track record of relevant work, there is more room for positive or negative updates based on that, and the applicant's fit for their proposed work is generally easier to evaluate. But in principle it's totally possible for applicants to demonstrate that they clear the bar for funding – including for full-time work – otherwise, i.e., by some combination of demonstrating relevant abilities in an interview, having other relevant past achievements, proposing well thought-through plans in their application, and providing references from other relevant contexts. 

I think part-time vs. full-time most commonly depends on the specific situation of the application and the location – in particular, whether there is 'enough work to do' for a full-time role. (In the context of this post, FWIW I think I agree that often an ambitious organizer would be able to find enough things to do to work full time, which may partly involve running experiments/pilots of untested activities.)

Another consideration can sometimes be the degree of confidence that a candidate organizer is a good fit for community building. It might sometimes make sense to provide someone with a smaller grant to get more data on how well things are going – this doesn't necessarily push for part-time funding (as opposed to full-time funding for a short period), but may sometimes do so. One aspect of this is that I worry more about the risk of crowding out more valuable initiatives when an organizer is funded full-time for an extended period. I think this sends a stronger implicit message to people in that area that valuable community building activities are generally covered by an incumbent professional than when someone is funded for specific pilot projects, part-time work, or a shorter time frame.

Replies from: Ivy_Mazzola
comment by Ivy_Mazzola · 2022-05-19T21:53:21.563Z · EA(p) · GW(p)

Thank you for this well-thought-out response. I appreciate the effort it took you and Michelle to respond to me. I am now leaning much more toward the view that I was wrong about all this. And if LA's application was initially part-time, that was one foundational piece I had wrong. I still wish that I could have received more details about my own application (the email specified that no feedback could be provided), but I will encourage more people I know to apply for CB work.

I have added a qualifier to my original comment that I am probably wrong. As this particular forum piece and the comments are likely to be revisited for some time (maybe years?), I will probably eventually redact my comment fully to not confuse and deter future readers about how supported CB work would be. Will leave it up for epistemic reasons for at least a week longer.

Thanks again!

Replies from: Khorton
comment by Khorton · 2022-05-20T10:47:01.787Z · EA(p) · GW(p)

This was a really inspiring reply to read, Ivy.

comment by Charles He · 2022-05-12T12:11:41.706Z · EA(p) · GW(p)

(Disclaimer: I don’t know if my support or comment has value, no one likes me and I have poor hygiene.)

People I know have interacted with this person in real life in a professional context. From these interactions, what the commenter is saying seems accurate.

In this particular instance, it seems like a valuable and talented leader could have been funded to do good, well aligned EA work that they were deeply passionate about.

Replies from: Ivy_Mazzola
comment by Ivy_Mazzola · 2022-05-12T20:14:26.449Z · EA(p) · GW(p)

Thanks for your kind words. Most people have been surprised which has been affirming (much needed because rejections are the opposite). I got some in-person feedback suggesting EAIF saw risks to doing Austin CB too soon or with the wrong person (ouch). 

I'm sure lots of people submit actually-risky projects who simply can't see them as risky (or themselves as risky agents), so take my confusion with a grain of salt. The fund managers are people I genuinely respect. I'm just concerned that it was the bureaucrat's curse, which also seems to be the modus operandi for non-uni CB all around. EA has some bottlenecks that early- or mid-career professionals are better suited to fill than students. So I don't want non-uni groups to be unhelpfully neglected.

comment by Chris Leong (casebash) · 2022-05-12T11:59:18.186Z · EA(p) · GW(p)

When did you apply? I wouldn't be surprised if you had a better chance of being funded now than in the past.

Replies from: Ivy_Mazzola
comment by Ivy_Mazzola · 2022-05-12T19:32:03.639Z · EA(p) · GW(p)

Recently, but it's complicated: 
I applied mid-March to the FTX Future Fund. They passed me on to EAIF, saying they felt EAIF was better equipped to evaluate the grant. (Fair, but I had chosen FTX because I saw no full-time employment grants for community builders in EAIF's past grant reports.) To their credit, EAIF did reach out to my references and gave me an interview in early April (so they were probably on the fence). They then said no on April 15th.

P.S. I'm confident I could have gotten funding for part-time work, but I think the most impactful and innovative stuff for information value comes in the "later" hours, like designing and presenting workshops and courses. I could be thinking about this wrong but part-time work for an entire city is comparatively (and urgently) full of meetups, 1:1s, information dissemination, email, light outreach, and ops. Still important but not what I thought made the role most worth creating. 

comment by guyi · 2022-05-11T01:13:11.929Z · EA(p) · GW(p)

There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.

 

I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity.

The key difference I see with a non-profit enterprise aiming to do good is that its scale is driven by external factors, the choices of donors. The harm a non-profit can cause is bounded simply because the funding it receives from its donors is bounded. In contrast, a successful for-profit enterprise has a mechanism to scale itself up by using its profits to grow the enterprise. The practical implication of this is that for-profit corporations do end up growing to scales where they have great potential to do a lot of harm.

None of which is to say that the effective altruism movement, which as you indicate is expecting many billions of USD in funding, doesn't have great potential to do harm. It does need to take that responsibility seriously. Perhaps more importantly though, seeing as the EA movement is encouraging people to earn to give, it behooves the EA movement to consider harms caused in the process of earning. Moskovitz's wealth derives from Facebook, which has arguably done great harm globally by, among many other things, helping organize the genocide of Rohingya Muslims in Myanmar and the Jan 6 Capitol insurrection in the US. Bankman-Fried's wealth derives from arbitrage on bitcoin sales and other crypto-related ventures. Cryptocurrency wealth currently carries a massive externality of CO2 emissions produced by running energy-intensive proof-of-work algorithms on fossil power. Bankman-Fried isn't responsible for emissions equivalent to what he'd have caused if he had generated all his wealth by mining bitcoin on fossil power, but he is certainly responsible for some fraction (50%? 20%?) of that.

Maybe if the EA community is judicious in allocating the capital Moskovitz and Bankman-Fried are planning to provide, it will become quite clear that the benefits of that earning outweighed the harms. The funding of carefully considered public health measures and efforts to give directly to people living in poverty raises my confidence that the EA community has a chance of achieving this. However, the funding of efforts to mitigate the existential risk of imagined future AIs, while ignoring institutes like the Algorithmic Justice League which seek to understand harms already being caused by existing AI algorithms, lowers my confidence.

Replies from: Linch, Charles He
comment by Linch · 2022-05-11T03:49:45.257Z · EA(p) · GW(p)

There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.

I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity.

I think it was good that you noticed your confusion! In this case, I believe your confusion primarily stems from misunderstanding the paragraph. Will is not saying that the worst a company can do from the impartial point of view is go bankrupt. He's saying that the worst a company can do from a profit-maximizing perspective ("aiming to make profit") is go bankrupt. Whereas (EA) charities are presumed to be judged from the impartial point of view, in which case it would be inappropriate to ignore the moral downsides.

Replies from: guyi
comment by guyi · 2022-05-11T16:42:21.874Z · EA(p) · GW(p)

To be clear, stating that I was confused was a polite way of indicating that I think this reasoning itself is confused. Why should we evaluate for-profit businesses only from a profit-maximizing perspective? Having profitability as the primary goal of an enterprise doesn't preclude that enterprise from doing massive amounts of harm. If a for-profit enterprise does harm in its attempts to make profit, should we ignore that harm simply because it has succeeded in turning a profit? If your interpretation of Will's reasoning is what he intended, then he is asking us to compare aiming to do good and aiming to make profit by evaluating each on different criteria. Generally, this is a misleading way of making comparisons.

This is important because this sort of reasoning is used to justify an uncircumspect version of encouraging people to earn to give that I see coming from this community. As I've seen it, the argument goes that in your career you should focus on acquiring wealth rather than doing good, because you can then use that wealth to do good by being an effective altruist. But this ignores that you can potentially do more harm in acquiring great wealth than you can compensate for by using your wealth altruistically.

As I mentioned, I think there are some good reasons to believe the money Bankman-Fried and Moskovitz are contributing to the EA community was acquired in ways that may have caused significant harm. This doesn't mean the EA community should reject this money, and if the money is used judiciously, Bankman-Fried's and Moskovitz's successes in acquiring wealth may well be net positives for humanity. But it is also not hard to think of examples where so much harm was done in acquiring the funding of a charitable organization that, even if the charity does a lot of good, it won't be a net positive. For example, consider the Sackler family, who, in the process of acquiring their wealth, were happy to ignore that they were creating tens of thousands of opioid addicts. Their charitable work (mostly donating to already well-funded museums, as far as I know) probably also wouldn't be evaluated well by the EA community. But the real harm happened even before they attempted to clean up their reputations through charitable donations.

In light of the harms that can be caused in acquiring wealth, I think the EA community should be more circumspect about encouraging people to earn to give. Succeeding in business to maximize your positive impact through charitable work isn't necessarily bad advice, but you do need to account for the harms businesses can do.

Replies from: jacksondc, Linch
comment by jacksondc · 2022-05-13T16:33:07.840Z · EA(p) · GW(p)

The way I read it, Will is comparing the challenge of doing good to the challenge of earning money (for its own sake). Not, as you assume, to the challenge of doing good by earning money.

The point is that if we try to learn any lessons from people who are optimizing to earn money, we'll need to keep in mind that we have reason to be more risk-averse than they are.

Replies from: Linch
comment by Linch · 2022-05-13T19:00:23.553Z · EA(p) · GW(p)

Yes, this is what I meant to say, but failed to communicate. Thank you for putting it more succinctly and politely than I could.

comment by Linch · 2022-05-11T20:28:04.156Z · EA(p) · GW(p)

If someone from High-Impact Athletes writes a post outlining what we can learn from deliberate practice in professional sports, and mentions salient differences between how metrics work in sports training and EA training as a caveat to this advice, I do not think the correct takeaway is "this whole discussion is irrelevant to EA because sports are of ~0 impact for the world, and probably net negative anyway."

Replies from: guyi
comment by guyi · 2022-05-11T23:21:22.849Z · EA(p) · GW(p)

What did I say that warranted this comparison? I am not saying anything anyone has said is irrelevant to EA. I am saying the argument Will made, that for-profit businesses can't do much harm because their worst outcome is bankruptcy, is misleading. I am also saying that encouraging people to earn to give without considering that harm can be done by earning is one of the ways the EA movement could end up having a net negative impact.

Replies from: Linch
comment by Linch · 2022-05-12T01:46:13.019Z · EA(p) · GW(p)

At first I thought you genuinely misread his comment, and politely corrected you. And then your next comment (which did not thank me, or acknowledge the correction) suggested you deliberately misread it so you could get on your hobbyhorse about something else. So I tried to illustrate why this is not an appropriate strategy with an analogy. Perhaps my issue was that I was being too metaphorical and insufficiently direct earlier. In that case, I'm sorry for the lack of clarity in communication.

comment by Charles He · 2022-05-11T02:39:24.291Z · EA(p) · GW(p)

I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. 

There are several extremely bad outcomes of bad charities:

  • One example is in the footnotes, a case where people actually died[1].
  • Another famous example is a pumping system for developing countries that consumed donor money and actively made it more difficult to get water.

It's not clear anything would have stopped these people besides their own virtue or self-awareness, or some kind of press attention. The effect of first-world faculty and wealth is overwhelming and can trample all kinds of safeguards (an American volunteer blew the whistle on the first case, a fact that is thought-provoking and informative, after hundreds of children and actual doctors had passed through the clinic).

 

What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity.

Structurally, in a business, help or harm isn't related to the main activities of the business.

In a real business, overwhelming effort is being made to make sure the business is successful. In the hundred trillion dollar world economy, almost no one is paying money to help or harm people. 

For any given amount of money, you can do tremendously more harm and kill people with the meme of doing good than by running a business, even in situations where there aren't many functioning institutions.

  1. ^

    Here is one example that made it into NPR and the New Yorker, Guardian, etc. 

    I suspect the truth is more complicated than these articles suggest. I think being a woman, blonde, American, and blogging/Instagram played a role in why this person is being read about in US national media. The implication I'm making is that this might be happening all the time.

    It's incredibly, deceptively hard to accomplish anything in very different cultures/economies/societies, much less cost effectively. Achieving this is possible, but rare and hard and undervalued.

Replies from: guyi
comment by guyi · 2022-05-11T03:20:19.962Z · EA(p) · GW(p)

My point wasn't that charities are incapable of doing harm. There are many examples of charities doing harm, as you point out. The point is that reasoning that non-profits have more potential to cause harm than for-profits seems to ignore that many for-profit enterprises operate at a much larger scale than any non-profits and do tremendous amounts of harm.

In a real business, overwhelming effort is being made to make sure the business is successful. In the hundred trillion dollar world economy, almost no one is paying money to help or harm people.

Yes, most successful businesses are primarily focused on making profit rather than on doing good or harm. But this doesn't mean they aren't willing to do harm in the pursuit of profit! If someone dedicates an amount of money to a business that then grows large enough to do lots of harm, even if as a side-effect, it's quite conceivable they could accomplish more total harm than someone simply dedicating that same money to a directly harmful (but not profitable) venture.

Replies from: Charles He
comment by Charles He · 2022-05-11T03:33:57.443Z · EA(p) · GW(p)

The point is that reasoning that non-profits have more potential to cause harm that for-profit seems to ignore that many for-profit enterprises operate at much larger scale than any non-profits and do tremendous amounts of harm

You're absolutely right. For-profits absolutely do harm. Almost every EA or reader here would agree that, in general, "capitalism has really huge harms" (note that I'm not necessarily an EA or representative of EA thought).

The scale is the point here, and you're also exactly right. For many activities, it takes many, many millions to create a situation where we are harming people.

To be tangible, it's hard to think of any business you or I could set up that would be as harmful as posing as a fake charity and disrupting medical service or food supply.

Replies from: Charles He, guyi
comment by Charles He · 2022-05-11T04:01:19.960Z · EA(p) · GW(p)

Well, I've actually sort of slipped into another argument about scale and relative harm, and got you to talk about that. 

But that doesn't respond to your original point, that businesses can do huge harm and EA needs to account for that. So that's unfair to you.

Trying to answer your point, and using your view about explicitly weighing and balancing harms, there's another point about "counterfactual harm" that responds to a lot of your concerns. 

In the case of a crypto currency company:

 

If you make a new crypto company and become successful by operating a new exchange, even if you become the world's biggest exchange, it's unclear how much that actually causes any more mining (e.g. by increasing Bitcoin's price).

There are dozens of exchanges already, besides the one you created. So it's not true that you can assign or attribute 20% or 50% of emissions to your money just from association.

In reality, I think it's reasonable that the effect is small, so even if the top trading platform hadn't been founded, almost the same amount of mining would occur. (If you track cryptocurrency prices, it seems plausible that no one cares that much about the quality of exchanges.)

So the money that would have gone to your platform and been donated to charity, would buy yachts for someone else instead.

(By the way: as part of your cryptocurrency company, if you make and promote a new cryptocurrency that doesn't mine and "stakes" instead, then your cryptocurrency company might accelerate the transition to "staking", which doesn't produce greenhouse gases like mining does. Your contribution to greenhouse gases is negative despite being a crypto company. But I share the sentiment that you can totally roll your eyes at this idea, so let's just leave this point here.)

 

You mentioned other concerns about other companies. I think it's too difficult for me to respond, for reasons that aren't related to the merit of the concern.

comment by guyi · 2022-05-11T04:11:34.995Z · EA(p) · GW(p)

Since we agree scale is a key part of this, I don't know how you can be confident that an imagined fake charity that disrupts medical service or food supply would ever be large enough to equal the scale of the harms caused by some of the most powerful global corporations. In this era of extreme wealth inequality, it's plausible that some billionaire could accrue massive personal wealth and then transfer that wealth to a non-profit, which, freed from the shackles of having to make a profit, could focus solely on doing harm. But equally, that billionaire could found an institution focused on turning a profit while doing harm and use the profits to grow the institution to a scale, and concomitant harm, that far exceeds what they would have been able to achieve with a non-profit.

For example, if we are going to imagine a fake charity that disrupts delivery of a medical service, why don't we imagine that they do this by acting as a middleman that resells medical supplies at an unconscionable profit. This profit, in turn, enables them to grow and slip their "services" between more people and their medical providers. While this may seem like a criminal enterprise, for many companies that exist today this is basically their business model, and they operate at scales that eclipse most medical non-profits I know of.

Being a non-profit does provide good cover for operating in a harmful fashion, but growth through accumulation of capital is a very powerful mechanism. I don't think we should be surprised to find that the largest harm-causing institutions use it as their engine.

Replies from: Charles He
comment by Charles He · 2022-05-11T04:23:50.166Z · EA(p) · GW(p)

I don't know how you can be confident that an imagined fake charity that disrupts medical service or food supply would ever be large enough to equal scale the harms caused by some of the most powerful global corporations.

But we're talking about the relative harm of a bad new charity compared to a harmful business. 

I think you agree it doesn't make sense to compare the effect of our new charity against literally all of capitalism, or against a major global corporation.

 

But equally that billionaire could found an institution focused on turning a profit while doing harm and use the profits to grow the institution to a scale and concomitant harm that far exceeds what they would have been able to achieve with a non-profit. 

Let's be honest: we both know perfectly well that your view and understanding of the world is that, if a business could make significantly more profit by being evil, it would already be doing it. That niche would be filled. I probably agree.

But if that's true, it must be that even an amoral business person could not make profit by doing the same evil—all the evil capitalists got there first. So there's no evil super harm possible as described in your story.

why don't we imagine that they do this by acting as a middleman that extracts a profit by reselling medical supplies for an unconscionable profit. This profit, in turn enables them to grow and slip their "services" between more people and their medical providers. While this may seem like a criminal enterprise, for many companies that exist today this basically their business model, and they operate at scales that eclipse most medical non-profits I know of.

Yes, basically, we sort of both agree this is happening. 

The difference between our opinions is that I think, in healthy marketplaces, this profit-seeking is extremely positive and saves lives (ugh, I sound like Kevin Murphy.)

 

Also, we both know that there's not going to be any way to agree, or to prove each other wrong or right, about this specific issue.

And we're really really far from the point here (and I think it's better addressed by my other comment [EA(p) · GW(p)]).
 

comment by aaronmayer · 2022-05-10T19:17:51.008Z · EA(p) · GW(p)

Great post - always fun to see Will weighing in on hot-button issues. 🎉

As for where to draw the line on personal spending and frugality, the example of flying business class on an airplane is a perfect illustration: no one needs to fly business class, and the marginal benefits of extra legroom and early boarding are so not worth the 2x or 3x ticket price, imo. 

To the concern about value drift and optics, our reputation as a movement would obviously be tarnished if folks like Will and Toby (or any of us) bought yachts and mansions. If we can avoid flagrantly conspicuous consumption, that'd be great. Beyond that, we shouldn't be eating rice and beans every night. I want a well-balanced diet of kale and quinoa for every EA doing good work out there!

Let's not forget that SBF cooks his own meals 

Also relevant, I once asked Peter Singer why we don't all walk around in ash and sackcloth in order to donate every spare penny, and his response was "if everyone in a movement you had never heard of before were walking around in ash and sackcloth, would you really want to join?" So even my homeboy Peety acknowledges the importance of optics.

Last point: take the Further Pledge if you're concerned about individual value drift. I took the pledge in July 2021 (capped my salary at $70k USD) and I can attest that it feels fabulous! It's definitely increased my overall felicity. ❤️

Replies from: Jeff_Kaufman, Buck, MatthewDahlhausen
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-10T19:44:50.320Z · EA(p) · GW(p)

no one needs to fly business class, and the marginal benefits of extra legroom and early boarding are so not worth the 2x or 3x ticket price

If all you want is extra legroom, you can get an exit-row seat for much less. Early boarding isn't worth very much unless you're traveling with a carry-on that can't be checked (ex: musical instrument) and there are cheaper ways to get it. I see the real benefit of business class as (a) a more comfortable place to work or (b) arriving better rested, especially if it gets you a lay-flat seat on an overnight flight. Personally, this isn't a trade-off that has made sense for me, but I can see cases where it would be worth it if it gives you essentially an extra working day.

Replies from: Owen_Cotton-Barratt, bec_hawk
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-05-10T21:27:40.549Z · EA(p) · GW(p)

FWIW I've been trying to fly business class for transatlantic flights for a few years for these reasons. I think it's an unusually big effect size for me because otherwise long-haul flights play badly with my chronic fatigue and can cost me effectively >1 day, but I expect that many people would get a few hours' worth of extra productive time (I take advantage of both the lie-flat bed and the good work environment for writing that doesn't need internet).

I've felt weird about expensing it, so I've mostly just been paying for it myself (I don't have many other big expenses in my life except childcare). But I have noticed that I sometimes want to strongly recommend that a particular other person try flying business class, and have offered to pay for it personally, because I think this will help significantly more than what I can otherwise do with my donations. So I seem to be at "we should probably do a bit more of this at the margin".

comment by Rebecca (bec_hawk) · 2022-05-11T17:18:08.846Z · EA(p) · GW(p)

Yeah I think it's a very different calculation if your flight is up to 24 hours long. Also, you can only take an exit-row seat if you have the (physical) capacity to help in an emergency (e.g. you can't be flying solo with kids or elderly relatives, you have to be able to throw 20kg at a time, you can't have certain other impairments, you have to be able to understand the language of the relevant country, and so on), so I don't know how scalable a suggestion that is.

comment by Buck · 2022-05-11T19:16:46.000Z · EA(p) · GW(p)

I massively disagree re the business class point. In particular, many people (e.g. me) can sleep in business class seats that let you lie flat, when they would have not slept and been quite sad and unproductive.

not worth the 2x or 3x ticket price

As a general point, the ratio between prices is irrelevant to the purchasing choice if you're only buying something once--you only care about the difference in price and the difference in value.
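A toy calculation (with entirely made-up fares, not numbers from the thread) makes the point concrete: the 3x ratio drops out of the decision, and only the differences matter.

```python
# Hypothetical fares: business is "3x" the economy price.
economy_price = 500    # USD, assumed for illustration
business_price = 1500  # USD, assumed for illustration

price_difference = business_price - economy_price  # extra cost: 1000

# Hypothetical value of arriving rested, e.g. one recovered working day.
value_difference = 1200  # USD, assumed for illustration

# Upgrade iff the extra value exceeds the extra cost; the 3x ratio
# never enters the comparison.
worth_it = value_difference > price_difference
print(worth_it)  # True with these numbers
```

Doubling both fares leaves the ratio at 3x but doubles the price difference to $2000, which flips the decision; that's why the ratio alone can't settle it.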

Replies from: DB
comment by DB · 2022-05-11T23:58:18.336Z · EA(p) · GW(p)

you only care about the difference in price and the difference in value

Agree with this as a general principle, provided the "difference in value" also takes into account longer-term effects like movement reputational cost.

I don't think individuals choosing to fly business class based on productivity calculations has much, if any, movement reputational cost. On the other hand, a prominent EA figure might accurately calculate that they gain one extra productive work hour each week, valued at say $100, by paying someone $50 to brush and floss their teeth for them while they sit there working.

This is obviously a fanciful scenario, but I think there are lots of murky areas between flying business class and having a personal teeth brusher where the all-things-considered value calculation isn't trivial. This is especially the case for purchasing decisions that can't easily be converted to work productivity boosts, e.g. buying expensive luxury items for the pleasure they bring.

comment by MatthewDahlhausen · 2022-05-12T15:54:16.860Z · EA(p) · GW(p)

Here's a prediction: In the not-too-distant future, someone who calls themselves an effective altruist is going to purchase a private plane or helicopter and justify it saying the time it saves and the amount of extra good they can do with that saved time is worth the expense. The community is going to have a large population that disagrees and sees it as a wasteful extravagance, and a smaller but vocal population that will agree with the purchase as a worthwhile tradeoff, especially if that person is part of a sub-community within EA that is ok with more speculative expected value calculations. Instead of there being a clear, coordinated response disavowing the purchase as extravagant, the community is going to hesitate and argue about the extent to which it is good to feed utility monsters and be muted in its outward response. But that's not going to stop the wider media picking up the story. A small fraction of the population will then henceforth liken EAs to the pastors at megachurches with private jets who use do-gooder justifications for selfish purposes. And yes, you could construct some sort of hypothetical where someone needs a helicopter to more quickly fly between trolley levers to save a bunch of people. But the much more likely scenario is that someone wants a helicopter and is fine using an iffy, cursory justification for it and the trolley brakes are working just fine.

Replies from: aaronmayer
comment by aaronmayer · 2022-05-12T17:02:23.262Z · EA(p) · GW(p)

Well phrased! I'd bet you're right that something akin to this will happen in the future. Solid prediction.

I guess I'm just trying to not be that guy, and I hope everyone else tries to not be that guy too.

comment by Linch · 2022-05-10T04:42:39.144Z · EA(p) · GW(p)

Great post! I'll think about the ramifications later. One minor note:

As well as the unilateralist’s curse (where the most optimistic decision-maker determines what happens), there’s a risk of falling into what we could call the bureaucrat’s curse,[10] [EA(p) · GW(p)] where everyone has a veto over the actions of others; in such a situation, if everyone follows their own best-guesses, then the most pessimistic decision-maker determines what happens.

If I understand Bostrom's paper correctly, this is just a special case of the unilateralist's curse, and is mathematically equivalent (bottom of pg.7-8 in pdf, 355-356 in the journal):

Finally, fifth, though we have thus far focused on cases where a number of agents can undertake an initiative and it matters only whether at least one of them does so, a similar problem arises when any one of a group of agents can spoil an initiative—for instance, where universal action is required to bring about an intended outcome. Consider the following example: 

[...]

These cases of unilateral spoiling or abstinence are formally equivalent to the original unilateralist curse, with merely the sign reversed. 

Since the problem in these cases is the result of unilateral abstinence, it seems appropriate to include them within the scope of the unilateralist’s curse. Thus, in what follows, we assume that the unilateralist’s curse can arise when each member of a group can unilaterally undertake or spoil an initiative (though for ease of exposition we sometimes mention only the former case).
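A quick simulation (my own sketch, not from Bostrom's paper; all parameters are invented) shows the sign-reversed equivalence: under unilateral action the most optimistic estimate decides, and under universal veto the most pessimistic one does.

```python
import random

random.seed(0)

def simulate(n_agents, true_value, noise, trials=10_000):
    """Return the fraction of trials in which the action is taken under
    (a) unilateral action and (b) universal veto."""
    unilateral = veto = 0
    for _ in range(trials):
        # Each agent gets a noisy, unbiased estimate of the action's value.
        estimates = [true_value + random.gauss(0, noise) for _ in range(n_agents)]
        # Unilateralist's curse: any single agent can act, so the action
        # happens whenever the MOST OPTIMISTIC estimate is positive.
        if max(estimates) > 0:
            unilateral += 1
        # "Bureaucrat's curse": any single agent can veto, so the action
        # happens only when the MOST PESSIMISTIC estimate is positive.
        if min(estimates) > 0:
            veto += 1
    return unilateral / trials, veto / trials

# A mildly bad action (true value -1) is usually taken under unilateralism
# and almost never survives universal veto; a mildly good action (+1)
# shows the mirror image: the same curse "with merely the sign reversed".
print(simulate(5, -1.0, 2.0))
print(simulate(5, +1.0, 2.0))
```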

comment by olly · 2022-05-10T20:42:06.966Z · EA(p) · GW(p)

Great to hear Will's thinking this. Some thoughts

I believe that EA is about

  • discovering problem areas that are underfunded/ignored
  • discovering robust, testable (maybe capital efficient?) solutions to those problem areas
  • allocating capital to these solutions

EA has capital, but the bottleneck is either problem areas that are underfunded and/or robust testable solutions to problem areas.

To me, this sounds a lot like starting a startup. Founders aim to find under-explored areas in which they can build robust solutions. We know this works well for incentivizing innovation.

So my thinking is as follows: rather than attempting to prescribe what people could solve, EA starts giving large cash payouts for demonstrated QALYs saved (near term / long term / recurring). Admittedly this could be a lot of work. However, EA is already doing this in its assessment of charities. The objective would be to use EA capital to provide an upside for startups that demonstrate that they save a considerable number of QALYs. Maybe this creates strange incentives but could be something to explore.  

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2022-05-10T13:18:13.813Z · EA(p) · GW(p)

Some clarifications on terminology:

Briefly, the main reasons I see in favour of giving more are:

  • ....
  • Option value: if we build the infrastructure to productively absorb funding, then we can choose not to use it if it turns out to not be necessary; whereas if we don’t build the infrastructure now, then it will take time to do so if in a few years’ time it does turn out to be necessary.
  1. How do you define infrastructure? Could you give some concrete examples of infrastructure you had in mind when you wrote this? (I've noticed there is a lack of clarity around this term in general.)
Replies from: David Mears
comment by David Mears · 2022-05-14T14:55:06.624Z · EA(p) · GW(p)

I turned this into a question. Maybe someone will answer: https://forum.effectivealtruism.org/posts/piLpBkxFyGcnvT6fA/what-is-meant-by-infrastructure-in-ea

comment by ElikaSomani · 2022-05-10T03:22:13.685Z · EA(p) · GW(p)

As someone who has heard a lot of criticism from non-EAs about the perception of loose money and funding in EA, I really appreciate this post and the thoughtfulness :)

 I do really believe that the outsider perception of lots of money in EA is significantly negative and potentially a major deterrent to community-building efforts. Posts like this and, as you mention, criticisms/red-teaming of EA really help build a more positive image and, more importantly, accountability. I'm curious about how to translate these efforts (which take place mainly on the Forum, not somewhere new EAs typically are) into outreach material and community building.

comment by Jonas Vollmer · 2022-05-13T00:54:25.995Z · EA(p) · GW(p)

Regarding Harming quality of thought, my main worry is a more subtle one:

It is not that people might end up with different priorities than they would otherwise have, but that they might end up with the same priorities but worse reasoning.

I.e. before there was a lot of funding, they thought "Oh, I should really think about what to work on. After thinking about it really carefully, X seems most important."

Now they think "Oh X seems important and also what I will get funded for, so I'll look into that first. After looking into it, I agree with funders that this seems most important."

This is still for the same X, and their conclusions are still the same. But their reasoning about X has now become worse because they investigated important claims less thoroughly.

comment by PabloAMC · 2022-05-10T10:50:20.743Z · EA(p) · GW(p)

My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, which can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.

Replies from: Stephen Clare, PabloAMC
comment by Stephen Clare · 2022-05-10T11:26:30.265Z · EA(p) · GW(p)

Can you give an example of communication that you feel suggests "only AI safety matters"?

Replies from: MichaelPlant, projectionconfusion, PabloAMC
comment by MichaelPlant · 2022-05-10T13:12:50.319Z · EA(p) · GW(p)

Not exactly the same thing, but there was a whole post and discussion on whether EA is "just longtermism" [EA · GW] last week.

Replies from: Dion, nathan
comment by Dion · 2022-05-10T18:09:09.757Z · EA(p) · GW(p)

Adding onto this, the Virtual Programs (Introductory) curriculum currently has 3 weeks dedicated to Longtermism, Existential Risks and Emerging Technologies, whereas there is little to no compulsory content on poverty, global health or climate change (except pandemics). Many of my participants have voiced concerns about this. If facilitators are not able to give a good answer, it can be easy for newcomers to come away with a skewed perspective that EA is just longtermism and x-risk.

comment by Nathan Young (nathan) · 2022-05-10T14:23:21.402Z · EA(p) · GW(p)

In particular, this comment by Max Dalton. While I don't think it means "only AI safety matters", I think it would lead to much more content on AI safety than I expected.

Where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don't think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%). 


https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1#2_1_Funding_has_indeed_increased__but_what_exactly_is_contributing_to_the_view_that_EA_essentially_is_longtermism_AI_Safety_ [EA · GW]

comment by projectionconfusion · 2022-05-11T21:46:01.088Z · EA(p) · GW(p)

It's not specific communications so much as the level of activity around specific causes: how many posts and how much discussion time are spent on AI and other cool intellectual things, vs. more mundane but important things like malaria. There's a danger of EA being seen as just a way for people to morally justify doing the kinds of things they already want to do.

comment by PabloAMC · 2022-05-10T13:33:50.127Z · EA(p) · GW(p)

I don't think we have ever said this, but this is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.

See also the link by Michael above.

comment by PabloAMC · 2022-05-10T13:32:17.869Z · EA(p) · GW(p)

I don't think we have ever said this, but this is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right. See also https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1 [EA · GW]

comment by MichelJusten (MJusten) · 2022-05-11T05:41:11.097Z · EA(p) · GW(p)

Here’s a reminder I’ve found helpful for feeling risks of omission: there’s a world in which we fail.

There’s a world in which we tried to stop X-risk by unaligned AI or a catastrophic pandemic, but we didn’t. We didn’t act with enough urgency, and being really cautious about our spending slowed us down.

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-10T19:50:25.049Z · EA(p) · GW(p)

Thanks for writing this! I am also very strongly drawn to frugality from a combination of, as you say, moral aesthetic, and experience during a time when money was much less available. I wish I could give some solid advice here on "how I learned to stop making bad labor-money tradeoffs" but this is still not something I'm great at.

comment by Finngoeslong · 2022-05-11T04:43:42.582Z · EA(p) · GW(p)

This is a great post and a fascinating history - has inspired me to create an account.

"So far, we've generated more than $30 bn for something like $200 mn, at a benefit:cost ratio of 150 to 1;[6]"

This, I think, is a weak link in the chain of the argument. You may have raised $30bn for great causes, but how much of that would have gone to great, or even good, causes regardless?

Charities can often find fundraising to be profitable, but they are also normally taking resources from each other (shifting donations from charity A to charity B), so sometimes the net effect across the charitable economy is just more spending on seeking donations.

In EA's case: would the people pledging to the Against Malaria Foundation really have spent the money on sports cars? Or even on ineffective opera-house charities? I don't think so. The kind of people motivated to seek out a good charity would still do so, but they would arguably do so less effectively in the absence of an effective altruism movement guiding them.

I don't disagree with the general thrust of the piece. But I think that throwaway 10x return claim on fundraising is perhaps dangerous.

Replies from: Jeff_Kaufman, Benjamin_Todd
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-11T13:24:13.580Z · EA(p) · GW(p)

The kind of people motivated to seek out a good charity would still do so, but they would arguably do less effectively in the absence of an effective altruism movement guiding them.

I think the question of what people would have done in the absence of EA movement building is really hard, but my impression is different here. Personally, without being surrounded by a group of people who view altruistic dedication as normal, I think a likely outcome would have been to increasingly prioritize myself and my family as I got older. That is a common pattern, with idealism and willingness to make sacrifices decreasing with age.

The excited/obligatory motivation perspective [? · GW] is also relevant here: without a movement I think you get many fewer excitement-motivated people working on altruistically valuable things.

Replies from: Finngoeslong
comment by Finngoeslong · 2022-05-13T16:29:43.587Z · EA(p) · GW(p)

Agree it is hard to know. I think it's a very good point that a movement/community can sustain dedication over time.

comment by Benjamin_Todd · 2022-05-12T13:52:45.604Z · EA(p) · GW(p)

The unstated claim is that the charities EAs are donating to now are significantly more effective than where people would have donated otherwise (assuming they would have donated at all).

If the gain in cost-effectiveness is (say) 10-fold, then the value of where the money would have been donated otherwise is only 10% of the value now generated. That would reduce the cost-effectiveness multiple from 10x to 9x.

I think a 10x average gain seems pretty plausible to me – though it's a big question!

Some of the reasoning is here, though this post is about careers rather than donations: https://80000hours.org/articles/careers-differ-in-impact/
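The adjustment arithmetic, as a minimal sketch (the 10x multiple and 10% counterfactual share are the comment's illustrative numbers, not data):

```python
naive_multiple = 10.0        # assumed gain vs. the counterfactual donation
counterfactual_share = 0.10  # value of where the money would have gone anyway

# Net value created = value at the new charity minus the value the same
# money would have produced at the counterfactual charity.
net_multiple = naive_multiple - naive_multiple * counterfactual_share
print(net_multiple)  # 9.0, i.e. the multiple drops from 10x to 9x
```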

comment by Henry Howard · 2022-05-10T07:42:59.048Z · EA(p) · GW(p)

Great post

What are we going to spend this on? There seems to be a shortage of evidence-based global development causes. Current GiveWell charities are growing as fast as they can but the funding pool is growing faster (which is great! But it is already making me hesitant to give or encourage others to)

Should we not be working with and giving to orgs like Innovations for Poverty Action and JPAL to help us find new causes? Our cause discovery rate seems very slow as it is. The only new GiveWell cause in the last few years has been New Incentives, which is already fully-funded.

Replies from: freedomandutility, Tyner
comment by freedomandutility · 2022-05-10T09:32:58.364Z · EA(p) · GW(p)

If it’s possible / feasible, I imagine Charity Entrepreneurship collaborating with JPAL and IPA would lead to some new highly cost-effective global health startups.

comment by Tyner · 2022-05-11T19:04:05.085Z · EA(p) · GW(p)

EA funds gave $3 Million to IPA over the last two years:

https://funds.effectivealtruism.org/funds/global-development

Did you mean something different?

Replies from: Henry Howard
comment by Henry Howard · 2022-05-12T07:09:55.933Z · EA(p) · GW(p)

That's great. Cause discovery should be a top priority while we're running out of causes.

comment by Arepo · 2022-05-13T07:42:53.550Z · EA(p) · GW(p)

But morally-motivated people, especially on college campuses, often find seemingly-extravagant spending distasteful.

 

As far as I can see, no-one else has raised this, but to me the optics of having large sums of money available and not spending it are as bad as, or worse than, spending too freely. Cf. Christopher Hitchens' criticism of Mother Teresa, and, closer to home, Evan's criticism a few years ago that EA Funds payouts were being granted too infrequently [EA · GW]. For what it's worth, I find the latter a much bigger concern.

Replies from: Arepo
comment by Arepo · 2022-05-13T08:09:13.869Z · EA(p) · GW(p)

Sub-hypothesis: the people who find extravagant spending distasteful are disproportionately likely to be the people who object to the billionaires that enable it, so the spending itself isn't what pisses them off so much as what draws their attention to a scenario they dislike.

comment by Drew Spartz (Meta) · 2022-05-12T20:40:26.577Z · EA(p) · GW(p)

The "bureaucrat's curse" reminds me of Vitalik's bulldozer vs vetocracy political axis: https://vitalik.ca/general/2021/12/19/bullveto.html

Vetocracy can be beneficial if a system's strength depends on it not changing. e.g. People invest in Bitcoin because it's incredibly difficult to change its monetary policy. Bitcoin doesn't need to innovate. 

But if Ethereum is too vetocratic and fails to innovate - it could get outcompeted by other more nimble startups like Solana or Avalanche.

The current mood in the AI Safety community appears to be pessimistic. For example, Eliezer bet Bryan Caplan (2-1 odds) that humans will be extinct by Jan 1, 2030.

If you believe that inaction will lead to extinction, reducing vetoes and increasing the variance of outcomes could increase the probability we'll survive.

As Scott Alexander says, 

Healthy people are fragile (increased variance can mostly make them worse), very sick people are antifragile (increased variance can mostly make them better). So it is reasonable to give a terminal cancer patient an experimental drug - the worst that happens is they die (which would happen anyway) and the best that happens is they recover - it's all upside and no downside.

comment by Duarte M · 2022-05-12T12:59:42.631Z · EA(p) · GW(p)

Apologies for the basic question but, if there’s more money allocated than there are effective projects, should those of us earning to give start funding less effective charities? Is there a ranking with effectiveness and funding % we can sort by?

comment by Ulrik Horn · 2022-05-11T09:31:05.743Z · EA(p) · GW(p)

I think there might be several things we could spend more money on while still being viewed in a positive light by most of our stakeholders and the more spartan among us. Examples include:

-More staff in organisations and projects to prevent burn-out

-Child-care facilities, paid parental leave, etc.

-At least 4 weeks paid holidays/year 

I agree that all such spending should be vetted against our high bars, and must admit I have not done so before making this comment. That said, anecdotal evidence, for example from Patagonia offering child care, seems to indicate that such spending might also increase the output of the organisation, especially over the long term.

I am probably forgetting many examples in the list above, as I am a parent of two small children and am biased towards the challenges of parenting. For example, I have a hard time going to EAG but would probably go if quality child care were offered. I am sure others, such as people with disabilities and other groups, have further suggestions on ways to use increased funding to invest in the health, diversity, and perhaps also productivity of the EA movement.

(I made a similar comment on another post but decided to post here too as it seems comments are more useful the quicker they are made after the publication of the post)

comment by Michael_Wiebe · 2022-05-25T02:27:48.824Z · EA(p) · GW(p)

Among Giving Pledge signatories, there are around ten who are at least somewhat sympathetic to either effective altruism or longtermism. And there are a number of other successful entrepreneurs who take EA or longtermism seriously

How are you defining longtermism, to end up with EA and longtermism being alternatives? 

comment by Rubi · 2022-05-10T19:27:39.346Z · EA(p) · GW(p)

One of the key things you hit on is "Treating expenditure with the moral seriousness it deserves. Even offhand or joking comments that take a flippant attitude to spending will often be seen as in bad taste, and apt to turn people off."

However, I wouldn't characterize this as an easy win, even if it would be an unqualified positive. Calling out such comments when they appear is straightforward enough, but that's a slow process that could result in only minor reductions. I'd be interested in hearing ideas for how to change attitudes more thoroughly and quickly, because I'm drawing a blank.

comment by mtrazzi · 2022-05-10T13:20:04.422Z · EA(p) · GW(p)

Thanks for the thoughtful post. (Cross-posting a comment I made on Nick's recent post.)

My understanding is that people were mostly speculating on the EAF about the rejection rate for the FTX future fund's grants and distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences where there was a high % of  grantees.
  • the fact that the FTX Future Fund did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (eg. CEA)

This post helped me clarify my thoughts on this. In particular, I found this sentence useful to shed light on the rejection rate situation:

 "For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications" 

comment by Linch · 2022-05-11T13:37:17.413Z · EA(p) · GW(p)

There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FTX) has a net worth of over a billion; a number of others are on track to give hundreds of millions in their lifetime. Among Giving Pledge signatories, there are around ten who are at least somewhat sympathetic to either effective altruism or longtermism. And there are a number of other successful entrepreneurs who take EA or longtermism seriously, and who could increase the total aligned funding by a lot. So, while FTX’s rapid growth is obviously unusual, it doesn’t seem like a several-orders-of-magnitude sort of fluke to me, and I think it would be a mistake to think of it as a ‘black swan’ sort of event, in terms of EA-aligned funding.  

So the update I’ve made isn’t just about the level of funding we have, but also the growth rate.

As an aside, Joel Becker and I have an ongoing bet on whether or not there will be another EA who becomes a billionaire in the next 9-10 years, excluding crypto and inheritances (I'm for, he's against). 

comment by freedomandutility · 2022-05-10T09:30:12.841Z · EA(p) · GW(p)

To clarify, is there another earning-to-give EA billionaire who is choosing to remain anonymous?

Replies from: gruban
comment by Patrick Gruban (gruban) · 2022-05-10T10:22:14.822Z · EA(p) · GW(p)

He might be referring to Gary Wang, as he does later in the text, but I'm not sure about this.

Replies from: Lorenzo Buonanno, Greg_Colbourn
comment by Lorenzo (Lorenzo Buonanno) · 2022-05-10T13:50:47.939Z · EA(p) · GW(p)

At least one person earning to give (and not related to FTX) has a net worth of over a billion


Can't be Gary Wang, as he's related to FTX

comment by projectionconfusion · 2022-05-11T21:48:11.708Z · EA(p) · GW(p)

There being a lot of funding available in EA also changes the calculus for people deciding whether to donate their own money. If super-rich people are donating to EA, to the extent that finding ways to spend money is a problem, then the motivation for ordinary individuals to donate is lower.

Replies from: Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-12T00:10:23.298Z · EA(p) · GW(p)

I think it changes it some, but not hugely? Even if the best remaining option for making the world better was direct cash transfers I think donations would still make a lot of sense; There's Lots More To Do [? · GW].

I also don't think we stay in the current dynamic. Part of why it is important for a lot of people to go into directly doing useful things now is to identify and scale up opportunities for directing a lot of money toward valuable things. It's become harder to identify extremely cost-effective ways of spending your money to speed that process up, but money will still be very important.

Replies from: projectionconfusion
comment by projectionconfusion · 2022-05-13T21:21:49.272Z · EA(p) · GW(p)

Sorry, I should have been clearer: I meant this in psychological terms more than economic ones. An extra dollar might still do the same amount of good, but given the way people intuitively assess impact, it will feel very different depending on the funding context they perceive.

comment by Anthony Repetto · 2022-05-13T21:32:50.470Z · EA(p) · GW(p)

I was responding to Jeff - and thank you, Jeff, for clarifying that downvotes can hide me.

In my response to him, I was expressing my concern that a subset of the Forum has the power to hide my self-defense, so that my correction of their misrepresentation goes unnoticed, while their misrepresentations stand in full view.

Another EA Forum post just recently ("Bad Omens in Current Community Building") tried to bring to the community's attention that, among other things, EA is sometimes perceived as cultish or cliquish. I hope you can all see that, when my corrections of others' misrepresentations are downvoted to obscurity, that concern of cliquishness is real.

Replies from: Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T21:50:05.825Z · EA(p) · GW(p)

I should also add this note: there is a double-standard in communication, here. I was asked repeatedly to 'calm down and speak nicely, because only then will we listen' - meanwhile, the ones who misrepresented were given a pass to lead the listener by the nose along imputations such as "because you posted a lot, no one is going to listen to you." They got that pass, easily, with the header "being brutally honest"/"honestly". Should I just begin all my posts with "just being brutally honest", so that no one uses my tone as a reason to ignore the content of what I say?

Replies from: bec_hawk
comment by Rebecca (bec_hawk) · 2022-05-15T08:06:34.667Z · EA(p) · GW(p)

Hi Anthony. I would say that in the responses I’ve read where they use words like ‘honestly’, my reading of the tone was that they were going for a “tough love” approach. Using the word ‘honestly’ (when not said to manipulate people) often indicates the person is aware that what they’re saying may be seen as too harsh, but that they think what they’re saying is of enough value to others that it still merits saying (and sometimes may only have that value if said bluntly).

In contrast, my interpretation of the tone in your comments, using the word ‘disrespect’ a lot, asking for an apology etc, was that it was solely about providing value to yourself. For most people I know, the concept of feeling ‘disrespected’ by others, and going around demanding apologies for it, would never occur to them. Having that mindset is something I associate with arrogance, aggression and self-righteousness. I think in general people in this forum are wary of engaging further with people who appear to lack some level of humility.

Perhaps in certain circles it is expected that you ought to defend yourself in that way, in order to show that what someone has said about you really is incorrect? But in the absence of social pressure in that direction, doing so suggests personality traits that some might be wary of.

comment by Anthony Repetto · 2022-05-13T21:33:29.932Z · EA(p) · GW(p)

[Jeff deleted his response, yet it was still helpful!]

comment by Anthony Repetto · 2022-05-11T21:37:57.821Z · EA(p) · GW(p)

"it can cause us to go awry if it means we don’t take chances of upside seriously, or when we focus our concern on false positives rather than false negatives"

I've encountered this problem repeatedly in my attempts to speak with EAs here in the East Bay. With one topic, for example, I can napkin the numbers for them: $5 trillion in real estate impacted by hurricanes in the US alone - so there's an on-the-order-of $1 trillion wealth effect if we can stop hurricanes. A proposal with a 1:1,000 chance of doing so would still be worth $1 billion-ish to check for feasibility. Just running a simulation to get a sanity check. Yet?
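The napkin math above can be sketched as a quick expected-value check. All of the figures below are the commenter's order-of-magnitude assumptions, not data:

```python
# Back-of-the-envelope expected value of a hurricane-prevention feasibility check.
# Every figure here is a rough assumption taken from the comment, not a measurement.
us_real_estate_at_risk = 5e12  # ~$5 trillion of US real estate exposed to hurricanes
wealth_effect = 1e12           # ~$1 trillion wealth effect if hurricanes were stopped
p_success = 1 / 1000           # assumed 1-in-1,000 chance the proposal works

expected_value = wealth_effect * p_success
print(f"Expected value of checking feasibility: ${expected_value:,.0f}")
# -> Expected value of checking feasibility: $1,000,000,000
```

This is only a value-of-information sketch; it says nothing about whether the underlying proposal is physically plausible, which is exactly what the simulation would test.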

EAs are "busy coordinating retreats, so I don't have time to help connect you to someone for a new project." I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it. They're also the ones who can negotiate with all those countries; a charity would fail, there. As I said in my EA Forum post on the topic, six months ago: I am looking for people to have a conversation. I expect that any particular individual would not be able to help, so I hope that each of the people listening would instead ask their own circle - in just two or three steps, that social network can reach almost anyone. The EAs are consistently reticent, wondering instead if I want funding, or if I am trying to get hired by them.

This is indicative of a pattern:

EAs have formats for dialogue which each serve certain needs well - the EA Forum, Slack, Conferences, Retreats, and local gatherings. Unfortunately, a vital format is missing from that list: the 1 to 2 hour, 2 to 5 person in-depth discussion, which includes people from outside that particular academic clique of sub-causes. I keep requesting this format, and I am told to "go to the Forum, or Slack", where I have been waiting months for any response, now.

Replies from: Jeff_Kaufman, Khorton, Linch, DonyChristie, Anthony Repetto
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-12T00:05:07.090Z · EA(p) · GW(p)

where I have been waiting months for any response

You're referring to Seeking a Collaboration to Stop Hurricanes [EA · GW], right?

Reading your comments on that post, it sounds like you were hoping readers would respond to your post by making things happen. That's occasionally the way things go, but almost always if you want to move something along you need to drive it. For example, if you think this is possibly one of the most important things to do you could consider:

  • Seeking funding to work on this full-time. With your full-time work you could learn how to run a simulation, or attempt to convince the government to run one. You could also explore related hurricane-prevention proposals (for example, Myhrvold's proposal).

  • Seeking funding to hire someone to work on this full-time (but if you aren't willing to do it I expect funders to consider that a negative signal)

  • Actively looking for collaborators, for example by attempting to identify relevant academics

Replies from: Anthony Repetto, Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T05:13:18.938Z · EA(p) · GW(p)

Just to note for the next person: I am now being called "powerless and vulnerable" because I stand against being mis-represented. I have been mis-represented repeatedly, and so I have responded to each - yet, the fact that I  "repeatedly post (complaints about being misrepresented)... makes sure that not many people take (me) seriously." If your clique repeatedly mis-represents me, and then they use my own self-defense as a reason to justify exclusion, you've earned the title of clique!

Replies from: casebash
comment by Chris Leong (casebash) · 2022-05-13T07:10:33.638Z · EA(p) · GW(p)

I'm going to be honest. I think you'd have a better experience here if you engaged with people in a way that was less adversarial. I can understand why you might be frustrated that people didn't engage more with your ideas or that people misinterpreted what you wrote, but it seems to me that you're currently in a cycle: you felt you were mistreated or ignored, which leads you to send out negative energy, which then results in further negative interactions, and hence the cycle continues.

Replies from: Anthony Repetto, Anthony Repetto, Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T20:05:20.829Z · EA(p) · GW(p)

For some reason, my original response is not showing up. I definitely did NOT make any attack on anyone, during my comment. I don't see why it would be deleted - I request a review of whoever deleted my response. Here it is, again:

"We rich white people would give you so much more respect if you poor black people spoke nicely when you complained." <--- this argument has been used a thousand times, around the world, to get people to cower while you continue to disrespect them. I won't cower; I am right to be upset, and I expect an apology for being misrepresented by them. I am not wrong for requesting this.

Further, again, I am not talking about "lack of engagaement" - ONLY your people have made that claim, and I dismiss it each time you've made it. I continue to point-out: I have been repeatedly misrepresented. I deserve an apology.

comment by Anthony Repetto · 2022-05-13T07:32:11.032Z · EA(p) · GW(p)

More to the point: why is my tone the more pressing issue, compared to the fact that I've been repeatedly misrepresented? Your Us/Them priorities are showing.

Replies from: Jeff_Kaufman, casebash, casebash
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-13T14:14:28.797Z · EA(p) · GW(p)

I think the biggest reason why your tone is relevant here is that you are seeking introductions to potential collaborators. People care a lot about what others are like to work with!

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T21:59:30.893Z · EA(p) · GW(p)

I agree! So, consider the scenario: I stand-up and ask "does anyone know someone I might talk to?" and the response I get is "but we don't want to give you money". I correct that misrepresentation, repeatedly, until I suspect that I am being trolled - and my self-defense is used as a reason to ignore me. If I hadn't been poked-in-the-eye repeatedly, those introductions would begin on a pleasant footing.

Core to this problem: each of you are focusing on how I can "get better results by playing nice". I am focusing on "I was misrepresented, and that should be considered first, in the moral calculus." If I roll-over every time someone bullies me, then I'll be liked by a whole lot of bullies. That doesn't sound like a win, to me.

comment by Chris Leong (casebash) · 2022-05-16T10:07:48.164Z · EA(p) · GW(p)

I honestly think this is the one thing I could have said that could have helped you achieve your goals the most, more than offering a connection to a relevant person, if I actually knew someone interested in hurricane prevention.

comment by Chris Leong (casebash) · 2022-05-13T10:49:05.885Z · EA(p) · GW(p)

I suppose what you're saying makes sense from where you stand. I guess I'm trying to show you another way of seeing the world, even though I know it won't make sense from your perspective. I'd encourage you to imagine briefly what it would mean for this to be true, to explore what the world looks like through this lens, and its internal logic. I suspect that this exercise could be valuable even if you don't end up agreeing with this perspective.

Replies from: Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T22:41:34.642Z · EA(p) · GW(p)

When did you edit your response? You were saying something else, originally...

Yes, I can imagine the world where I respond to the misrepresentations with politeness - I did that for twenty years, and the misrepresentations continued, along with so many other forms of bullying. I have seen the world from that lens, and I learned that it's better for me to stand-up to misrepresentations, even if that means the bully doesn't like me.

Replies from: casebash, casebash
comment by Chris Leong (casebash) · 2022-05-14T00:43:27.955Z · EA(p) · GW(p)

I have no idea if I edited it or not. I tried checking to see if they had a history feature, but apparently not.

comment by Chris Leong (casebash) · 2022-05-13T23:15:17.330Z · EA(p) · GW(p)

Maybe I should have been clearer. I'm asking you to imagine the world where everyone isn't intrinsically against you: they've tried to help, and they've been pushed away. I know that's a difficult ask, but I suspect it would be worthwhile.

Replies from: anonymous_ea
comment by anonymous_ea · 2022-05-14T00:21:50.675Z · EA(p) · GW(p)

Strong downvote for extreme and inappropriate condescension in the guise of helping someone. There is no adequate reason for you to assume that Anthony is living in a world where everyone is intrinsically against him, and that he cannot even imagine not living in a different world. This is an extremely strong statement to make about someone you know through a few online comments. Why do you think you're right? 

Even if you were right, helping him would not take the form of trying to point this out publicly in such a tactless way. 

Replies from: casebash
comment by Chris Leong (casebash) · 2022-05-14T00:33:38.298Z · EA(p) · GW(p)

I can see why you might think it's a guise, but it really isn't the case. I think you're correct that it does come off as slightly condescending, but this isn't intentional. I'm trying to expand the range of what I can say without coming off as condescending, but there are some things where I find it challenging; where it feels to me like trying to thread a needle. In any case, your comment contains useful feedback.

I just want to make it clear that it's a genuine attempt to say the most helpful thing that I can, even if I think it only has a small chance of making a difference, but I agree that a private message might have been better. As for why I think what I think, it's mostly based on my experience of dealing with people. I could produce some explicit reasons if I really wanted to, but I'm not sure it's worthwhile given that they are more the sideshow than anything.

Replies from: anonymous_ea
comment by anonymous_ea · 2022-05-14T01:09:01.608Z · EA(p) · GW(p)

Thanks, this is a good followup. I'm glad my comment contained useful feedback for you. 

I think your attempt to help Anthony went awry when he asked you why his tone was the bigger issue than whether he had been misrepresented, and you did not even seem to consider that he could be right in your reply. [EA(p) · GW(p)] Perhaps he is right? Perhaps not? But it's important to at least genuinely consider that he could be.

Replies from: Anthony Repetto, Charles He, casebash
comment by Anthony Repetto · 2022-05-14T06:49:26.240Z · EA(p) · GW(p)

Thank you for recognizing that my concern was not addressed. I should mention, I am also not operating from an assumption of 'intrinsically against me' - it's an unusually specific reaction that I've received on this forum, in particular. So, I'm glad that you have spoken-up in favor of due consideration. My stomach knots thank you :)

comment by Charles He · 2022-05-14T05:35:37.870Z · EA(p) · GW(p)

I don’t feel good about this situation, but I think your judgement is really different from most reads of what happened:

  • It’s clear to me that there’s someone who isn’t communicating or creating beliefs in a way that would be workable. Chris Leong’s comments seem objectively correct (if not likely to be useful).
  • (While committing this sin with this comment itself) It’s clearly better to walk away and leave them alone than risk stirring up another round of issues.
Replies from: casebash
comment by Chris Leong (casebash) · 2022-05-14T09:27:17.612Z · EA(p) · GW(p)

My comment very well may not be useful. I think there's value in experimenting with different ways of engaging with people. I think it is possible to have these kind of conversations but I don't think that I've quite managed to figure out how to do that yet.

Replies from: Charles He
comment by Charles He · 2022-05-14T10:52:09.770Z · EA(p) · GW(p)

I think the person involved is either having a specific negative personal incident, or revealing latent personality traits that suggest the situation is much less promising and below a reasonable bar for skilled intervention in a conversation.

With a willingness to be wrong and to ignore norms, I think I could elaborate or make informative comments (maybe relevant to trust, scaling, and dilution, which seem to be major topics right now?). But it feels distasteful and inhumane to do this to one individual who is not an EA.

(I think EAs can and should endure much more, directly and publicly, and this seems like it would address would-be problems with trust and scaling.)

comment by Chris Leong (casebash) · 2022-05-14T02:11:15.348Z · EA(p) · GW(p)

That's useful feedback. I agree that it would have been better for me to engage with that more.

Replies from: anonymous_ea
comment by anonymous_ea · 2022-05-17T15:24:41.659Z · EA(p) · GW(p)

Glad to have been helpful :)

comment by Anthony Repetto · 2022-05-13T20:07:14.583Z · EA(p) · GW(p)

It seems someone is deleting my posts, when I have not said anything in those posts except my own self-defense and what has been done to me. Here it is, again:

I am waiting for an apology from them - I don't know why I should be pleasant after repeatedly being disrespected. That sounds like you're asking me to "be a good little girl, and let them be mean to you, because if you're good enough, then they'll start to be nice." It's not a fault upon me that I should 'be nice until they like me' - they misrepresented me, which is the issue, NOT "lack of engagement".

Replies from: Jeff_Kaufman, Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-13T21:28:35.728Z · EA(p) · GW(p)

I still see that comment at https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation?commentId=6NRE6vxA5rhAC8cQP [EA(p) · GW(p)]

I think it's showing up as collapsed by default because it has been heavily downvoted?

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T21:55:33.248Z · EA(p) · GW(p)

Thank you for letting me know.

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-13T21:26:29.184Z · EA(p) · GW(p)

I still see your comment at https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation?commentId=CdszwoXwCcZpdxxWd [EA(p) · GW(p)]

I think it is displaying as collapsed by default because it was heavily downvoted?

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T21:29:21.707Z · EA(p) · GW(p)

Thank you for the clarification. It's still worrisome that a subset, by downvoting, can ensure that my correction of their misrepresentation goes un-noticed, while their misrepresentation of me stands in full view. There was another post on the Forum, recently, talking about how outsiders worry that EA is a cult or a clique - I hope you can see where that concern is coming from, when my self-defense is downvoted to obscurity, while the misrepresentations stand.

comment by Anthony Repetto · 2022-05-13T07:19:20.139Z · EA(p) · GW(p)

I am waiting for an apology from them - I don't know why I should be pleasant after repeatedly being disrespected. That sounds like you're asking me to "be a good little girl, and let them be mean to you, because if you're good enough, then they'll start to be nice." It's not a fault upon me that I should 'be nice until they like me' - they misrepresented me, which is the issue, NOT "lack of engagement".

comment by Anthony Repetto · 2022-05-13T07:37:11.660Z · EA(p) · GW(p)

"We rich white people would give you so much more respect if you poor black people spoke nicely when you complained." <--- this argument has been used a thousand times, around the world, to get people to cower while you continue to disrespect them. I won't cower; I am right to be upset, and I expect an apology for being misrepresented by them. I am not wrong for requesting this.

comment by Anthony Repetto · 2022-05-13T04:05:24.514Z · EA(p) · GW(p)

I am not looking for funding - I asked if anyone is interested in running those simulations, or knows someone they could put me in touch with.

I quote from my post directly above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

I'm appalled that multiple times, here now and when I posted originally, after stating that I am NOT seeking funding, I am repeatedly misrepresented as "seeking funding". It's a basic respect to read what I actually wrote.

Included in my hope for connections are the relevant academics - I began my search at the EA Berkeley campus chapter. I know that the government would not listen to me without at least a first-pass simulation of my own; and I know that it is ludicrous for me to invest time into developing a skill that others possess, or re-inventing the wheel by making my own simulation. Those are all significantly more wasteful and ineffectual than asking people if they know anyone in related fields - this is because social networks are dense graphs, so only two or three steps away, I am likely to find an appropriate specialist. Your advice is not appropriate.
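The "two or three steps" claim rests on how quickly acquaintance networks fan out. A minimal sketch, assuming each person has on the order of 150 distinct acquaintances (a Dunbar-style estimate, not a measured figure):

```python
# Rough upper bound on people reachable within n steps of a social network,
# assuming each person has about k distinct acquaintances (k is an assumption).
# Real networks have heavy overlap between acquaintance circles, so this
# overstates the true reach; it only shows the order of magnitude.
k = 150
for n in (1, 2, 3):
    print(f"within {n} step(s): up to ~{k**n:,} people")
```

Within three steps this bound is already in the millions, which is why the commenter expects a relevant specialist to sit only a few introductions away; overlap between circles shrinks the true figure but rarely by enough to change the conclusion.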

At no point did I ask the readers to do the work of simulation, or proposing to the government, on my behalf; you used a strawman against me. I am specifically asking if people can look in their social network for anyone with relevant skill-sets, who I might talk to - those skilled folks are the place where I'm likely to find someone who would actually do work, not here on the forums with a string of dismissive 'help' and fallacies.

Replies from: Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-13T11:08:12.442Z · EA(p) · GW(p)

I can think of two main reasons why your posts haven't resulted in introductions to relevant specialists:

  • People with those connections haven't seen your posts.

  • Such people have seen your posts but don't consider this opportunity sufficiently promising to pass it on.

While many people do read the Forum, it wouldn't be surprising if no one who knew a relevant expert had seen your post, since there aren't that many relevant experts. And even if they have, when you give someone an introduction you are staking some of your social capital, and based on your initial post and comments here I would not, personally, be willing to stake such capital.

I had seen that you'd written that you weren't looking for funding, and my post above doesn't suggest that you were. Instead, I was suggesting that you do and giving ideas on how you might use funding to make progress on this project. After reading your responses here, however, I withdraw that suggestion.

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T22:19:39.763Z · EA(p) · GW(p)

I apologize for lumping your funding-suggestion along-side others' funding-misrepresentation. I see that you are looking for ways to make it possible, and funding is what came to mind. Thank you.

(I am still surprised that funding is continually the first topic, after I specify that the government is the best institution to finance such a project. EA would go bankrupt, if they tried to stop hurricanes...)

And, I understand if people don't consider my proposal promising - I am not demanding that they divert resources, especially funds which are best spent on highest guaranteed impact! Yet, there is a cliquishness in excluding diverse dialogue based upon "social capital/reputation" - I hope you can see that the social graph's connectivity falls apart when we cut those ties.

It's also odd that the only data-point used to evaluate me would be the slice of time immediately after I'd been prodded repeatedly. I wish I could hand you the video-tapes of my life, and let you evaluate me rightly. When I am repeatedly misrepresented, defending myself, then you don't see a representative slice of who I am.

Worst of all, no measure of my persona or character is a measure of the worth of a thought. If I am not a good fit for making it happen, then the best I can do is find someone who fits it well. The idea itself stands or falls on its own merits, and measuring me ignores that. I won't know if it's worth doing until I have a simulation, at least. I don't know how anyone else has certainty on the matter, especially from such a noisy proxy as "perceived tone via text message".

Replies from: Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-05-13T22:39:39.261Z · EA(p) · GW(p)

I am still surprised that funding is continually the first topic, after I specify that the government is the best institution to finance such a project. EA would go bankrupt, if they tried to stop hurricanes...

The reason I brought up funding was not that I thought it might make sense for EAs to fund the entire thing, but that it might allow you to address the reasons your proposal is currently stalled. I gave a few ideas of specific things you might do with funding:

  • Free up your time to learn how to run a simulation.
  • Free up your time for lobbying.
  • Exploring existing work on hurricane prevention.
  • Hiring someone else to do any of the above.
Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T22:46:41.955Z · EA(p) · GW(p)

Yes, I understand that funding can let me hire people to do that work - and I don't need funding to free my time. I understand that, if I delay for the sake of doing-it-alone, then I am responsible for that additional harm. It doesn't make sense for me to run a simulation or lobby by myself; and I've been in the position of hiring people, as well as working with people who are internally motivated. I hoped to find the internally motivated people, first - that's why I asked EA for connections, instead of just posting something on a job site.

comment by Anthony Repetto · 2022-05-13T03:54:20.115Z · EA(p) · GW(p)

I am not looking for funding - I asked if anyone is interested in running those simulations, or knows someone they could put me in touch with.

Included in that are the relevant academics - I began my search at the EA Berkeley campus chapter. I know that the government would not listen to me without at least a first-pass simulation of my own; and I know that it is ludicrous for me to invest time into developing a skill that others possess, or re-inventing the wheel by making my own simulation. Those are all significantly more wasteful and ineffectual than asking people if they know anyone in related fields - this is because social networks are dense graphs, so only two or three steps away, I am likely to find an appropriate specialist. Your advice is not appropriate.

At no point did I ask the readers to do the work of simulation, or proposing to the government, on my behalf; you used a strawman against me. I am specifically asking if people can look in their social network for anyone with relevant skill-sets, who I might talk to - those skilled folks are the place where I'm likely to find someone who would actually do work, not here on the forums with a string of dismissive 'help' and fallacies.

comment by Khorton · 2022-05-11T21:49:22.226Z · EA(p) · GW(p)

It is not reasonable to expect people to spend 1 to 2 hours listening to an idea that is not relevant to them.

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T03:58:37.093Z · EA(p) · GW(p)

You espouse a bizarre cliquishness, by your claim. Look at the outcome it generates: you fail to hear new information from outside your bubble. Your claim does not become virtuous or correct, nor does it facilitate progress - you're just claiming it to feel right.

Replies from: Charles He
comment by Charles He · 2022-05-13T04:33:38.247Z · EA(p) · GW(p)

(Here is a brutal answer but maybe helpful at this point.)

I haven’t read every comment of yours but my sense is that you are frustrated that no one has engaged with your idea.

One issue with this sentiment is that there is little or nothing in EA that is like “a fully general engine” for taking someone’s paragraphs of thoughts and executing on them. (This isn't strictly true, but the exceptions are complicated/political to explain.)

EA provides a lot of resources, but accessing them takes some legwork and demonstration to get going. This price of entry is a good thing. There is a lot of horsepower and leadership needed to execute even a pretty obvious-appearing project that could almost immediately get seed or exploratory funding.

An example is the snakebite post which seems like a great potential project. But even this project’s execution is uncertain.

To be concrete: you can make substantially more progress and produce much more substantial, credible work, and no one needs to provide this for you. In fact it's much, much better that you do it yourself, at least in a first stage, if you want your project to succeed.

(I think the above is brutal but maybe helpful. The below is brutal and probably unhelpful but it’s useful for onlookers).

There is a large supply of people who have ideas or public complaints that don’t seem to achieve progress.

When examined, the underlying issue often isn't related to their idea in principle.

The reality is that many patterns of forum use or complaint indicate a lack of effectiveness (or, distinctly, telegraph a sense of powerlessness and sometimes vulnerability). So their repeated posting is a sort of anti-pattern that ensures not many people take them seriously.

Replies from: Anthony Repetto
comment by Anthony Repetto · 2022-05-13T04:56:54.350Z · EA(p) · GW(p)

I am frustrated that I am repeatedly misrepresented, which is what I said in my responses. I am not frustrated by a lack of "people doing leg work for me". I am specifically asking if anyone has connections toward the relevant specialists, so that I can talk to those specialists. I'm not sure why that would be "something I should do on my own" - I'm literally reaching out to gather specialists, which is the first leg work, obviously. Re-inventing the wheel to impress an audience by "going it alone" is actually counter-productive.

I don't need a "fully general engine" - you are misrepresenting my request, as others have. I am asking if anyone knows someone with the relevant background. I am NOT asking for funding, nor a general protocol that addresses every post. Those are strawmen. No one has apologized for these strawmen; they just ghost the conversation.

And, if you are using the fact that I stood-up to repeated mis-representations as "telegraph a sense of powerlessness and sometimes vulnerability", and as a result, I should not be taken seriously, then you are squashing the only means of recourse available to me. When my request is repeatedly mis-represented, and I respond to each of them, I am necessarily "repeatedly posting" - I'm not sure why that noisy proxy for "lack of effectiveness" is a better signal for you than actually reading what I wrote.

Replies from: Charles He
comment by Charles He · 2022-05-13T05:18:08.307Z · EA(p) · GW(p)

You’ve responded with hostility and intense frustration to Linch and Khorton, who are goofy but well-meaning people. That’s really bad and you should stop writing like this. (EDIT: also Jeff Kaufman.)

(Note that I suspect there is something unseemly about my personal conduct in replying to you. To myself, in my head, I think I am doing it because it provides useful information to onlookers, but this would be mansplaining in other circumstances. I need to think about this.)

The brutal truth is that “specialist” access is sort of like gold. I, and most people, wouldn’t give someone with this account access to any specialists, partly because this is unpromising but also because these relationships are valuable and reflect on them.

Separately, I think hard, esoteric projects in EA deserve real seed or exploratory funding. I am not really following, to be honest, but the fact that you have this thread about misrepresentation might be because there is some underlying issue: you don’t understand this, or how projects are executed.

Replies from: Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T06:07:00.625Z · EA(p) · GW(p)

It's also telling that, though I pointed out how you sought to use "repeated posting" as a proxy for my "powerlessness and vulnerability...lack of effectiveness", you made no mention of it afterwards. Judging someone on such shallow evidence is the opposite of skeptical inquiry; it doesn't bode well for your own effectiveness. Am I being hostile when I say that to you, while you are not hostile when you say it to me first?

comment by Anthony Repetto · 2022-05-13T05:41:30.894Z · EA(p) · GW(p)

When I am repeatedly misrepresented, and no one who does so responds with an apology, I am supposed to adhere to your standards of dialogue? Why are my standards not respected, first?

If specialist access is gold, then what do I need to pay them? I'll figure out funding separately - who, and how much?

Exploratory work is great - yet, as Jeff was saying in this exact thread's original post, EA needs to be willing to take the leap on risky new ideas. That was also the part of his post that I quoted in my original response. Do you see how they relate to what we are talking about? Perhaps EA should take a risk and connect me to a specialist, and if EA thinks that specialist should be paid, I'll work that out next.

comment by Linch · 2022-05-11T22:09:08.042Z · EA(p) · GW(p)

Have you talked to Sella Nevo, who does flood prediction at Google?

My own hot take here is that if you spent 1 billion dollars of EA money to save the US gov't 1 trillion dollars, you've likely wasted >900 million dollars. But I know other people are more optimistic about US gov't funding priorities (or more pessimistic about EA uses of money).

Replies from: Anthony Repetto, Anthony Repetto
comment by Anthony Repetto · 2022-05-13T04:06:37.642Z · EA(p) · GW(p)

I quote from above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

I'm NOT asking to use EA money - I repeatedly clarify that, at every opportunity, and yet it is insisted upon multiple times on this forum. No, EA only has $30B, so you can't afford to stop hurricanes even if you spent your entire budget. I pointed to the potential value of looking for solutions at $1B, which is the actual expected value, NOT the 'value to EA'. I'm not trying to take ANY dollars from your charities. Do you hear that yet? I don't appreciate being repeatedly misrepresented and strawmanned.

Are you suggesting that I cold-call Sella Nevo? Do you have a way to put me in touch, so that I am not ignored, as I have been here?

Replies from: Linch
comment by Linch · 2022-05-13T15:38:42.445Z · EA(p) · GW(p)

I did not say you were looking for funding. I am sorry to the degree I am responsible for miscommunication, or if I unintentionally upset you in any way. I am always trying to be better at communication. I hope you have a good day. 

Replies from: anonymous_ea, Anthony Repetto
comment by anonymous_ea · 2022-05-14T04:43:53.556Z · EA(p) · GW(p)

Upvoted for the last three sentences, but I believe your first sentence is incorrect. The second paragraph of your initial comment does not make sense to me in the absence of you believing that Anthony was looking for funding. 

Replies from: Linch, Charles He
comment by Linch · 2022-05-14T13:05:34.076Z · EA(p) · GW(p)

I was not intentionally suggesting that Anthony was asking for a billion dollars in funding. It's strange to me that >=2 people would read my comment that way. I'm again sorry for any miscommunication.

I don't think it's prudent for me to engage further in this thread, even though this type of thing naturally draws me in. I will donate $10 to Homeopaths Without Borders if I comment further.

I hope you have a good day.

Replies from: anonymous_ea
comment by anonymous_ea · 2022-05-14T14:31:55.661Z · EA(p) · GW(p)

My read on your comment is that you misread Anthony's allusion to $1b as about potentially spending $1b at some stage (whether right now or later), rather than about the expected impact of his idea. I could be wrong, but that's the only way your comment makes sense to me ("if you spent $1b of EA money" - what could this refer to besides spending $1b of money?). 

Anthony is asking for a connection to someone who is skilled at running a particular kind of simulation, to see if his idea has potential. He believes that the value of checking his idea might be $1b, because of potentially trillions of dollars' worth of gains. Crucially, it would not take $1b to check his idea - that figure is an estimate of the potential value of checking the idea, not of the cost of checking it. The cost of checking is probably something like the social capital to connect him with a relevant person, plus the costs involved in running the simulation (if it progresses to that stage).

I don't think this was a bad mistake on your end, just a quick, incorrect assumption that you made while trying to help someone. It only led to a fractious response because so many other EAs have also misread and misunderstood Anthony, and he is naturally tired and upset by this. In my opinion, the fault here lies mostly with social dynamics rather than any one person acting particularly badly. 

I appreciate your attempts to engage productively (including deciding not to engage if that seems better to you), take responsibility for any mistakes you may have made, and without assigning blame to other parties. That is a clear positive to me. 

Hope you have a good day as well :)

comment by Charles He · 2022-05-14T05:45:59.763Z · EA(p) · GW(p)

The “misrepresentation” about a search for funding was related to money for a personal project or org to develop the intervention.

The second paragraph was about funding for the intervention itself.

They are really different things. Like the difference between an org researching food aid, versus buying billions of dollars of actual food.

I doubt the person believes he can literally stop hurricanes without government funding.

Unfortunately, I think you are muddying the waters with your intervention. Given my read of the relevant person, this might not serve them well.

comment by Anthony Repetto · 2022-05-13T20:11:42.372Z · EA(p) · GW(p)

My posts where I expressed what had been misrepresented and requested apologies have been deleted. And now, you apologize, after those deletes. I am suspicious. Why is your crew hiding the times I clarified and defended myself?

In truth, you DID talk about "Anthony getting EA funding" when you said "My hot take here is that if you spend $1B of EA money...". So don't lie to me. You did, in fact, say that I would take funding. I hope your apology is real, and not just saving face by pretending you did nothing to misrepresent me. You did misrepresent me. Can you admit that?

comment by Anthony Repetto · 2022-05-13T03:57:15.707Z · EA(p) · GW(p)

I'm not asking to use EA money - I repeatedly clarify that, at every opportunity, and yet it is insisted upon multiple times. No, EA only has $30B, so you can't afford to stop hurricanes even if you spent your entire budget. I pointed to the potential value of looking for solutions at $1B, which is the actual expected value, NOT the 'value to EA'. I don't appreciate being repeatedly misrepresented and strawmanned.

Are you suggesting that I cold-call Sella Nevo? Do you have a way to put me in touch, so that I am not ignored, as I have been here?

comment by DonyChristie · 2022-05-15T22:15:41.709Z · EA(p) · GW(p)

As someone who knows Anthony in person and has engaged in more high-bandwidth communication with him than anyone else on this thread, I am happy to stake social capital on his insights being very much worth listening to, broadly speaking, and on his being worth connecting to anyone who could give his ideas legs.

I have downvoted at least one comment in this thread that I felt was not conducive to more of his ideas being externalized into the world, due to what I believe to be an unnecessary focus on social norms/tone policing over tracking object-level ideas. I am not responding further, nor am I responding to particular comments, as I don't want to feed the demon thread, but I do want to provide clarity on my judgement of what is in the right, and also state that I think Anthony could very possibly provide us Cause X as much as anyone I've seen.

To that end, I believe his interest in new/different infrastructure for how to communicate and internalize ideas is reasonable, and that it's unreasonable to expect idea providers to also have to be the idea executors in the ideal impact marketplace, especially to the extent of expecting them to engage in implicit politics more than is strictly necessary to get the ball rolling.

Replies from: Lorenzo Buonanno, Charles He
comment by Lorenzo (Lorenzo Buonanno) · 2022-05-15T22:23:50.633Z · EA(p) · GW(p)

"it's unreasonable to expect idea providers to also have to be the idea executors in the ideal impact marketplace"

I could be wrong, but I think that most people think that the key bottleneck is "idea executors", not "idea providers". (E.g. I heard Charity Entrepreneurship has many intervention ideas, but even after extensive selection and training they are bottlenecked by finding enough founders).

So one shouldn't be surprised if they share a great idea and it doesn't get any traction; that seems to be the current state of things.

comment by Charles He · 2022-05-16T05:54:15.077Z · EA(p) · GW(p)

I think it’s important that this actually involves staking social capital, because I would otherwise find such a revision of very negative behaviour, based on what is clearly external friendship (as well as the mass upvoting), more problematic than anything else that has occurred.

Imagine if everyone did this for their friends/enemies on the forum.

comment by Anthony Repetto · 2022-05-13T04:17:21.394Z · EA(p) · GW(p)

Quote from above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

Hopefully, you read this comment BEFORE saying something like "But EA shouldn't spend $1B on your idea" or "So you want us to fund this?"

I've received numerous misrepresentations, each insisting that I am somehow asking for EA money. You demonstrate how poorly you pay attention; I'll copy the quote again, in case anyone has forgotten by now:

"I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

Why am I repeatedly addressing such an obvious and shallow misrepresentation? What is going on with these people?

comment by Anthony Repetto · 2022-05-13T06:20:16.495Z · EA(p) · GW(p)

It's telling that, among the four people responding to my earlier comment, all four repeatedly misrepresented me. None of them have apologized. Instead of addressing the issue I presented, my credibility as a speaker was questioned as well. Then one of the respondents came to the defense of the others, none of whom had apologized. This isn't rational discussion; this is a troll cave. I expect better, if you hope to be a healthy organization over the long term.