FTX/CEA - show us your numbers!

post by Jack Lewars (jlewars) · 2022-04-18T12:05:25.707Z · EA · GW · 112 comments

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

It seemed at EAG that discussions focussed on two continuums:

Neartermist <---> Longtermist

Frugal spending <---> Ambitious spending

(The labels for the second one are debatable but I'm casually aiming for ones that won't offend either camp.)

Finding common ground on the first has been an ongoing project for years.

The second is much more recent, and it seems like more transparency could really help to bring people on opposite sides closer together.

Accordingly: could FTX and CEA please publish the Back Of The Envelope Calculations (BOTECs) behind their recent grants and community building spending?

(Or, if there is no BOTEC and it's more "this seems plausibly good and we have enough money to throw spaghetti at the wall", please say that clearly and publicly.)

This would help in several ways:

  1. for sceptics of some recent spending, it would illuminate the thinking behind it. It would also let the community kick the tires on the assumptions and see how plausible they are. This could change the minds of some sceptics, and potentially improve the BOTECs/thinking
  2. it should help combat misinformation. I heard several people misrepresent (in good faith) some grants, because there is not a clear public explanation of the grants' theory of change and expected value. A shared set of facts would be useful and improve debate
  3. it will set the stage for future evaluation of whether or not this thinking was accurate. Unless we make predictions about spending now, it'll be hard to see if we were well calibrated in our predictions later

Objection: this is time consuming, and this time is better spent making more grants/doing something else

Reply: possibly true, and maybe you could have a threshold below which you don't do this, but these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA. The dominant theme of my discussions at EAG was some combination of anxiety and scorn about recent spending. If this is too time-consuming for the current FTX advisers, hire some staff (Open Phil has ~50 for a similar grant pot and believes it'll expand to ~100).

Objection: why drag CEA into this?

[EDIT: I missed an update on this last week and now the stakes seem much lower - but thanks to Jessica and Max for engaging with this productively anyway: https://forum.effectivealtruism.org/posts/xTWhXX9HJfKmvpQZi/cea-is-discontinuing-its-focus-university-programming [EA · GW]]

Reply: anecdata, and I could be persuaded that this was a mistake. Several students, all of whom asked not to be named because of the risk of repercussions, expressed something between anxiety and scorn about the money their own student groups had been sent. One said they told CEA they didn't need any money and were sent $5k anyway and told to spend it on dinners. (Someone from CEA please jump in if this is just false, or extremely unlikely, or similar - I do realise I'm publishing anonymous hearsay.) It'd be good to know how CEA is thinking about spending wisely as they are very rapidly increasing their spending on EA Groups (potentially to ~$50m/year).

Sidenote: I think we have massively taken Open Phil for granted, who are exceptionally transparent and thoughtful about their grant process. Well done them.

112 comments

Comments sorted by top scores.

comment by jessica_mccurdy · 2022-04-18T16:58:10.099Z · EA(p) · GW(p)

Hi Jack,

Just a quick response from CEA's groups team's end.

We are processing many small grants and other forms of support for community building (CB), and we do not have the capacity to publish BOTECs on all of them.

However, I can give some brief heuristics that we use in the decision-making.

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably at best have a neutral impact on the world. We would love for these students to instead focus on solving the world's biggest and most important problems.

Based on the current amount available in EA, its projected growth, and the value of getting people working in EA careers, we currently think that spending at least as much as McKinsey does on recruiting pencils out in expected value terms over the course of a student’s career. There are other factors to consider here (i.e. double-counting some expenses) that mean we actually spend significantly less than this. However, as Thomas said - even small chances that dinners could have an effect on career changes make them seem like effective uses of money. (We do have a fair amount of evidence that dinners do in fact have positive effects on groups.)
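
Jessica's heuristic can be made concrete with a rough BOTEC sketch. Every number below is a hypothetical placeholder chosen for illustration (not CEA's, or anyone's, actual estimate):

```python
# Illustrative BOTEC for per-school recruiting spend.
# All figures are hypothetical placeholders, NOT actual CEA numbers.

recruiting_spend_per_school = 1_000_000        # $/year, the McKinsey-style benchmark
expected_recruits_per_school = 2               # career changes per school-year (hypothetical)
counterfactual_discount = 0.3                  # fraction who wouldn't have changed careers anyway
value_per_counterfactual_career = 10_000_000   # $ lifetime impact estimate (hypothetical)

# Expected value of one school-year of recruiting, after discounting
expected_value = (expected_recruits_per_school
                  * counterfactual_discount
                  * value_per_counterfactual_career)
ratio = expected_value / recruiting_spend_per_school

print(f"Expected value per school-year: ${expected_value:,.0f}")
print(f"Benefit/cost ratio: {ratio:.1f}x")
```

The point is only the shape of the calculation: recruits times a counterfactual discount times per-career value, compared against the benchmark spend. Anyone sceptical of the heuristic can swap in their own estimates and see where the ratio drops below 1.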

As for your comment on funding student groups, we haven't sent money to any group that has not asked for it. It is plausible that one of us encouraged them to ask for more, since we do think it is a good use of money and would like groups to think ambitiously. We have a list of common group expenses with some tips at the bottom (including considerations on optics).

Given the current landscape, we think missing out on great people and great opportunities is a huge loss. This is especially true if you think there are heavy tails in the amount of impact individuals have. We have thought a lot about our funding guidelines and suggestions, and feel comfortable with our current approach, though we are constantly reviewing and updating as the landscape changes.

We appreciate your concern and are always eager for feedback. If you (or others) want to expand on this post with a more in-depth, comprehensive version of this feedback, we'd be open to responding to this in more depth as well.

(The below is copied from a comment by Max Dalton below and I am adding it here for visibility)

"By the way, we are not planning to spend $50m on groups outreach in the near future. Our groups budget is $5.4m this year.

Also note that our focus university program is passing to Open Philanthropy [EA · GW]."

Replies from: Lucas Lewit-Mendes, Larks, jlewars, Vivian.Lwd, Paige_Henchen, Guy Raveh, MaxRa
comment by Lucas Lewit-Mendes · 2022-04-19T12:45:25.782Z · EA(p) · GW(p)

Hi Jessica, 

Thanks for outlining your reasoning here, and I'm really excited about the progress EA groups are making around the world. 

I could easily be missing something here, but why are we comparing the value of CEA's community building grants to the value of McKinsey etc.?

Isn't the relevant comparison CEA's community building grants vs other EA spending, for example GiveWell's marginally funded programs (around 5x the cost-effectiveness of cash transfers)? 

If CEA is getting funding from non-EA sources, however, this query would be irrelevant. 

Looking forward to hearing your thoughts :) 

Replies from: Nathan_Barnard
comment by Nathan_Barnard · 2022-04-20T16:56:25.417Z · EA(p) · GW(p)

I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.

If this is the case - which I think is also supported by the success of well-funded groups with full- or part-time organisers - and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.

The relevant comparison is then comparing the value of the marginal student recruited with malaria nets etc.

Replies from: Lucas Lewit-Mendes
comment by Lucas Lewit-Mendes · 2022-04-22T10:46:27.476Z · EA(p) · GW(p)

Thanks Nathan, that would make a lot of sense, and motivates the conversation about whether CEA can realistically attract as many people through advertising as Goldman etc.

I guess the question is then whether: 

a) Goldman's activities are actually effective at attracting students; and

b) This is a relevant baseline prior for the types of activities that local EA groups undertake with CEA's funding (e.g. dinners for EA scholars students)

comment by Larks · 2022-04-18T21:09:11.180Z · EA(p) · GW(p)

Just a quick response from CEA's groups team's end.

...

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably at best have a neutral impact on the world.

I'm surprised to see CEA making such a strong claim. I think we should have strong priors against this stance, and I don't think I've seen CEA publish conclusive evidence in the opposite direction.

Firstly, note that these three companies come from very different sectors of the economy and do very different things. 

Secondly, even if you assign high credence to the problems with these firms, it seems like there is a fair bit of uncertainty in each case, and you are proposing a quite harsh upper bound - 'probably at best neutral'.

Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.

Finally, there are in fact for each of these firms a bunch of concrete benefits they provide. Rarely do I see these explicitly weighed in the calculus against the problems:

  • Facebook allows people to keep in touch with friends and relatives, to share their thoughts and news about their lives, and to meet like-minded new friends. Certainly I have personally made many new friends over Facebook, and engaged in many good discussions. It also allows advertisers to show their products to the people who are most likely to appreciate them, saving others from having their time wasted with irrelevant ads.
  • McKinsey provides advice and allows for the diffusion of best practices from leading firms to others in the economy. They can also help management overcome internal veto players and other opposition to change by helping supply credibility to decisions. For some types of consulting (though a little different to what McKinsey mainly does) we even have RCTs showing that they improve productive efficiency.
  • Goldman's trading arm provides a wide range of services to market participants, like research, prime brokerage and market making, that are necessary to help keep markets efficient. They also provide investment banking services, allowing companies and governments to raise money to finance projects, and retail banking, giving ordinary people higher interest rates than they'd get from their legacy banks. 

It's possible that there has been some explicit analysis of these firms to support your very strong statement. I searched on the forum for 'McKinsey' to try to find it, but at least the first page or so of results were generally positive references - e.g. people quoting their work on climate change, or positively referencing how they would address a problem. 80k does have an old article with some cursory analysis of the harms of finance, but the analysis is seriously flawed [EA(p) · GW(p)], and it doesn't cover Management Consulting or Social Networks at all.

Replies from: nonn, jessica_mccurdy, MichaelStJules, Charles He, calebp, MichaelStJules, Linch
comment by nonn · 2022-04-19T00:34:14.380Z · EA(p) · GW(p)

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but seems like best-case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point of spending money to get EAs
  • Most students require a justification for any charity spending significant amounts of money on movement building, and 'competing with McKinsey' reads favorably

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

comment by jessica_mccurdy · 2022-04-19T12:50:48.212Z · EA(p) · GW(p)

Sorry, I was trying to get a quick response to this post and I made a stronger claim than I intended. I was trying to say that I think that EA careers are doing much more good than the ones mentioned on average and so spending money is a good bet here. I wasn’t intending to make a definitive judgment about the overall social impact of those other careers, though I know my wording suggests that. I also generally want to note that this element was a personal claim and not necessarily a CEA endorsed one. 

Replies from: Charles He
comment by Charles He · 2022-04-20T09:51:04.054Z · EA(p) · GW(p)

This was a great comment and thoughtful reply and the top comment was great too.

Looking at the other threads generated from the top comment, it looks like tiny turns of phrase in that top comment produced (unreasonably) large amounts of discussion.

I think we all learned a valuable lesson about the importance of clarity and precision when commenting on the EA forum.

Replies from: Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-04-22T19:39:02.725Z · EA(p) · GW(p)

FYI I would have upvoted this if not for the final paragraph

comment by MichaelStJules · 2022-04-19T04:09:09.439Z · EA(p) · GW(p)

Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.

I consider this to be a pretty weak argument, so it doesn't contribute much to my priors, which although weak (and so the particulars of a company matter much more), are probably centered near neutral on net welfare effects (in the short to medium term). I think a large share of goods people buy and things they do are harmful to themselves or others before even considering the loss of income/time as a result, or worse for them than the things they compete with. It's enough that I wouldn't have a prior strongly in favour of what profitable companies are doing being good for us. Here are reasons pushing towards neutral or negative impacts:

  1. A lot of goods are mostly for signaling, especially signaling wealth, which often has negative externalities and I'd guess little positive value for the individual. Brand name versions of things, clothing, jewelry, cars.
  2. Many modern ways people spend their time (enabled by profitable companies) have probably made us less active, more indoor-bound, less close with others, and less pursuant of meaning and meaningful goals, which may conflict with people's reflective preferences, as well as generally be bad for health, mental health and other measures of wellbeing. Basically a lot of the things we do on our computers and phones.
  3. Many things are stimulating and addictive, and companies are optimizing for want, not welfare. Want and welfare can come apart when we optimize for want. So we get cigarettes, addictive video games, junk food, algorithms optimizing for clicks when we'd be better off stepping away from the internet or doing more substantial things online, and lots of salt, sugar and calories in our foods.
  4. Media companies may optimize for revenue over accurate reporting. This includes outrage, playing to our fears, demonizing and polarization.
  5. Some companies make us want their stuff for fear of missing out or social pressure, so it can be closer to coercion than providing a valuable opportunity.
  6. I'd guess relatively little is spent on advertisement for things that we have good evidence for improving our welfare, because most of those things are hard to profit from: basic healthy foods, exercise (although there are certainly exercise products and programs that get advertised, but less so just gym memberships, joining sports leagues, running outside), just spending more time with your friends and family (in cheap ways, although travel and amusement parks are advertised), pursuing meaning or meaningful goals, helping others (even charity ads are relatively rare). So, advertisement seems to push us towards things that are worse for us than the alternatives we'd have gone with. To capitalize on the things that do make us substantially better off, companies may sell us more expensive versions that aren't (much) better or things to go with them that don't substantially help.
  7. I'd expect a lot of hedonic adaptation for many goods and services, but not mental health (almost by definition), physical pain and to a lesser extent general health and mobility, which are worsened by a lot of the things companies provide, directly or indirectly by competing with the things that are better for health.
  8. Company valuations don't usually substantially reflect their externalities, and shorting companies is riskier and more costly than buying and holding shares, so this biases markets towards positively valuing companies even if their overall value for the world is negative.
  9. There are often negative externalities on nonhuman animals in particular, although the overall effects on nonhuman animals may be complicated when you also consider the effects on wild animals.

I do think it's plausible McKinsey and Goldman have done and do more good than harm for humans in the short term, based on the arguments you give, but I don't have a strong view either way. It could depend largely on whether raising people's consumption levels makes them better off overall (and how much) in the places where people are most affected by these companies. Measures of well-being do seem to positively correlate with income/wealth/consumption at the individual level, and I'd guess also at the aggregate level for developing countries, but I'd guess not for developed countries, or at best weakly so. There are negative externalities for increasing an individual's income on others' life satisfaction, although it's possible a large share is due to rescaling, not actually thinking your life is worse absolutely than otherwise. See:

  1. Haushofer, J., Reisinger, J., & Shapiro, J. (2019). Is your gain my pain? Effects of relative income and inequality on psychological well-being.
    1. Based on GiveDirectly in Kenya. They had multiple measures of wellbeing, but negative effects were only observed for life satisfaction for non-recipient households of cash transfers in the same village. See Table A5.
  2. This table from Veenhoven, R. (2019). The Origins of Happiness: The Science of Well-Being over the Life Course., reproduced in this post [EA · GW].
  3. This graph, reproduced in this post [EA · GW].
  4. Other writing on the Easterlin Paradox.

 

Some companies may also contribute to relative inequality or even counterfactually make the median or poor person absolutely poorer through their political activities.

 

The categories of things I'm optimistic about for human welfare in the short to medium term are:

  1. Things that save us time, so we can spend more time on things that actually make us better off.
  2. Things that improve or protect our health (including mental health).
  3. Things that make us (feel) safer/more secure (physically, financially, etc.).
  4. Things that make us more confident, but without substantially net negative externalities (negative externalities may come from positional goods, costly signaling, peer pressure).
  5. Things that help us make better decisions, without important negative effects.

I'm neutral to optimistic about these (possibly neutral because they just replace cheaper versions of themselves that would be just as good):

  1. In-person activities with friends/family.
  2. Things for hobbies or projects.
  3. Restaurants.

I'm about neutral and pretty uncertain about screen-based entertainment (TV, movies, video games), and recreational substances that aren't extremely addictive or harmful (alcohol, marijuana).

I'm pessimistic about:

  1. Social media.
  2. Status-signaling goods/positional goods/luxuries.
  3. Processed foods.
  4. Cigarettes.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-04-20T08:06:18.248Z · EA(p) · GW(p)

There are also a lot of externalities that act at least equally on humans, like carbon emissions, promotion of ethnic violence, or erosion of privacy. Those are all examples off the top of my head for Facebook specifically.

I upvoted Larks' comment, but like you I think this particular argument, "people buy from these firms", is weak.

comment by Charles He · 2022-04-18T23:28:35.902Z · EA(p) · GW(p)

Ok. Larks' response seems correct.

But surely, the spirit of the original comment is correct too.

No matter which worldview you have, the value of a top leader moving into EA is overwhelmingly larger than the social value of the same leader "rowing" in these companies.

Also, at the risk of getting into politics (and really your standard internet argument), gesturing at "free market" is really complicated. You don't need to take the view of Matt Stoller or something to notice that the benefits of these companies can be provided by other actors. The success of these companies, and the resources that allow them to recruit with seven-figure campus centres, probably has a root source different than pure social value.

The implication that this statement requires CEA to have a strong model of these companies seems unfair. Several senior EAs, who we won’t consider activists or ideological, have deep experiences in these or similar companies. They have opinions that are consistent with the parent comment’s statement. (Being too explicit here has downsides.)

comment by calebp · 2022-04-19T08:23:01.660Z · EA(p) · GW(p)

I think the main crux here is that even if Jessica/CEA agrees that the sign of the impact is positive, it still falls in the neutral bracket because on the CEA worldview the impact is roughly negligible relative to the programs that they are excited about. 

If you disagree with this, maybe you agree with the weaker claim that the impact is comparatively negligible when weighted by the resources these companies consume? (There's some kind of nuance to 'consuming resources' in profitable companies, but I guess this is more gesturing at a 'leaving value on the table' framing, as opposed to just asking whether the organisation is locally net negative or positive.)

comment by MichaelStJules · 2022-04-19T00:27:24.421Z · EA(p) · GW(p)

Do you think people are better off overall than otherwise because of Facebook (and social media generally)? You may have made important connections on Facebook, but many people probably invest less in each connection and have shallower relationships because of social media, and my guess is that mental health is generally worse because of social media (I think there was an RCT on getting people to quit social media, and I wouldn't be surprised if there were multiple studies. I don't have them offhand). I'd guess social media is basically addictive for a lot of people, so people often aren't making well-informed decisions about how much to use, and it's easy for it to be net negative despite widespread use. People joining social media pressures others to join, too, making it more costly to not be on it, so FB creates a problem (induces fear of missing out) and offers a solution to it. Cancel culture, bubbles/echo chambers, the spread of misinformation, and polarization may also be aggravated by social media.

That being said, maybe FB was really important for the growth of the EA community. I mostly got into EA through FB initially, although it's not where I was first exposed to EA. If we think the EA community is important enough, then this plausibly dominates. And, of course, it's where Open Phil's funding came from, but that seems to be historical luck, not really anything special about Facebook, except the growth of its market cap.

On the other hand, FB accelerated the development of AI capabilities, e.g. PyTorch was primarily built by FB. But maybe we should also consider this to be only weakly related to FB's role in social media, and more related to the fact that it's just a large tech company.

There are also multiple counterfactuals we could consider: no Facebook + people spend less time on social media, and no Facebook + people spend about as much time on social media (possibly on one similar to FB, or whatever other options there are now). In the first case, I think it's hard to make a balanced argument for FB being robustly net positive. In the second case, the impact is closer to 0, from either direction, and it's harder to evaluate its sign. Then there's the counterfactual impact of FB getting a more productive hire, or one who is otherwise more valued by FB.

I think McKinsey and Goldman would have other firms step into their spaces if they weren't around.

comment by Linch · 2022-04-19T15:42:50.587Z · EA(p) · GW(p)

I don't think this is persuasive. I think most actions people take either increase or decrease x-risk, and you should start with a ~50% prior for which side of neutrality a specific action is on (though not clearly true; see discussion here [EA · GW]). I agree there's some commonsensical notions that economic growth is good, including for the LT future, but I personally find arguments in the opposite direction to be slightly stronger. Your own comment [EA(p) · GW(p)] to an earlier post is one interesting item on the list of arguments I'd muster in that direction.

Replies from: Larks
comment by Larks · 2022-05-03T02:19:20.759Z · EA(p) · GW(p)

Ahh, interesting argument! I wasn't thinking about the argument that these firms might (e.g.) slightly accelerate economic growth, which might then cause an increase in x-risk (if safety is not equivalently accelerated). In general I feel sufficiently unclear about such considerations - like maybe literally 50:50 equipoise is a reasonable prior - that I am loath to let them overwhelm a more concrete short-term impact story in our cost-benefit analysis, in the absence of a clear causal link to a long run impact in the opposite direction, as you suggest in the article.

In this case I think my argument still goes through, because the claim I'm objecting to is so strong - that there is in some sense a >50% probability that every reasonable scenario has all three firms being negative.

comment by Jack Lewars (jlewars) · 2022-04-18T17:25:58.372Z · EA(p) · GW(p)

Thanks Jessica, this is helpful, and I really appreciate the speed at which you replied.

A couple of things that might be quick to answer and also helpful:

  • is there an expected value of someone working in an EA career that CEA uses? The rationale above suggests something like 'we want to spend as much as top tier employers' but presumably this relates to an expected value of attracting top talent that would otherwise work at those firms?
  • I agree that it's not feasible to produce, let alone publish, a BOTEC on every payout. However, is there a bar that you're aiming to exceed for the manager of a group to agree to a spending request? Or a threshold where you'd want more consideration about granting funding? I'm sure there are examples of things you wouldn't fund, or would see as very expensive and would have some rule-of-thumb for agreeing to (off-site residential retreats might be one). Or is it more 'this seems within the range of things that might help, and we haven't spent >$1m on this school yet?'
  • is there any counterfactual discounting? Obviously a lot of very talented people work in EA and/or have left jobs at the employers you mention to work in EA. So what's the thinking on how this spending will improve the talent in EA?
Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-04-19T09:35:35.951Z · EA(p) · GW(p)
  • Some non-CEA people have made estimates that we sometimes refer to. I'm not sure I have permission to share them, but they suggest significant value. Based in part on these figures, I think that the value of a counterfactual high-performing EA is in the tens of millions of dollars.
    • I think we should also expect higher willingness to pay than private firms because of the general money/people balance in the community, and because we care about their whole career (whereas BCG  will in expectation only get about 4 years of their career (number made up)).
  • I'll let Jessica answer with more specifics if she wants to, but we're currently spending much less than $1m/school.
  • Yes, it's obviously important that figures are counterfactually discounted. But groups seem to have historically been counterfactually important to people (see OP's survey [EA · GW]), and we think it's likely that they will be in the future too. Given the high value of additional top people, I think spending like this still looks pretty good.
Replies from: jessica_mccurdy
comment by jessica_mccurdy · 2022-04-19T14:20:03.863Z · EA(p) · GW(p)

Overall, CEA is planning to spend ~$1.5mil on uni group support in 2022 across ~75 campuses, which is a lot less than $1mil/campus. :)
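
For scale, a quick sketch of the per-campus arithmetic those figures imply (using only the numbers in the comment above):

```python
# Per-campus spending implied by CEA's stated 2022 figures.
total_budget = 1_500_000   # ~$1.5m on uni group support in 2022
campuses = 75              # ~75 campuses

per_campus = total_budget / campuses
print(f"${per_campus:,.0f} per campus")  # $20,000, about 2% of the $1m/school benchmark
```
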

Replies from: calebp, jlewars
comment by calebp · 2022-04-22T12:28:56.253Z · EA(p) · GW(p)

Fwiw, I personally would be excited about CEA spending much more on this at their current level of certainty if there were ways to mitigate optics, community health, and tail risk issues.

comment by Jack Lewars (jlewars) · 2022-04-19T16:29:39.583Z · EA(p) · GW(p)

Indeed :-) I had understood from this post (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/ [? · GW]) that this was the destination, though, so the current rate of spending would be less relevant than having good heuristics before we get to that scale.

I see from Max below, though, that Open Phil is assuming a lot of this spending, so sorry for throwing a grenade at CEA if you're not actually going to be behind a really 'move the needle' amount of campus spending.

comment by Vivian.Lwd · 2022-04-21T14:26:50.980Z · EA(p) · GW(p)

Isn't part of what's going on with FB, McKinsey, and Goldman spending at schools that they're in an arms race with their direct competitors for talent? I.e. they have to spend to keep up with Alphabet/Bain/Morgan Stanley. I don't think there's an analogue for EA - I'm sure EA groups are competing with these types of companies, but they're much more widely distributed, so the dynamics are less intense. I don't mind spending on community-building but I would be curious to see evidence of impact (it may be available and I just don't know) since I've been on both sides of on-campus recruiting and suspect the marginal impact of money spent is ~0. And unlike many longtermist projects, the data to assess impact should be pretty accessible?

comment by Sunny1 (Paige_Henchen) · 2022-04-20T15:52:17.620Z · EA(p) · GW(p)

Just as a casual observation, I would much rather hire someone who had done a couple of years at McKinsey than someone coming straight out of undergrad with no work experience. So I'm not sure that diverting talented EAs from McKinsey (or similar) is necessarily best in the long run for expected impact. No EA organization can compete with the ability of McK to train up a new hire with a wide array of generally useful skills in a short amount of time. 

Replies from: Nathan_Barnard
comment by Nathan_Barnard · 2022-04-20T17:02:01.800Z · EA(p) · GW(p)

I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.

Replies from: DavidNash
comment by DavidNash · 2022-04-20T18:02:09.438Z · EA(p) · GW(p)

I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.

comment by Guy Raveh · 2022-04-21T15:28:22.893Z · EA(p) · GW(p)

Hi, thanks for your comment.

While it's reasonable not to be able to provide an impact estimate for every specific small grant, I think there are some other things that could increase transparency and accountability, for example:

  • Publishing your general reasoning and heuristics explicitly on the CEA website.
  • Publishing a list of grants, updated with some frequency.
  • Giving some statistics on which sums went to what type of activities - again, updated once in a while.
comment by MaxRa · 2022-04-18T18:02:45.923Z · EA(p) · GW(p)

Institutions like Facebook, Mckinsey, and Goldman spend ~ $1 million per school per year at the institutions they recruit from trying to pull students into lucrative careers that probably at best have a neutral impact on the world.

That's really interesting to me because I'm currently thinking about potential recruitment efforts at CS departments for AI safety roles. I couldn't immediately find a source for the numbers you mention, do you remember where you got them from?

Replies from: AndreaM, Charles He
comment by AndreaM · 2022-04-18T20:36:03.783Z · EA(p) · GW(p)

I also couldn't find much information on campus recruitment expenses for top firms. However, according to the US National Association of Colleges and Employers (NACE), in 2018 the average cost-per-hire from US universities was $6,110.

FAANG and other top tier employers are likely to spend much more than the average.

comment by Charles He · 2022-04-19T10:59:37.169Z · EA(p) · GW(p)

For each of these companies, if you look at the publicly available website of their campus recruiting centre at one of the HYPS schools, and just look at the roster of public-facing "ambassadors", who have significant skills and earning counterfactuals (so fully burdened cost may be over $200K per head), it's clear it's a seven-figure budget once you include operations, physical offices, management and other oversight (which won't appear on the P&L per se).

1 mil is the low end.

I can’t immediately pull up a link here as I am on mobile.

comment by Holly Morgan (Holly) · 2022-04-19T15:57:59.534Z · EA(p) · GW(p)

Good to see a post that loosely captures my own experience of EAG London and comes up with a concrete idea for something to do about the problem (if a little emotionally presented).

I don't have a strong view on the ideal level of transparency/communication here, but something I want to highlight is: Moving too slowly and cautiously is also a failure mode [? · GW]

In other words, I want to emphasise how important "this is time consuming, and this time is better spent making more grants/doing something else" can be. Moving fast and breaking things tends to lead to much more obvious, salient problems and so generally attracts a lot more criticism. On the other hand, "Ideally, they should have deployed faster" is not a headline. But if you're as consequentialist as the typical EA is, you should be ~equally worried about not spending money fast enough.

Sometimes to help make this failure mode more salient, I imagine a group of chickens in a factory farm just sitting around in agony waiting for us all to get our act together (not the most relevant example in this case, but the idea is to try to counteract the salience bias associated with the problems around moving fast).

Maybe the best way for e.g. CEA to help these chickens overall is to invest more time reducing "reputational and epistemic risks to EA". Maybe it's to keep trying to get resources out the door according to their best judgements, accepting their predicted levels of failed grants, confused community members, and loss of potentially useful feedback that could come from more external scrutiny. It's not clear to me. But it seems like it could well be the latter. True, "these things have a much higher than average chance of doing harm", but there's also a lot more at stake if they move too slowly.

To be clear: This is not to say FTX/CEA are getting the balance right (and even if they broadly are, your suggestion for them to say something like "this seems plausibly good and we have enough money to throw spaghetti at the wall" still seems good to me). I just wanted to give more prominence to a consideration on the other side of the argument that seems to be relatively neglected in these discussions. So, à la your sidenote: Props to FTX for moving fast.

Replies from: Michelle_Hutchinson, MichaelDickens, jlewars
comment by Michelle_Hutchinson · 2022-04-20T13:06:07.271Z · EA(p) · GW(p)

Thanks so much for this comment. I find it incredibly hard not to be unwarrantedly risk averse. It feels really tempting to focus on avoiding doing any harm, rather than actually helping people as much as I can. This is such an eloquent articulation of the urgency we face, and why we need to keep pushing ourselves to move faster. 

I think this is going to be useful for me to read periodically in the future - I'm going to bookmark it for myself.

comment by MichaelDickens · 2022-04-20T20:04:36.841Z · EA(p) · GW(p)

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)

comment by Jack Lewars (jlewars) · 2022-04-19T18:19:53.103Z · EA(p) · GW(p)

I like this.

I'm not sure I agree with you that I find it equally worrying as moving so fast that we break too many things, but it's a good point to raise. On a practical level, I partly wrote this because FTX is likely to have a lull after their first grant round where they could invest in transparency.

I also think a concern is what seems to be such an enormous double standard. The argument above could easily be used to justify spending aggressively in global health or animal welfare (where, notably, we have already done a serious, serious amount of research and found amazing donation options; and, as you point out, the need is acute and immediate). Instead, it seems like it might be 'don't spend money on anything below 5x GiveDirectly' in one area, and the spaghetti-wall approach in another.

Out of interest, did you read the post as emotional? I was aiming for brevity and directness but didn't/don't feel emotional about it. Kind of the opposite, actually - I feel like this could help to make us more factually aligned and less driven by emotional reactions to things that might seem like 'boondoggles'.

Replies from: Holly, Holly
comment by Holly Morgan (Holly) · 2022-04-19T20:17:19.683Z · EA(p) · GW(p)

Yeah personally speaking, I don't have very developed views on when to go with Spaghetti-wall vs RCT, so feel free to ignore the following which is more of a personal story. I'd guess there's a bunch of 'Giving Now vs Giving Later' content lying around that's much more relevant.

I think I used to be a lot more RCT because:

  1. I was first motivated to take cost-effectiveness research seriously after hearing the Giving What We Can framing of "this data already exists, it's just that it's aimed at the health departments of LMICs rather than philanthropists" - that's some mad low-hanging fruit right there (OTOH I seem to remember a bunch of friends wrestling with whether to fund Animal Charity Evaluators or ACE's current best guesses - was existing cost-effectiveness research enough to go on yet?)
  2. I was basically a student trying to change the world with a bunch of other students - surely the grown-ups mostly know what they're doing and I should only expect to have better heuristics if there's a ton of evidence behind them
  3. My personality is very risk-averse

Over time, however:

  1. I became more longtermist and there's no GiveWell for longtermism
  2. We grew up, and basically the more I saw of the rest of the world the less faith I had in people generally being sensible and altruistic and having their **** together
  3. I recognised how much of my aversion to Spaghetti-wall is a personality thing [edit: maybe writing my undergrad dissertation on risk aversion in ethics made me acknowledge this more fully :P]
comment by Holly Morgan (Holly) · 2022-04-19T20:23:57.802Z · EA(p) · GW(p)

| Out of interest, did you read the post as emotional? I was aiming for brevity and directness

Ah, that might be it. I was reading the demanding/requesting tone ("show us your numbers!", "could FTX and CEA please publish" and  "If this is too time-consuming...hire some staff" vs "Here's an idea/proposal") as emotional, but I can see how you were just going for brevity/directness, which I generally endorse (and have empathy for emotional FWIW, but generally don't feel like I should endorse as such).

comment by rossaokod · 2022-04-19T10:00:59.610Z · EA(p) · GW(p)

It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.

By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EA's 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fundraising prospectus in 2015 which estimated various counterfactual scenarios. Have there been serious self-evaluations since? (Sincere apologies if I've missed them or could find them easily - this is a genuine question!)

In terms of what I'd like to see more of with respect to self-evaluation, and tentatively think we could have done better on this over the last 5+ years: 

  • When new initiatives are launched, serious consideration should be paid to how to get high quality evidence of the impact of those initiatives, which aspects of them work best. 
    • E.g. with the recent scale-up of funding for EA groups and hiring or full time coordinators, it would be great if some sort of small-scale A/B test could be run and/or a phased-in introduction. E.g. you could take the top 30-40 universities/groups that we'd ideally have professional outreach at and randomly select half of them to start a (possibly phased-in) programme of professional group leading at the start of 2022-23, and another half at the start of 2023-24.
    • Possibly this is already happening and I don't know - apologies if so! (I've had one very brief conversation with someone involved which suggested that it isn't being approached like this)
    • One objection is that this would delay likely-valuable outreach and is hard to do well. This is true, but it builds knowledge for the future and I wish we'd done more of this 5+ years ago so we'd be more confident in the effectiveness of the increased expenditure today and ideally have a better idea what type of campus support is most effective!
  • I would love to see 1-4 people with strong quant / social science / impact evaluation skills work for ~6-12 months to do a retrospective evaluation of the evidence of the last ~13 years of movement-building efforts, especially support to local groups. They would need the support of people and organisations that led these efforts, to share data on expenditure and key outcomes. Even if lots of this relied on observational data, my guess is that distilling the information from various groups / efforts would be very valuable in understanding their effectiveness.
Replies from: Jonas Vollmer, David_Moss, MichaelDickens, jlewars
comment by Jonas Vollmer · 2022-04-19T14:24:38.050Z · EA(p) · GW(p)

I'd personally be pretty excited to see well-run analyses of this type, and would be excited for you or anyone who upvoted this to go for it. I think the reason why it hasn't happened is simply that it's always vastly easier to say that other people should do something than to actually do it yourself.

Replies from: rossaokod, IanDavidMoss
comment by rossaokod · 2022-04-20T09:26:38.006Z · EA(p) · GW(p)

I completely agree that it is far easier to suggest an analysis than to execute one! I personally won't have the capacity to do this in the next 12-18 months, but would be happy to give feedback on a proposal and/or the research as it develops if someone else is willing and able to take up the mantle. 

I do think that this analysis is more likely to be done (and in a high quality way) if it was either done by, commissioned by, or executed with significant buy-in from CEA and other key stakeholders involved in community building and running local groups. This is partly a case of helping source data etc, but also gives important incentives for someone to do this research. If I had lots of free time over the next 6 months, I would only take this on if I was fairly confident that the people in charge of making decisions would value this research. One model would be for someone to write up a short proposal for the analysis and take it to the decision makers; another would be for the decision-makers to commission it (my guess is that this demand-driven approach is more likely to result in a well-funded, high quality study). 

To be clear, I massively appreciate the work that many, many people (at CEA and many other orgs) do and have done on community building and professionalising the running of groups (sorry if the tone of my original comment was implicitly critical). I think such work is very likely very valuable. I also think the hits-based model is the correct one as we ramp up spending and that not all expenditure should be thoroughly evaluated. But in cases where it seems very likely that we'll keep doing the same type of activity for many years and spend comparatively large resources on it (e.g. support for groups), it makes sense to bake self-evaluation into the design of programmes, to help improve their design in the future.

Replies from: rossaokod
comment by rossaokod · 2022-04-20T09:36:58.297Z · EA(p) · GW(p)

P.S. I've also just seen Joan's write-up of the Focus University groups [EA · GW] in the comments below, which suggests that there is already some decent self-evaluation, experimentation and feedback loops happening as part of these programmes' designs. So it is very possible that there is a good amount of this going on that I (as a very casual observer) am just not aware of!

comment by IanDavidMoss · 2022-04-19T14:57:04.387Z · EA(p) · GW(p)

Agreed! Note, however, that in the case of the FTX grants it will be pretty hard to do this analysis oneself without access to at the very least the list of funded projects, if not the full applications.

comment by David_Moss · 2022-04-19T13:48:00.135Z · EA(p) · GW(p)

I also agree this would be extremely valuable. 

I think we would have had the capacity to do difference-in-difference analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-timer organisers etc.) if the outcome measures tracked in the EA Groups Survey were not changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017 and was not collected before we ran the first such survey in that year).

comment by MichaelDickens · 2022-04-20T20:26:55.494Z · EA(p) · GW(p)

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

comment by Jack Lewars (jlewars) · 2022-04-19T16:20:16.110Z · EA(p) · GW(p)

This would be great. It also closely aligns with what EA expects before and after giving large funding in most cause areas.

comment by Ben Pace · 2022-04-18T12:58:31.608Z · EA(p) · GW(p)

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.

Replies from: Ben Pace, jlewars, Samuel Shadrach
comment by Ben Pace · 2022-04-18T12:59:45.410Z · EA(p) · GW(p)

To be clear I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply anyone can do this for whatever reason they like.

comment by Jack Lewars (jlewars) · 2022-04-18T22:17:55.311Z · EA(p) · GW(p)

Candidly, I'm a bit dismayed that the top voted comment on this post is about clickbait.

Replies from: Ben Pace, Jeff_Kaufman
comment by Ben Pace · 2022-04-19T09:47:51.868Z · EA(p) · GW(p)

Well, you don’t have to be any more, because now it’s Jessica McCurdy’s reply.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-19T16:24:59.483Z · EA(p) · GW(p)

Indeed - and to be clear, I wasn't trying to suggest that you shouldn't have made the comment - just that it's very secondary to the substance of the post, and so I was hoping the meat of the discussion would provoke the most engagement.

Replies from: Ben Pace
comment by Ben Pace · 2022-04-19T17:05:23.571Z · EA(p) · GW(p)

Yeah, pretty reasonable.

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-04-22T19:43:54.279Z · EA(p) · GW(p)

Voting is biased toward comments that are easy to evaluate as correct/helpful/positive/valuable. With that in mind, I don't especially find this individual instance dismaying?

comment by acylhalide (Samuel Shadrach) · 2022-04-18T16:49:55.950Z · EA(p) · GW(p)

See also [EA(p) · GW(p)]:

acylhalide:

IMO a standard norm on whether clickbait EA titles are good or bad might help.

I remember seeing a post once arguing for the exact opposite - that clickbait/catchy titles/summaries,  and generally writing styles that draw you in - are good because they draw attention to important issues. So much so that you have a moral obligation to use them, if you believe you're pointing at something important.


Stefan_Schubert:

I think you refer to this post [EA · GW]. Note that there was a discussion about the title of that post, and that it was eventually changed.

In general, I think that one should be more careful about being clickbaity regarding sensitive and emotionally charged topics.

comment by alexrjl · 2022-04-19T10:39:58.492Z · EA(p) · GW(p)


If this is too time-consuming for the current FTX advisers, hire some staff 

 

Hiring is an extremely labour and time intensive process, especially if the position you're hiring for requires great judgement. I think responding to a concern about whether something is a good use of staff time with 'just hire more staff' is pretty poor form, and given the context of the rest of the post it wouldn't be unreasonable to respond to it with 'do you want to post a BOTEC comparing the cost of those extra hires you think we should make to the harms you're claiming?'

Replies from: IanDavidMoss, jlewars, freedomandutility, Holly
comment by IanDavidMoss · 2022-04-19T13:57:59.475Z · EA(p) · GW(p)

The top-voted suggestion in FTX's call for megaproject ideas [EA(p) · GW(p)] was to evaluate the impacts of FTX's own (and other EA) grantmaking. It's hard to conduct such an evaluation without, at some point, doing the kind of analysis Jack is calling for. I don't have  a strong opinion about whether it's better for FTX to hire in-house staff to do this analysis or have it be conducted externally (I think either is defensible), but either way, there's a strong demonstrated demand for it and it's hard to see how it happens without EA dollars being deployed to make it possible. So I don't think it's unreasonable at all for Jack to make this suggestion, even if it could have been worded a bit more politely.

comment by Jack Lewars (jlewars) · 2022-04-19T16:07:24.294Z · EA(p) · GW(p)

That's right, and this was very casually phrased, so thanks for pulling me up on it. A better way of saying this would be: "if you're going to distribute billions of dollars in funding, in a way that is unusually capable of being harmful, but don't have the time to explain the reasoning behind that distribution, it's reasonable to ask you to hire people to do this for you (and hiring is almost certainly necessary for lots of other practical reasons)."

comment by freedomandutility · 2022-04-19T14:07:33.118Z · EA(p) · GW(p)

I agree with you that it’s important to account for hiring being very expensive.

My view on more transparency is that its main benefit (which I don’t think OP mentions) is as a long-term safeguard to reduce poor but well intentioned reasoning, mistakes and nepotism around grant processes, and is likely to be worth hiring costs even if we don’t expect to identify ongoing harms.

In other words, I think the stronger case for EA grantmakers being more transparent is the potential for transparency to reduce future harms, rather than its potential to reveal possible ongoing harms.

comment by Holly Morgan (Holly) · 2022-04-22T01:06:11.959Z · EA(p) · GW(p)

Relevant comment from Sam Bankman-Fried in his recent 80,000 Hours podcast episode: "In terms of staffing, we try and run relatively lean. I think often people will try to hire their way out of a problem, and it doesn’t work as well as they’re hoping. I’m definitely nervous about that." (https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/#ftx-foundation-002022)

comment by Peter Wildeford (Peter_Hurford) · 2022-04-19T00:36:31.881Z · EA(p) · GW(p)

One generic back-of-the-envelope calculation from me:

Assume that when you try to do EA outreach, you get the following funnel:

  • ~10% (90% CI[1] 3%-30%) of people you reach out to will be open to being influenced by EA

  • ~10% (90% CI 5%-20%) of people who are reached and are open to being influenced by EA will actually take the action of learning more about EA

  • ~20% (90% CI 5%-40%) of people who learn more about EA actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)

Thus we expect outreach to a particular person to produce ~0.002 EAs on average.

Now assume an EA has the same expected impact as a typical GWWC member, and assume a typical GWWC member donates ~$24K/yr [EA · GW] for ~6 years [EA · GW], making the total value of an EA worth ~$126,000 in donations, discounting at 4%. I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

Note that these numbers are pretty much made up[2] and each number ought to be refined with further research - something I'm working on and others should too. Also keep in mind that obviously these numbers will vary a lot based on the specific type of outreach being considered and so should be modified for modeling the specific thing being done. But hopefully this is a useful example.

But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even. So if a dinner markets EA to ten people that otherwise would not have been marketed to, it will be worth ~$2500 to run just that one dinner. So spending $5000 to run a bunch of dinners can make sense.
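The funnel above can be sketched as a quick script (a sketch only: it uses the point estimates from the comment and drops the 90% CIs, and every number is the comment's own guess rather than data):

```python
# Funnel BOTEC for EA outreach, using the comment's point estimates.
p_open = 0.10     # open to being influenced by EA
p_learn = 0.10    # of those, actually learn more about EA
p_convert = 0.20  # of those, become EA in some meaningful way

eas_per_person = p_open * p_learn * p_convert  # ~0.002 EAs per person reached

donation_per_year = 24_000  # $/yr for a typical GWWC member
years = 6
discount = 0.04
# Present value of a 6-year annuity at a 4% discount rate (~$126,000)
value_per_ea = donation_per_year * (1 - (1 + discount) ** -years) / discount

# Break-even outreach spend per person reached (~$252)
breakeven_per_person = eas_per_person * value_per_ea

print(f"EAs per person reached: {eas_per_person:.4f}")
print(f"Value per EA: ${value_per_ea:,.0f}")
print(f"Break-even spend per person: ${breakeven_per_person:,.0f}")
```

Swapping in different funnel probabilities or a different value per EA is then a one-line change, which is most of the point of writing the BOTEC down explicitly.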

Also note that of course EA marketing is not a single-touchpoint-and-then-done-forever system, so you will frequently be spending time/money on the same person multiple times. But this is hopefully made up for by the person becoming more likely to convert (both from self-selection and from the outreach).

Note: This is personal to just me, and does not reflect the views of Rethink Priorities or the Effective Altruism Infrastructure Fund or any other EA institution.


  1. According to me, using my intuition forecaster powers ↩︎

  2. Hopefully even though a lot of this is completely made up, it's useful as a scaffold/demonstration and eventually we can collect more data to try to refine these numbers. ↩︎

Replies from: Jonas Vollmer, Fermi–Dirac Distribution
comment by Jonas Vollmer · 2022-04-19T14:19:11.628Z · EA(p) · GW(p)

I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other ~~1999~~ 4999 EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That's a factor of 30x, so a framing like "likely vastly more valuable" would seem more appropriate to me.

Replies from: Linch, Linch, Jeff_Kaufman
comment by Linch · 2022-04-19T15:46:47.164Z · EA(p) · GW(p)

One reason to be lower than this per recruited EA is that you might think that the people who need to be recruited are systematically less valuable on average than the people who don't need to be. Possibly not a huge adjustment in any case, but worth considering. 

Replies from: Jonas Vollmer
comment by Jonas Vollmer · 2022-04-19T16:33:58.902Z · EA(p) · GW(p)

Yeah I fully agree with this; that's partly why I wrote "gestures". Probably should have flagged it more explicitly from the beginning.

comment by Linch · 2022-04-19T15:35:26.642Z · EA(p) · GW(p)

assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other 1999 EAs

Should be 4999

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-04-22T19:48:38.883Z · EA(p) · GW(p)

assuming that there are 5000 EAs

I know this isn't your main point, but that's ~1/10 what I would have guessed. 5k is only 3x the people who attended EAG London this year.

Replies from: Jonas Vollmer
comment by Jonas Vollmer · 2022-04-23T19:04:37.963Z · EA(p) · GW(p)

Personally I think going for something like 50k doesn't make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others.

Also, my guess of how CEA/FTX are thinking about this is actually that they assume an even smaller number (perhaps 2k or so?) because they're aiming for highly engaged people, and don't pay as much attention to how many less engaged people they're causing.

Replies from: Jeff_Kaufman
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-04-24T12:18:04.899Z · EA(p) · GW(p)

Peter was using a bar of "actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)". GWWC is 8k on its own, though there's probably been substantial attrition.

But yes, because we expect impact to be power-lawish if you order all plausible EAs by impact there will probably not be any especially compelling places to draw a line.

comment by Fermi–Dirac Distribution · 2022-04-19T17:10:36.759Z · EA(p) · GW(p)

But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even. 

I don’t think that’s how it works. Your reasoning here is basically the same as “I value having Internet connection at $50,000/year, so it’s worth it for me to pay that much for it.” 

The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value people at a certain level of talent at $1M/career, that only means that, so long as it's not impossible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M or $∞ does not matter: you should pay $100,000, and no more. Foregoing consumer surplus has opportunity costs.

To put it more explicitly: suppose you value 1 EA  with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is -9 EAs per recruit, and you are in no way breaking even. 

Of course, the assumption I made in the previous paragraph, that both the value of an EA and the cost of recruiting one are constant, does not reflect reality: if we had a million EAs, the cost of an additional recruit would be higher and its value would be lower, if we hold other EA assets constant, and so the opportunity cost isn’t constant. But my main point, that you should pay no more than the market price for goods and services if you want to break even (taking into account time costs and everything), still stands.
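The toy numbers in this argument can be made explicit (a sketch using only the figures from the comment, holding both the valuation and the market cost constant):

```python
# Opportunity cost of paying your full willingness-to-pay for recruits.
value_per_recruit = 1_000_000  # what you "value" one recruit at
market_cost = 100_000          # what recruiting one actually costs in expectation

budget = 1_000_000  # spend $1M either way
recruits_at_market_cost = budget / market_cost        # 10 recruits
recruits_at_valuation = budget / value_per_recruit    # 1 recruit

# Paying $1M/recruit instead of $100K forgoes 9 recruits per $1M spent
net_vs_best_use = recruits_at_valuation - recruits_at_market_cost

print(f"Recruits forgone per $1M: {net_vs_best_use:.0f}")
```

The point of the sketch is that "breaking even" has to be measured against the best available use of the budget, not against your valuation.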

Replies from: Peter_Hurford
comment by Peter Wildeford (Peter_Hurford) · 2022-04-19T18:40:41.530Z · EA(p) · GW(p)

I agree with what you are saying that yes, we ideally should rank order all the possible ways to market EA and only take those that get the best (quality adjusted) EAs per $ spent, regardless of our value of EAs - that is, we should maximize return on investment.

**However, in practice, as we do not currently have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold and revise that threshold over time as we learn more about our opportunities and budget.** We'd also ideally set ourselves up to learn by doing when engaging in this outreach work.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-20T08:00:15.029Z · EA(p) · GW(p)

Absolutely. And so the questions are:

  • have we defined that ROI threshold?

  • what is it?

  • are we building ways to learn by doing into these programmes?

The discussions on this post suggest that it's at least plausible that the answers are 'no', 'anything that seems plausibly good', and 'no', which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.

Replies from: Peter_Hurford
comment by Peter Wildeford (Peter_Hurford) · 2022-04-20T19:31:15.686Z · EA(p) · GW(p)

This varies grantmaker-to-grantmaker but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF.

I'd really like to help programs build more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I'll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we've been hiring a lot more).

comment by Robert_Wiblin · 2022-04-20T21:41:06.400Z · EA(p) · GW(p)

My guess is this would reduce grant output a lot relative to how much I think anyone would learn (maybe it would cut grantmaking in half?), so personally I'd rather see them just push ahead and make a lot of grants, then review or write about just a handful of them from time to time.

comment by MichaelStJules · 2022-04-18T21:13:33.332Z · EA(p) · GW(p)

I also wish all the EA Funds and Open Phil would do this/make their numbers more accessible.

comment by MaxDalton (Maxdalton) · 2022-04-19T09:35:54.661Z · EA(p) · GW(p)

By the way, we are not planning to spend $50m on groups outreach in the near future. Our groups budget is $5.4m this year. 

Also note that our focus university program is passing to Open Philanthropy [EA · GW].

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-19T16:02:25.208Z · EA(p) · GW(p)

Hi Max - I took this from CEA's post here (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/ [? · GW]), which aims for campus centres at 17 schools controlling "a multi-million dollar budget within three years of starting", and which Alex HT suggested in the comments would top out at $3m/year. This suggested a range of $17m-$54m.

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-04-19T16:16:38.616Z · EA(p) · GW(p)

Cool, I see where you got the figure from. But yeah, most of that work is passing to Open Philanthropy, so we don't plan to spend $50m/year.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-19T16:33:29.740Z · EA(p) · GW(p)

Thanks - I missed that update, and wouldn't have written about CEA above if I had seen it, I think.

comment by Markus Amalthea Magnuson (peppersghost) · 2022-04-18T12:31:30.569Z · EA(p) · GW(p)

Just a list of projects and organisations FTX has funded would be beneficial and probably much less time-consuming to produce. Some of the things you mention could be deduced from that, and it would also help in evaluating current project ideas and how likely they are to get funding from FTX at some point.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-18T17:18:41.266Z · EA(p) · GW(p)

True, and it seems like a necessary step on its own, but I'm wary of people 'deducing' too much. Right now, a lot of the anxiety seems to be coming from people trying to deduce what funders might be thinking; ideally, they'd tell people themselves.

comment by calebp · 2022-04-18T23:48:40.374Z · EA(p) · GW(p)

I kind of like the general sentiment but I'm a bit annoyed that it's just assumed that the burden of proof is so strongly on the funders.

Maybe you want to share your BOTEC first, particularly given the framing of the post is "I want to see the numbers because I'm concerned" as opposed to just curiosity?

Replies from: jlewars, freedomandutility
comment by Jack Lewars (jlewars) · 2022-04-19T16:16:10.220Z · EA(p) · GW(p)

I'm not sure why the burden wouldn't fall on people making the distribution of funds? (Incidentally, I'm using this to mean that the funders could also hire external consultancies etc. to produce this.)

But, more to the point, I wrote this really hoping that both organisations would say "sure, here it is" and we could go from there. That might really have helped bring people together. (NB: I realise FTX haven't engaged with this yet.)

In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

Replies from: calebp, calebp
comment by calebp · 2022-04-21T08:15:28.817Z · EA(p) · GW(p)

I think what I'm getting at is that burden of proof is generally an unhelpful framing, and an action that you could take that might be helpful is communicating your model that makes you sceptical of their spending.

Hiring consultancies to do this seems like it's not going to go well unless it's Rethink Priorities or they have a lot of context, and on the margin I think it's reasonable for CEA to say no, they have better things to do.

I feel confused about the following, but I think that as someone who runs an EA org you could easily have reached out directly to CEA/FTX to ask this question (maybe you did; if so, apologies), and this action seems more like outing them than being curious. I'm not necessarily against this (in fact I think this is helpful in lots of ways) but many forum users seem not to like these kinds of adversarial actions.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-22T09:19:02.762Z · EA(p) · GW(p)

Like you, I'm fairly relaxed about asking people publicly to be transparent. Specifically in this context, though, someone from FTX said they would be open to doing this if the idea was popular, which prompted the post.

As a sidenote, I think also that MEL consultancies are adept at understanding context quickly and would be a good option (or something that EA could found itself - see Rossa's comment). My wife is an MEL consultant, which informs my view of this. But that's not to say they are necessarily the best option.

Replies from: calebp, calebp
comment by calebp · 2022-04-22T12:30:54.306Z · EA(p) · GW(p)

As an individual, I would endorse someone hiring an MEL consultant to do this for the information value, and would also bet $100 that it won't provide much value because the analysis will be poor.

Terms to be worked out of course, but if someone was interested in hiring the low context consultant, I'd be interested in working out the terms.

comment by calebp · 2022-04-22T12:25:39.786Z · EA(p) · GW(p)

Oh right, I didn't pick up on the fact that FTX said they'd do this if it proved popular. That resolves part of this for me (at least on the FTX side, as opposed to the CEA side).

comment by calebp · 2022-04-21T08:29:28.393Z · EA(p) · GW(p)

Broken into a different comment so people can vote more clearly

In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

I think that there are likely different epistemic standards between cause areas, such that this is a pretty complicated question, and people underappreciate how much of a challenge this is for the EA movement.

comment by freedomandutility · 2022-04-19T08:20:14.840Z · EA(p) · GW(p)

I think it makes sense to have the burden of proof mostly on the funders given that they presumably have more info about all their activities, plus having the burden set this way has instrumental benefits of encouraging transparency which could lead to useful critiques, and extra reputation-related incentives to use good reasoning and do a good job of judging what grants do and do not meet a cost-effectiveness bar.

comment by Benjamin_Todd · 2022-04-29T15:03:51.433Z · EA(p) · GW(p)

Just wanted to add that I did a rough cost-effectiveness estimate of the average of all past movement building efforts using the EA growth figures here [EA(p) · GW(p)]. I found an average of 60:1 return for funding and 30:1 for labour. At equilibrium, anything above 1 is worth doing, so I expect that even if we 10x the level of investment, it would still be positive on average.

comment by Thomas Kwa (tkwa) · 2022-04-18T13:10:41.195Z · EA(p) · GW(p)

I've done informal BOTECs and it seems like the current funding amounts are roughly correct, though we need to be careful with deploying this funding due to concerns like optics and epistemics [EA · GW]. Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change. This seems like a failure of communication, because funding dinners is either clearly good and students weren't doing the BOTEC, or it's bad due to some optics or other concerns that the students didn't communicate to CEA.

Replies from: jlewars, lukasberglund
comment by Jack Lewars (jlewars) · 2022-04-18T17:07:38.657Z · EA(p) · GW(p)

In the spirit of this post, maybe you could share these informal BOTECs?

'Here is a BOTEC' is going to help more than 'I've done a BOTEC and it checks out'.

(I appreciate the post isn't actually aimed at you)

Replies from: Mauricio
comment by Mauricio · 2022-04-18T21:02:30.958Z · EA(p) · GW(p)

That's fair - I'm not the earlier commenter but would suggest (as someone who's heard some of these conversations but isn't necessarily representative of others' thinking):

For dinners: Suppose offering to buy a $15 dinner for someone makes it 10% more likely that they'll go to a group dinner, and suppose that makes it 1% more likely that they'll have a very impactful career. Suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the dinner costs $15 and yields $400.

For retreats: Suppose offering to subsidize a $400 flight makes someone 40% more likely to go to a retreat and that this makes them 5% more likely to have a very impactful career. Again suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the flight costs $400 and yields $8,000.

(And expected returns are 100x higher than that under bolder assumptions about how much impact people will have. Although they're negative if optics costs are high enough.)
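(For anyone who wants to kick the tires on the arithmetic, here's a minimal sketch in Python. All the numbers are the illustrative assumptions stated above, not real data.)

```python
# BOTEC for subsidised dinners/flights: expected value is the product of
# (shift in attendance probability) x (shift in career probability) x (career value).
# Every input below is an illustrative assumption from this comment.

def botec_ev(p_attend_shift, p_career_shift, career_value):
    """Expected dollar value of one subsidy."""
    return p_attend_shift * p_career_shift * career_value

career_value = 0.10 * 100_000 * 40  # 10% of a $100k salary donated for 40 years = $400k

dinner_ev = botec_ev(0.10, 0.01, career_value)  # $15 dinner -> ~$400 in expectation
flight_ev = botec_ev(0.40, 0.05, career_value)  # $400 flight -> ~$8,000 in expectation
print(dinner_ev, flight_ev)
```

Tweaking any one factor (say, halving the 1% career shift) scales the expected value linearly, which is part of why these estimates are so sensitive to the assumptions.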

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-18T22:01:36.097Z · EA(p) · GW(p)

Thanks - this is exactly what I think is useful to have out there, and ideally to refine over time.

My immediate reaction is that the % changes you are assigning look very generous. I doubt a $15 dinner makes someone 1% more likely to pursue an impactful career, and I especially doubt that a subsidised flight produces a 5% swing. I think these are likely orders of magnitude too high, especially when you consider that other places will also offer free dinners/retreats.

If a $400 investment in anything made someone 5% more likely to pursue an impactful career, that would be amazing.

But I guess what I'm really hoping is that CEA and FTX have exactly this sort of reasoning internally, with some moderate research into the assumptions, and could share that externally.

Replies from: Mauricio
comment by Mauricio · 2022-04-18T22:36:22.954Z · EA(p) · GW(p)

Thanks! Agree it's good to refine these and that these are very optimistic - I suspect the optimism is justified by the track record of these events. Anecdotally, it seems nontrivially common for early positive interactions to motivate new community members to continue/deepen their (social and/or motivational) engagement, and that seems to often lead to impactful career plan changes.

(I think there's steeply diminishing returns here--someone's first exposure to the community seems much more potentially impactful than later exposures. I tried to account for "how many participants will be having their first exposure" in the earlier estimate.)

In other words, we could (say) break down the ~1% estimate (which is already conditioned on counterfactual dinner attendance) into the following (ignoring benefits for people who are early on but not totally new):

  • 30% chance that this is their first exposure
  • conditional on the above, 10% chance that the experience kickstarts long/deep engagement
  • conditional on the above, 50% chance of an impactful career switch (although early exposures that aren't quite the first one also seem valuable)

If 1% is far too generous, which of the above factors are too high? (Maybe the second one?)

(Edited to add) And yup, I acknowledge this isn't the source you were looking for - hopefully still adds to the conversation.
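(Sketching the decomposition so the factors are easy to tweak; the three probabilities below are just the assumptions listed above, nothing more.)

```python
# Decomposing the ~1% "counterfactual dinner attendance -> impactful career"
# estimate into three conditional factors (all assumed, per the comment above).
p_first_exposure = 0.30    # chance the dinner is their first exposure to EA
p_deep_engagement = 0.10   # given first exposure, chance it kickstarts deep engagement
p_career_switch = 0.50     # given deep engagement, chance of an impactful career switch

p_total = p_first_exposure * p_deep_engagement * p_career_switch
print(f"{p_total:.1%}")  # multiplies out to 1.5%, the same order as the ~1% headline
```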

comment by berglund (lukasberglund) · 2022-04-18T14:04:06.266Z · EA(p) · GW(p)

Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change.

How much of the impact generated by the career change are you attributing to CEA spending here? I'm just wondering because counterfactuals run into the issue of double-counting (as discussed here [EA · GW]). 

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2022-04-18T15:58:26.149Z · EA(p) · GW(p)

Unsure, but probably more than 20% if the person wouldn't be found through other means. I think it's reasonable to say there are 3 parties: CEA, the group organizers, and the person, and none is replaceable, so they get 33% Shapley each. At a 2% chance to get a career change, this would be a cost of $750k per career, which is still clearly good at top unis. The bigger issue is whether the career change is actually counterfactual, because often it's just a speedup.
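(A quick sketch of where that $750k comes from, using only the assumed numbers in this comment:)

```python
# Reconstructing the ~$750k/career figure from the assumptions above.
spend = 5_000           # dinner budget from the example
p_career_change = 0.02  # assumed chance the spend causes one additional career change
shapley_share = 1 / 3   # CEA's credit, split equally with organizers and the person

cost_per_attributed_career = spend / p_career_change               # $250k per career change
cost_credited_to_cea = cost_per_attributed_career / shapley_share  # ~$750k per full career of CEA credit
print(cost_per_attributed_career, cost_credited_to_cea)
```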

Replies from: Amalie Farestvedt, Fermi–Dirac Distribution
comment by Amalie Farestvedt · 2022-04-18T16:32:32.021Z · EA(p) · GW(p)

I do think you have to factor in the potential negative risk of spending too much in that estimate, as some potential members might be turned off by what seems like an inefficient use of money. I think this is especially crucial if you are in the process of explaining the EA principles or when relating to members who are not yet committed to the movement.

comment by Fermi–Dirac Distribution · 2022-04-18T19:50:13.707Z · EA(p) · GW(p)

Is $750k the market price for 1 expected career change from someone at a top school, excluding compensation costs? Alternatively, is there no cheaper way to cause such a career change? IMO, this is the important question here: if there is a cheaper way, then paying $750k has an opportunity cost of >1 career changes. 

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2022-04-18T20:42:46.857Z · EA(p) · GW(p)

edit: misinterpreted what comment above meant by "market price"

I think the market price is a bit higher than that. The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <3 year payback period even if the students take a couple years to graduate or skill up.

In practice, dinners typically produce way more than 2% of a career change for $5k of dinners (33 dinners for 10 people at $15/serving). The situation at universities has non-monetary bottlenecks, like information transmission fidelity, qualified organizers, operational capacity, university regulations, etc., and most things that get you better use of those other resources and aren't totally extravagant are worth funding, unless they have a hidden cost like optics or attracting greedy people.

Replies from: Fermi–Dirac Distribution
comment by Fermi–Dirac Distribution · 2022-04-18T22:32:16.715Z · EA(p) · GW(p)

I think the market price is a bit higher than that.

Someone else in this thread [EA(p) · GW(p)] found a report claiming that employers spend an average of ~$6,100 to hire someone at a US university.  I also found this report saying that the average cost per hire in the United States is <$5,000, $15k for an executive. At 1 career = 10 jobs that's $150,000/career for executive-level talent, or $180,000/career adjusting for inflation since the report was released. 

I'm not sure how well those numbers reflect reality (the $15k/executive number looks quite low), but it seems at least fairly plausible that the market price is substantially less than $750k/career. 

The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <2 year payback period.

This line of reasoning is precisely what I'm claiming to be misguided. Giving you a gallon of water to drink allows you to live at least two additional days (compared to you having no water), which at $750k of impact/year (~$2,000/day) means, by your reasoning, that EA should fund all interventions that ensure you have 1 gallon of water for <=$4,000, up to the amount you need to survive.

If water happened to be that expensive, that would be a worthwhile trade. But given the current market price of water (with the time cost of acquiring it included) being willing to pay anywhere near $4000/gallon is absurd. 

In general, if you value something at $x, and its market price is $y, x only matters for deciding whether you should pay for the thing or not, not for deciding how much you should pay for it. If x >= y, then you should pay $y, otherwise you should pay $0.

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2022-04-19T00:42:24.594Z · EA(p) · GW(p)

It looks like I misunderstood a comment above. I meant "market price" as the rate at which CEA should currently trade between money and marginal careers, which is >$750k. I think you mean the average price at which other companies "in the market for talent" buy career changes, which is <$750k.

I think there isn't really a single price at which we can buy infinite talent. We should do activities as cost-effective as other recruiters, but these can only be scaled up to a limited extent before we run into other bottlenecks. The existence of a cheaper intervention doesn't mean we shouldn't fund a more expensive intervention once the cheaper one is exhausted. And we basically want an infinite amount of talent, so in theory the activities that buy career changes at prices between $150k and $750k are also worth funding.

I think we can agree that

  • different activities have different cost-effectiveness, some of them substantially cheaper than $750k/career
  • we can use a basically infinite amount of talent, and the supply curve for career changes slopes upwards
  • we shouldn't pay more than the market price for any intervention e.g. throw $100k at a university group for dinners when it produces the same effect as $5k spent on dinners
  • we should fund every activity that has a cost-effectiveness of better than $750k per career change (or whatever the true number is), unless we saturate our demand for talent and lower the marginal benefit of talent, or deplete much of our money and increase the marginal benefit of money
  • we are unlikely to saturate our demand for talent by throwing more money at EA groups because there are other bottlenecks
  • Because most of the interventions are much cheaper than $750k/career change, our average cost will be much less than $750k/career change

comment by Guy Raveh · 2022-04-18T17:24:49.430Z · EA(p) · GW(p)

I strongly agree we need transparency. In lieu of democracy in funding, orgs need to be accountable to the movement in some way.

Also, what's a BOTEC?

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-18T17:31:32.277Z · EA(p) · GW(p)

I've updated this now: it's a Back Of The Envelope Calculation.

comment by Holly Morgan (Holly) · 2022-04-22T01:32:45.047Z · EA(p) · GW(p)

Just noticed Sam Bankman-Fried's 80,000 Hours podcast episode where he sheds some light on his thinking in this regard.

I think the excerpt below is not far from the OP's request that "if there is no BOTEC and it's more 'this seems plausibly good and we have enough money to throw spaghetti at the wall', please say that clearly and publicly."

Sam:

I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If you want this level of funding, you better effectively have proof that what you’re going to do is great.” We’re happy to give a lot with not that much evidence and not that much conviction — if we think it’s, in expectation, great. Maybe it’s worth doing more research, but maybe it’s just worth going for. I think that is something where it’s a different style, it’s a different brand. And we, I think in general, are pretty comfortable going out on a limb for what seems like the right thing to do.

Rob:

I guess you might bring a different cultural aspect here because you come from market trading, where you have to take a whole lot of risk and you’ve just got to be comfortable with that or there’s not going to be much out there for you. And also the very risk-taking attitude of going into entrepreneurship — like double-or-nothing all the time in terms of growing the business.

I’ve had a worry that’s been developing over the last year that the effective altruism community might be a bit too conservative about its giving at this point. Because many of us, including me, got our start when our style of giving was pretty cash-starved — it was pretty niche, and so we developed a frugal mindset, an “I’ve got to be careful” mindset.

And on top of that, to be honest, as a purely aesthetic matter, I like being careful and discerning, rather than moving fast and doing lots of stuff that I expect in the future is going to look foolish, or making a lot of bets that could make me look like an idiot down the road. My colleague, Benjamin Todd, estimated last year that there’s $46 billion committed to effective altruist–style philanthropy — of course that figure is flying around all the time, but it’s probably something similar now — and according to his estimates, that figure had been growing at 35% a year over the last six years. So increasingly, it’s been growing much faster than we’ve been able to disburse these funds to really valuable stuff.

So I guess me and other people might want to start thinking that maybe the big risk that we should be worried about is not about being too careless, but rather not giving enough to what look like questionable projects to us now — because the marginal project in 10 years’ time is going to be noticeably more mediocre or noticeably less promising. Or alternatively, we might all be dead from x-risk already because we missed the boat.

Sam:

Completely agree. That is roughly my instinct: that there are a lot of things that you have to go out on a limb for. I think it’s just the right thing to do, and that probably as a movement, we’ve been too conservative on that front. A lot of that is, as you said, coming from a place where there’s a lot less funding and where it made sense to be more conservative.

I also just think, as you said, most people don’t like taking risks. And especially, it’s often a really bad look to say you’re trying to do something great for the world and then you have no impact at all. I think that feels really demoralizing to a lot of people. Even if it was the right thing to do in expectation, it still feels really demoralizing. So I think that basically fighting against that instinct is the right thing to do, and trying to push us as a community to try ambitious things nonetheless.

Replies from: jlewars
comment by Jack Lewars (jlewars) · 2022-04-22T09:21:19.610Z · EA(p) · GW(p)

Very interesting, thanks. I read this as more saying 'we need to be prepared to back unlikely but potentially impactful things', and acknowledging the uncertainty in longtermism, rather than saying 'we don't think expected value is a good heuristic for giving out grants', but I'm not confident in that reading. Probably reflects my personal framing more than anything else.

Replies from: Holly
comment by Holly Morgan (Holly) · 2022-04-22T12:27:17.836Z · EA(p) · GW(p)

Oh, I read it as more the former too!

I read your post as:

  1. Asking if FTX have done something as explicit as a BOTEC for each grant or if it's more a case of "this seems plausibly good" (where both use expected value as a heuristic)
  2. If there are BOTECs, requesting they write them all up in a publicly shareable form
  3. Implying that the larger the pot, the more certain you should be ("these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA.")

I thought Sam's comments served as partial responses to each of these points. You seem to be essentially challenging FTX to be a lot more certain about the impact of their grants (tell us your reasoning so we can test your assumptions and help you be more sure you're doing the right thing, hire more staff like Open Phil so you can put a lot more work into these evaluations, reduce the risk of potential downsides because they're pretty bad) and Sam here essentially seems to be responding "I don't think we need to be that certain." I can't see where the expected value heuristic was ever called into question? Sorry if you thought that's how I was reading this.

[Edit: Maybe when you say "plausibly good" you mean "negative in expectation but a decent chance of being good", whereas I read it as "good in expectation but not as the result of an explicit BOTEC"? That might be where the confusion lies. If so, with my top-level comment I was trying to say "This is why FTX might be using heuristics that are even rougher than BOTECs and why they have a much smaller team than Open Phil and why they may not take the time to publish all their reasoning" rather than "This is why they might not be that bothered about expected value and instead are just funding things that might be good". Hope that makes sense.]

comment by David_Moss · 2022-04-19T14:10:32.898Z · EA(p) · GW(p)

Back when LEAN was a thing, we had a model of the value of local groups based on the estimated # of counterfactual actively engaged EAs, GWWC pledges and career changes, taking their value from 80,000 Hours' dollar valuations of career changes of different levels.

The numbers would all be very out of date now though, and the EA Groups Surveys post-2017 didn't gather the data that would allow this to be estimated.

comment by Kerkko Pelttari · 2022-04-18T22:15:09.853Z · EA(p) · GW(p)

Good questions; I have ended up thinking about many of these topics often.

Something else where I would find improved transparency valuable is the back-of-envelope calcs and statistics for rejected funding applications. Reading EA Funds reports, for example, doesn't give a complete view of where the current bar for interventions sits, because we're only seeing the distribution of projects above the cutoff point.

comment by Bluefalcon · 2022-04-22T00:54:22.741Z · EA(p) · GW(p)

I would prefer that they be less transparent so they don't have to waste their valuable time.

comment by NegativeNuno · 2022-04-18T14:06:20.443Z · EA(p) · GW(p)

Downvoted because of the clickbait title and the terrible formatting

Replies from: BenSchifman, lukasberglund, Guy Raveh, calebp, JackRyan
comment by BenSchifman · 2022-04-18T14:22:21.612Z · EA(p) · GW(p)

I know this isn't the central part of the post but I'm not sure the title is really clickbait.  It seems like an accurate headline to me? I understand clickbait to be "the intentional act of over-promising or otherwise misrepresenting — in a headline, on social media, in an image, or some combination — what you’re going to find when you read a story on the web."  Source.

A real clickbait title for this would be something like "The one secret fact FTX doesn't want you to know" or "Grantmakers hate him! One weird trick to make spending transparent" 

comment by berglund (lukasberglund) · 2022-04-18T14:11:59.011Z · EA(p) · GW(p)

Personally, I don't have a problem with the title. It clearly states the central point of the post. 

comment by Guy Raveh · 2022-04-18T17:23:29.291Z · EA(p) · GW(p)

Not long enough for the formatting to matter in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they're an original thought.

comment by calebp · 2022-04-19T08:14:46.488Z · EA(p) · GW(p)

One of the EA Forum norms that I like to see is people explaining why they downvoted a post/comment, so I'm a bit annoyed that NegativeNuno's comment, which supported this norm, was fairly heavily downvoted (without explanation).

comment by Jack R (JackRyan) · 2022-04-19T10:59:51.343Z · EA(p) · GW(p)

I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it