Posts

Holly Morgan's Shortform 2022-06-04T11:22:15.060Z
To PA or not to PA? 2022-04-15T15:29:43.316Z

Comments

Comment by Holly Morgan (Holly) on "Agency" needs nuance · 2022-09-12T21:42:00.203Z · EA · GW

Incidentally, I also appreciate comments like the first quote - not only have you given a summary, you've also given an indication of how much of the value of the post is contained in the summary 🙏 

Comment by Holly Morgan (Holly) on "Agency" needs nuance · 2022-09-12T21:39:57.217Z · EA · GW

If you’ve read the summary, I’m not sure how much benefit you’ll get from the rest of the post. Consider not reading it.

Okay. Still upvoting though for this general thing:

...things I’ve changed my mind on since my last post.

Comment by Holly Morgan (Holly) on Could it be a (bad) lock-in to replace factory farming with alternative protein? · 2022-09-11T09:38:36.255Z · EA · GW

I had to re-read too, but I read it as "Slavery was not primarily abolished for economic reasons."

Comment by Holly Morgan (Holly) on Could it be a (bad) lock-in to replace factory farming with alternative protein? · 2022-09-11T09:28:43.668Z · EA · GW

I feel like the reductio ad absurdum of your argument then is "Never encourage (maybe even discourage) anything that helps someone unless that thing is moral reasoning."

Comment by Holly Morgan (Holly) on Could it be a (bad) lock-in to replace factory farming with alternative protein? · 2022-09-10T22:02:40.779Z · EA · GW

"Why can't attitude change / moral progress still happen later?" E.g. when we're advocating for concern for wild animal suffering?

Comment by Holly Morgan (Holly) on Some advice the CEA groups team gives to new university group organizers · 2022-09-07T14:06:26.264Z · EA · GW

I know that authors sometimes forget to check comments on their posts, so in case you haven't received an answer and you're still looking for one, you might have more luck using Jessica's email which is listed here.

Comment by Holly Morgan (Holly) on Criticism of the main framework in AI alignment · 2022-09-06T08:20:34.233Z · EA · GW

I dunno, I still think my summary works. (To be clear, I wasn't trying to be like, "You must be exaggerating, tsk tsk," - I think you're being honest and for me it's the most important part of your post so I wanted to draw attention to it.)

Comment by Holly Morgan (Holly) on Criticism of the main framework in AI alignment · 2022-09-02T04:59:16.967Z · EA · GW

Tl;dr As far as you know, you're the only person in the world directly working on how to build AI that's capable of making moral progress i.e. thinking critically about goals as humans do.

(I find this pretty surprising and worrying, so wanted to highlight it.)

Comment by Holly Morgan (Holly) on What We Owe The Future is out today · 2022-08-18T19:17:02.021Z · EA · GW

Alright Henry, don't get carried away. The Very Hungry Caterpillar was the best thing to happen to What We Owe The Future.

Comment by Holly Morgan (Holly) on What We Owe The Future is out today · 2022-08-17T09:29:11.781Z · EA · GW

Currently at #52 on Amazon's Best Sellers list!

I imagine it's particularly good to get it to #50 so that it appears on the first page of results?

Comment by Holly Morgan (Holly) on Announcing the Longtermism Fund · 2022-08-11T13:32:02.543Z · EA · GW

Tl;dr the Longtermism Fund aims to be a widely accessible call-to-action to accompany longtermism becoming more mainstream 😍

Comment by Holly Morgan (Holly) on Recommendations for the lending library at a technology strategy consulting firm · 2022-08-10T23:20:28.545Z · EA · GW

More inspo here: https://forum.effectivealtruism.org/posts/TzooJmtZvxtK2kBQi/co-creation-of-the-library-of-effective-altruism-information

Comment by Holly Morgan (Holly) on Leaning into EA Disillusionment · 2022-07-24T21:44:00.808Z · EA · GW

Helen's post also resonated a lot with me. But this comment even more so. Thank you, geoffrey, for reminding me that I want to lean away from disillusionment à la your footnote :-)

(A similar instance of this a few months back: I was describing these kinds of feelings to an EA-adjacent acquaintance in his forties and he said, "That doesn't sound like a problem with EA. That sounds like growing up." And despite being a 30-year-old woman, that comment didn't feel at all patronising, it felt spot on.)

Comment by Holly Morgan (Holly) on Leveling-up Impartiality · 2022-07-14T21:43:43.234Z · EA · GW

Love your conclusion. I think for me it's importantly true, useful to convey/remember, and beautifully put.

Comment by Holly Morgan (Holly) on Why EAs should normalize using Glassdoor · 2022-06-23T16:10:33.949Z · EA · GW

Nice solution.

In a similar vein, I'd like to see more people asking "Can anyone DM me a quick review of [EA org] as a place to work / service provider?"

Comment by Holly Morgan (Holly) on You don’t have to respond to every comment · 2022-06-21T14:32:58.352Z · EA · GW

If you don’t want to engage with comments but feel awkward saying nothing, you can also share a link to this post and leave a comment response that just reads: 

Thank you for your comment. I appreciate it, but will not engage further.

 

You can also add a similar note to the end of a post, e.g. "Note: I may not respond to all comments but at least intend to read them all."

Comment by Holly Morgan (Holly) on Space Exploration & Satellites on Our World in Data · 2022-06-14T19:10:08.410Z · EA · GW

Interactive graph previews when you hover over each link! 😍

Comment by Holly Morgan (Holly) on EA Survey 2019 Series: Community Demographics & Characteristics · 2022-06-09T13:51:00.927Z · EA · GW

Thanks for doing this! Quick question: Are the survey questions still available somewhere?

Comment by Holly Morgan (Holly) on Four Concerns Regarding Longtermism · 2022-06-07T11:41:45.624Z · EA · GW

I think occasionally I hear people argue that others focus on longtermist issues in large part because it's more exciting/creative/positive etc to think about futuristic utopias, and then some of those people reply "Actually I really miss immediate feedback, tangible results, directly helping people etc, it's really hard to feel motivated by all this abstract stuff" and the discussion kind of ends there.

But the broader Social Capital Concern is something that deserves more serious attention I think. The 'core' of the EA community seems to be pretty longtermist (whether that's because it is sexier, or because these people have thought about / discussed / researched it a lot, whatever reason) and so you would expect this phenomenon of people acting more longtermist than they actually are in order to gain social capital within the community.

Marisa encourages neartermist EAs to hold on to their values here. Luke Freeman encourages EA to stay broad here. Owen Cotton-Barratt says "Global health is important for the epistemic foundations of EA, even for longtermists". [Edit: These are all community leaders (broadly defined), so as well as the specific arguments they make, I think the very fact that they're more prominent members of the community expressing these views is particularly useful when the issue at hand is social capital.]

I also kinda get the sense that many EA orgs/groups cater to the neartermist side of EA mainly out of epistemic humility / collaborative norms etc rather than personally prioritising the associated causes/projects. E.g. I'm pretty longtermist, but I still make some effort to help the more neartermist EAs find PAs - it felt like that was the default for a new community-focused organisation/project. And I remember some discussion around some of CEA's projects being too focused on longtermism a few years back and things seem to be more evenly distributed now.

(I think there are probably many more examples of public and private discussion along these lines, apologies for not giving a more comprehensive response - it's hard from this selection to get a sense of whether we're doing enough or even too much to correct for the Social Capital Concern. My intention wasn't actually to be like "Yeah, heard it all before" otherwise I expect I would have included some links to similar discussions to start with. I was more theorising as to what others might be thinking and explaining my own upvote. Sorry for not making this clearer - I'm just re-reading my first comment now and it seems a bit rude!)

Comment by Holly Morgan (Holly) on Four Concerns Regarding Longtermism · 2022-06-06T09:26:27.741Z · EA · GW

I like this. I was surprised it hasn't received more upvotes yet.

I suspect what's going on is that most people here are focused on the arguments in the post - and quite rightly so, I suppose, for a red teaming contest - and are thinking, "Meh, nothing I haven't heard before." Whereas I'm a bit unusual in that I almost always habitually focus on the way someone presents an argument and the wider context, so I read this and am like, "Omg EA-adjacent person making an effort to share their perspective and offering a sensible critique seemingly from a place of trying to help rather than to take the piss or vent their anger - this stuff is rare and valuable and I'm grateful to you for it (and to the contest organisers) and I want to encourage more of it."

Comment by Holly Morgan (Holly) on Holly Morgan's Shortform · 2022-06-04T11:26:05.872Z · EA · GW

Following my own advice: I will not be offended if I see someone asking "Has anyone used Pineapple Operations who can send me a quick review in DM?" on the Forum or on Slack etc (although I think we're pretty low-cost to use at the moment, so maybe not the best example).

Comment by Holly Morgan (Holly) on Holly Morgan's Shortform · 2022-06-04T11:22:15.307Z · EA · GW

It's OK to ask "Who can DM me a quick review of [EA-run service]?"

Problem: It's costly for EAs to find out which EA-run services will actually help them.

  1. An increasing number of EAs are offering services to the EA community
    • E.g. coaching, therapy, research, training, recruiting, tech support, consulting
  2. And it's useful to read honest reviews of services before using them
    • Especially when it's costly to trial/use them
  3. But it's awkward saying negative things about each other's work in public
    • Sometimes people do anyway, but usually wrt large orgs where it's less personal

Proposed partial solution: Normalise asking for private reviews of each other's services.

  • This seems like a relatively low-cost way to access honest reviews that, with some effort, has a good shot at not being too socially awkward
  • I think it's generally too awkward at the moment to ask in spaces where the service providers themselves might also be hanging out
  • So potential users, try it and feel free to link to this Shortform to explain why you're doing A Bit Of An Awkward Thing; service providers, encourage potential users to do it

Thanks to Jennifer Waldmann and Ozzie Gooen for helping me think this through.

Comment by Holly Morgan (Holly) on Revisiting the karma system · 2022-05-29T15:51:45.720Z · EA · GW

From the Forum user manual:

Posts that focus on the EA community itself are given a "community" tag. By default, these posts will have a weighting of "-25" on the Forum's front page (see below), appearing only if they have a lot of upvotes.

I wonder if this negative weighting for the Frontpage should be greater and/or used more, as I worry that the community looks too gossip-y/navel-gazing to newer users. E.g. of the 13 posts currently on the Frontpage (when not logged in), I'd say only around half are about more object-level stuff:

Tagged 'Community'

  1. On funding, trust relationships, and scaling our community [PalmCone memo]
  2. Some unfun lessons I learned as a junior grantmaker
  3. On being ambitious: failing successfully & less unnecessarily

About the community but not tagged as such

  1. Revisiting the karma system
  2. High Impact Medicine, 6 months later - Update & Key Lessons [tagged as 'Building effective altruism']

Unclear

  1. Introducing Asterisk [tagged as 'Community projects']
  2. Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing
  3. Monthly Overload of EA - June 2022

Not about the community

  1. Types of information hazards
  2. Will there be an EA answer to the predictable famines later this year?
  3. Energy Access in Sub-Saharan Africa: Open Philanthropy Cause Exploration Prize Submission
  4. Quantifying Uncertainty in GiveWell's GiveDirectly Cost-Effectiveness Analysis
  5. Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Comment by Holly Morgan (Holly) on Request: feedback on my EAIF application · 2022-05-29T04:51:21.550Z · EA · GW

Upvoted for your perseverance in trying again, the levelheadedness with which you seem to be taking feedback into account, and the courage it takes to invite public suggestions for improvement.

Comment by Holly Morgan (Holly) on Request: feedback on my EAIF application · 2022-05-29T04:08:30.302Z · EA · GW

Data point: I understood "Any feedback on my EAIF application?" as intended.

Comment by Holly Morgan (Holly) on Advice on how to get a remote personal/executive assistant · 2022-05-25T16:47:10.541Z · EA · GW

Do you use Virtalent UK?

Comment by Holly Morgan (Holly) on Advice on how to get a remote personal/executive assistant · 2022-05-25T03:19:42.852Z · EA · GW

[Edit: The following aren't exactly "good leads," but I thought it would still be useful to share some of the comments I've heard on options people in this community have tried.]

 

Fancy Hands

"Fancy Hands is a team of US-based virtual assistants".

Comments I've heard on them from a couple of EAs:

I used various assistants over a couple of months for in total maybe 30 tasks, each about 20 minutes long (their limit per one credit). I would say that the quality varied a lot and most of the time, they did not save me any time if I included coordination costs, etc. If it was a straightforward task that could be done in 20 min, they performed okay, but even in that case I still didn't find it to be a large time-saving. I know one EA who likes it and uses it somewhat regularly. Maybe I just didn't figure out a way to utilize them well. Feel free to share it with other people under "an EA told me"

And then I wasn't sure if this EA had actually used Fancy Hands, but they summarised it thus:

FancyHands function almost as a platform, where they remain in the middle and take a cut. It's easy to use a small # of hours per week, the provider will swap you out with other assistants relatively quickly, and the people you work with are relatively junior.

 

[Edit: Athena 

Athena sets you up with full-time Philippines-based PAs. One EA said the PAs nevertheless work to US timezones; another said that being in the Philippines was a problem when their colleague used Athena. Some other comments from EAs:

A couple people I know have been using them recently and seem happyish with them.

 

We have used them previously but found the person was too junior to be helpful - not sure if this is the norm or if we were paired with the wrong person.

 

I think there was an initial $1500 signing fee for equipment and now we pay $2500 per month [~£10/h]. If there are not enough tasks you can share a PA with multiple people, but ideally only one person would be coordinating and assigning them tasks.

My PA is good, but not super excellent. I'd recommend them for tasks such as:
- booking travel insurance, travel, and accommodation
- calling doctors/Covid centers, making appointments
- researching flight regulations
- transcribing messages (but not really drafting them from scratch, at least not with EA context and slang)
- sending out presents/gifts
- scheduling things (if all preferences are clear and it's not a high stakes meeting)
- researching things such as: "What is the newest HP laptop model with xyz features" -- needs to be pretty specific
- accountability things...

I'd recommend it for people who just don't like doing these things themselves. But you probably won't get a person who actively has your back, thinks about deadlines and is super proactive.

That being said, [another EA] also had an assistant from Athena but decided to switch to another assistant (also from Athena), because she made a bunch of smaller mistakes and was not able to do tasks.

 

I...talked to a few EAs who've used it and found they've had widely-ranging experiences

 

I'm trialing a full-time assistant via AthenaGo who has been pretty helpful, but I don't have a great reference point, since this is my first time around.


Another EA told me that they've found Athena to be "medium-y" so far.

And then in a phone call with another EA in March 2022, I was told that Athena is good but there's a waitlist, and that the first assistant they tried wasn't good enough, the second was good, and the third was okay.]

 

Upwork

When I was doing more PA work for EAs myself, I briefly tried experimenting with re-delegating anonymised tasks to Upwork, but I couldn't find any takers for the first task I tried. Another EA I know uses them for PA tasks though.

 

Assistant headhunters

One EA recommended US-based Pocketbook Agency...

https://www.pocketbookagency.com has been good to work with...it's low-commitment (paid on contingency, so you'll just fwd your description and do a 30m call and they send you candidates)

...and another EA said...

I’ve tried to use a domestic staffing agency in the Bay Area before but didn’t have much luck and they weren’t great to deal with.

Comment by Holly Morgan (Holly) on Advice on how to get a remote personal/executive assistant · 2022-05-25T02:46:32.776Z · EA · GW

Mati Roy is an EA with some US-timezone friendly VAs: https://bit.ly/PantaskServices (on the website it says "We hire mainly in North America and Europe" but I think they still generally prefer to share the Google doc).

[Edit: And before anyone wastes time on CampusPA - another EA-run PA agency that I sometimes hear mentioned - while their website is still up, the CEO's LinkedIn says "I closed CampusPA in February of 2022."]

Comment by Holly Morgan (Holly) on Advice on how to get a remote personal/executive assistant · 2022-05-25T02:29:03.489Z · EA · GW

So pleased that you've started this conversation, james! I'm really keen to see more EAs publicly sharing their experiences with various PA services.

I’ve started using 3 remote personal/executive assistants for my work projects. Our remote assistants have been awesome and super useful...Happy to answer any questions

Do you know if these 3 have more capacity and if Virtalent UK allows clients to request specific VAs? (You're the only person I've come across so far who's given a completely positive review of a VA service - reviews tend to be pretty mixed and mildly positive/negative overall. Maybe Virtalent UK is just generally excellent or maybe you've found some especially great VAs - if it's the latter, it would be awesome if others in the community could hire them!)

If working with a remote assistant doesn’t work out for you I think you’ll lose around £300 and 12 hours of your time in 1 month....Most remote assistant setups have very flexible monthly plans. You can start with just a few hours a week and scale up from there.

I just want to highlight to everyone that with Virtalent UK, the minimum you need to pay to try it out is indeed only £270 for 10 hours (I'd previously thought it was 10 hours a week i.e. more like £1150 and I think I told a few people as much - sorry!)

Comment by Holly Morgan (Holly) on Advice on how to get a remote personal/executive assistant · 2022-05-25T02:12:13.127Z · EA · GW

Thanks, Lee!

Currently ~half of the PAs we list publicly or suggest privately are in the US and every one is open to working remotely.

The main differentiators from standard VA services are currently that:

  • almost all of our PAs are existing members of the EA community (some thoughts on the value of EA vs non-EA assistants here)
  • many are open to in-person work, with some even open to relocating
  • most lack PA experience
  • it's a 'matchmaking' rather than an agency model - users hire the PAs directly

Comment by Holly Morgan (Holly) on Some unfun lessons I learned as a junior grantmaker · 2022-05-24T22:51:33.804Z · EA · GW

A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision" problem that Larks mentions.

I had one group ask me for feedback on their rejected grant proposal at a recent EAG and I was confused why they were asking me at the time, but I now think it's not a bad idea if you can't get the time/energy of the grantmakers in question.

(Apologies if this is what you were suggesting, PabloAMC, I just thought from the thread on this comment so far you were suggesting meeting the grantmakers who rejected the proposal.)

Comment by Holly Morgan (Holly) on Announcing the Future Fund · 2022-05-17T00:25:14.190Z · EA · GW

Upvoted because this comment was on -1 karma, I suspect unfairly given that the FTX Future Fund website says "Please post any questions you might have as public comments here" in lieu of a contact form.

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-16T18:24:29.686Z · EA · GW

Oh yes I know - with my reply I was (confusingly) addressing the unreceptive people more than I was addressing you. I'm glad that you're keen :-)

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-13T14:50:00.634Z · EA · GW

Nice. And when it comes to links, ~half the time I'll send someone a link to the Wikipedia page on EA or longtermism rather than something written internally.

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-13T14:43:52.200Z · EA · GW

Maybe you want to select for the kind of people who don't find it too boring! My guess, though, is that the project idea as currently stated is actually a bit too boring for even most of the people that you'd be trying to reach. And I guess groups aren't keen to throw money at trying to make it more fun/prestigious in the current climate... I've updated away from thinking this is a good idea a little bit, but would still be keen to see several groups try it.

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-13T14:33:32.298Z · EA · GW

Agreed, hence "I don't even think the main aim should be to produce novel work". Imagine something between a Giving Game and producing GiveWell-standard work (much closer to the Giving Game end). Like the Model United Nations idea - it's just practice.

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-13T14:27:02.548Z · EA · GW

Aye and EA London did a smaller version of something in this space focused on equality and justice.

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-13T00:09:34.719Z · EA · GW

I wonder if the suggestion here to replace some student reading groups with working groups might go some way to demonstrating that EA is a question.

I don't even think the main aim should be to produce novel work (as suggested in that post); I'm just thinking about having students practice using the relevant tools/resources to form their own conclusions. You could mentor individuals through their own minimal-trust investigations. Or run fact-checking groups that check both EA and non-EA content (which hopefully shows that EA content compares pretty well but isn't perfect...and if it doesn't compare pretty well, that's very useful to know!)

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-12T23:54:18.339Z · EA · GW

| I think the solution here is to create boundaries so you're not optimizing against people.

I prefer 80,000 Hours' 'plan changes' metric to the 'HEA' one for this reason (if I've understood you correctly).

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-12T23:30:18.727Z · EA · GW

| Separation from friends and loved ones: Happens accidentally due to value changes.

I hope by this you mean something like "People in general tend to feel a bit more distant from friends when they realise they have different values and EA values are no exception." But if you've actually noticed much more substantial separation tending to happen, I personally think this is something we should push back against, even if it does happen accidentally. Not just for optics' sake ("Mentioning other people and commitments in your life other than EA might go a long way"), but for not feeling socially/professionally/spiritually dependent on one community, for avoiding groupthink, for not feeling pressure to make sacrifices beyond your 'stretch zone.'

Comment by Holly Morgan (Holly) on Bad Omens in Current Community Building · 2022-05-12T22:45:37.925Z · EA · GW

When I was working for EA London in 2018, we also had someone tell us that the free books thing made us look like a cult and they made the comparison with free Bibles.

Comment by Holly Morgan (Holly) on Increasing Demandingness in EA · 2022-05-06T10:11:42.874Z · EA · GW

assuming this is constrained by the number of PAs, though I have no idea whether it is

It is.

Comment by Holly Morgan (Holly) on Brief Presentation and Considerations for an EA Common Application · 2022-05-04T21:19:25.894Z · EA · GW

EA values talent more broadly and valuable candidates should be developed and supported beyond any one hiring cycle.

 

Quick wins for EA hiring managers:

  1. In application forms, include a request for permission to share the candidate's application with people who are hiring for similar roles.
  2. Search for recently closed hiring rounds for similar roles and ask the hiring managers if they have permission to share any strong applications with you (or if they could share your ad with the strong applicants they didn't hire).

(This does often happen, but often doesn't. I imagine it's easy to forget and hard to come across the idea in the first place because it's particular to collaborative communities rather than standard business practice.)

Comment by Holly Morgan (Holly) on Three intuitions about EA: responsibility, scale, self-improvement · 2022-04-29T11:14:11.649Z · EA · GW

I loved this post but ignored it the first time I saw it because I had a poor sense of what it would be about. But the title does act as a nice summary after someone's read the post if they're trying to find it again. Have you considered adding a tl;dr? E.g.

  1. In a global sense, there are no “adults in the room,” but EA is starting to change that
  2. It's easier to achieve big change with a startup investor mindset than a marginalist mindset
  3. EA should prioritise personal growth e.g. replace some local reading groups with working groups

Comment by Holly Morgan (Holly) on To PA or not to PA? · 2022-04-27T11:52:12.472Z · EA · GW

I've only just seen this Forum Question from Sep 2020: Has anyone gone into the 'High-Impact PA' path?

Some highlights:

  • CarolineJ found in her own case that PA work looked more like project management over time; she called it "a tough and high-impact job, that is often undervalued compared to what the person brings" and said that important skills include organisation, communication, and being analytical and a generalist
  • matthew.vandermerwe talks about his time as a Research Assistant and Project Manager for Toby Ord, estimating that "I think I (very roughly) added 5–25% to the book’s impact, and freed up 10–33% of Toby's time", but notes re career capital of an RA/PA/etc that "while these jobs are relatively highly regarded in EA circles, they can sound a bit baffling to anyone else."
  • Tanya was an Executive Assistant (ExA) to Nick Bostrom and then became Director of Strategy and Operations at the Future of Humanity Institute
  • A couple of PAs/ExAs mentioned saving the person they were supporting around 10 hours a week
  • Someone who has been an ExA to several EAs said that they reckon the most impactful tasks/responsibilities are:
    • inbox management
    • calendar management (more as a gatekeeper than a calendly)
    • deadline management
    • prioritisation support ("a voice of reason when the EA/researcher is led towards spending time on something less important")
    • taking small annoying tasks plus the occasional big project off their plate

Comment by Holly Morgan (Holly) on FTX/CEA - show us your numbers! · 2022-04-22T12:27:17.836Z · EA · GW

Oh, I read it as more the former too!

I read your post as:

  1. Asking if FTX have done something as explicit as a BOTEC for each grant or if it's more a case of "this seems plausibly good" (where both use expected value as a heuristic)
  2. If there are BOTECs, requesting they write them all up in a publicly shareable form
  3. Implying that the larger the pot, the more certain you should be ("these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA.")

I thought Sam's comments served as partial responses to each of these points. You seem to be essentially challenging FTX to be a lot more certain about the impact of their grants (tell us your reasoning so we can test your assumptions and help you be more sure you're doing the right thing, hire more staff like Open Phil so you can put a lot more work into these evaluations, reduce the risk of potential downsides because they're pretty bad) and Sam here essentially seems to be responding "I don't think we need to be that certain." I can't see where the expected value heuristic was ever called into question? Sorry if you thought that's how I was reading this.

[Edit: Maybe when you say "plausibly good" you mean "negative in expectation but a decent chance of being good", whereas I read it as "good in expectation but not as the result of an explicit BOTEC"? That might be where the confusion lies. If so, with my top-level comment I was trying to say "This is why FTX might be using heuristics that are even rougher than BOTECs and why they have a much smaller team than Open Phil and why they may not take the time to publish all their reasoning" rather than "This is why they might not be that bothered about expected value and instead are just funding things that might be good". Hope that makes sense.]

Comment by Holly Morgan (Holly) on FTX/CEA - show us your numbers! · 2022-04-22T01:32:45.047Z · EA · GW

Just noticed Sam Bankman-Fried's 80,000 Hours podcast episode where he sheds some light on his thinking in this regard.

I think the excerpt below is not far from the OP's request that "if there is no BOTEC and it's more 'this seems plausibly good and we have enough money to throw spaghetti at the wall', please say that clearly and publicly."

Sam:

I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If you want this level of funding, you better effectively have proof that what you’re going to do is great.” We’re happy to give a lot with not that much evidence and not that much conviction — if we think it’s, in expectation, great. Maybe it’s worth doing more research, but maybe it’s just worth going for. I think that is something where it’s a different style, it’s a different brand. And we, I think in general, are pretty comfortable going out on a limb for what seems like the right thing to do.

Rob:

I guess you might bring a different cultural aspect here because you come from market trading, where you have to take a whole lot of risk and you’ve just got to be comfortable with that or there’s not going to be much out there for you. And also the very risk-taking attitude of going into entrepreneurship — like double-or-nothing all the time in terms of growing the business.

I’ve had a worry that’s been developing over the last year that the effective altruism community might be a bit too conservative about its giving at this point. Because many of us, including me, got our start when our style of giving was pretty cash-starved — it was pretty niche, and so we developed a frugal mindset, an “I’ve got to be careful” mindset.

And on top of that, to be honest, as a purely aesthetic matter, I like being careful and discerning, rather than moving fast and doing lots of stuff that I expect in the future is going to look foolish, or making a lot of bets that could make me look like an idiot down the road. My colleague, Benjamin Todd, estimated last year that there’s $46 billion committed to effective altruist–style philanthropy — of course that figure is flying around all the time, but it’s probably something similar now — and according to his estimates, that figure had been growing at 35% a year over the last six years. So increasingly, it’s been growing much faster than we’ve been able to disburse these funds to really valuable stuff.

So I guess me and other people might want to start thinking that maybe the big risk that we should be worried about is not about being too careless, but rather not giving enough to what look like questionable projects to us now — because the marginal project in 10 years’ time is going to be noticeably more mediocre or noticeably less promising. Or alternatively, we might all be dead from x-risk already because we missed the boat.

Sam:

Completely agree. That is roughly my instinct: that there are a lot of things that you have to go out on a limb for. I think it’s just the right thing to do, and that probably as a movement, we’ve been too conservative on that front. A lot of that is, as you said, coming from a place where there’s a lot less funding and where it made sense to be more conservative.

I also just think, as you said, most people don’t like taking risks. And especially, it’s often a really bad look to say you’re trying to do something great for the world and then you have no impact at all. I think that feels really demoralizing to a lot of people. Even if it was the right thing to do in expectation, it still feels really demoralizing. So I think that basically fighting against that instinct is the right thing to do, and trying to push us as a community to try ambitious things nonetheless.

Comment by Holly Morgan (Holly) on FTX/CEA - show us your numbers! · 2022-04-22T01:06:11.959Z · EA · GW

Relevant comment from Sam Bankman-Fried in his recent 80,000 Hours podcast episode: "In terms of staffing, we try and run relatively lean. I think often people will try to hire their way out of a problem, and it doesn’t work as well as they’re hoping. I’m definitely nervous about that." (https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/#ftx-foundation-002022)

Comment by Holly Morgan (Holly) on To PA or not to PA? · 2022-04-21T06:03:01.303Z · EA · GW

Yeah hopefully the "low status" aspect is starting to change, but I think it's a reality of operations work in general that the crew will never get the glory of the cast, no matter how important they are to the final outcome (...which is sometimes a relief to those of us who don't like the pressure of being in the limelight!).

Comment by Holly Morgan (Holly) on FTX/CEA - show us your numbers! · 2022-04-19T20:23:57.802Z · EA · GW

| Out of interest, did you read the post as emotional? I was aiming for brevity and directness

Ah, that might be it. I was reading the demanding/requesting tone ("show us your numbers!", "could FTX and CEA please publish" and "If this is too time-consuming...hire some staff" vs "Here's an idea/proposal") as emotional, but I can see how you were just going for brevity/directness, which I generally endorse (and have empathy for the emotional approach FWIW, but generally don't feel like I should endorse it as such).