Probability distributions of Cost-Effectiveness can be misleading 2022-07-18T17:42:06.577Z
Who wants to be hired? (May-September 2022) 2022-05-27T09:49:53.065Z
Who's hiring? (May-September 2022) 2022-05-27T09:49:35.554Z
What are your giving recommendations to non-EA friends interested in supporting Ukraine? 2022-03-05T13:05:25.829Z


Comment by Lorenzo (Lorenzo Buonanno) on "Defective Altruism" by Nathan J. Robinson in Current Affairs · 2022-09-26T14:48:48.454Z · EA · GW

You might be interested in the discussion here

Comment by Lorenzo (Lorenzo Buonanno) on 9/26 is Petrov Day · 2022-09-26T12:01:52.549Z · EA · GW

And apparently the movie did come out (in 2013)

Comment by Lorenzo (Lorenzo Buonanno) on Assessing Cost Effectiveness: malnutrition, famine, and cause prioritization · 2022-09-25T18:00:58.658Z · EA · GW

Thank you so much for doing this research! This indeed seems to be a common topic we don't yet have a "standard" answer to.

You might be interested in this post which has more discussion than the ones you linked

Comment by Lorenzo (Lorenzo Buonanno) on Open Thread: June — September 2022 · 2022-09-24T18:32:07.937Z · EA · GW

Hi Sibo!

You might be interested in talking with and applying for career coaching at 

Comment by Lorenzo (Lorenzo Buonanno) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T14:20:55.917Z · EA · GW

Proton Mail and Signal are end-to-end encrypted messaging services.

But depending on how paranoid the users need to be, these systems might not provide enough guarantees, since you would need to trust the servers not to mount a man-in-the-middle (MITM) attack, unless you do some sort of in-person key exchange.

But I'm definitely not an expert. In general, I think there are plenty of experts who know exactly how to handle these things, and they're pretty easy to contact.

Edit: I agree with acylhalide's comment: if you have government-level actors involved, this is potentially not enough.
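In practice, the in-person key exchange mentioned above usually reduces to comparing key fingerprints over a separate trusted channel. A toy sketch of the principle (the key bytes below are made up; real tools like GnuPG and Signal compute and display fingerprints/safety numbers for you):

```python
import hashlib

# Hypothetical serialized public key, as received over the untrusted network.
received_key = b"-----BEGIN PUBLIC KEY-----\nMIIBIjANBg...\n-----END PUBLIC KEY-----"

def fingerprint(key_bytes: bytes) -> str:
    """Short hex digest that two parties can read aloud in person or by phone."""
    return hashlib.sha256(key_bytes).hexdigest()[:16]

# If the fingerprint the key's owner reads out from their own copy matches this
# one, no man-in-the-middle has swapped the key in transit.
print(fingerprint(received_key))
```

Any tampering with the key changes the digest, which is why comparing a short fingerprint out of band is enough to detect a swapped key.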

Comment by Lorenzo (Lorenzo Buonanno) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T12:47:50.730Z · EA · GW

Thank you! And doubly thank you for the topic link. In case others are confused, I found the end of this post particularly clear

it may be worth dividing out existential risk into extinction risk, collapse risk, flawed realisation risk and plateauing risk

Comment by Lorenzo (Lorenzo Buonanno) on What regrets have you had regarding past donations? · 2022-09-24T12:33:08.296Z · EA · GW

I regret not donating more and not donating earlier. I have way too much savings, and my family is very supportive and would be happy to host me if I end up unable to pay rent.

I regret donating directly to GiveWell's top charities instead of their "all grants fund" (then Maximum Impact Fund). Especially since many of those charities have programs of varying cost-effectiveness.

Contradicting my first point, I regret donating to various random EA charities instead of focusing my donations on the most promising fund after a lot of research. I don't think I ever was at a scale where splitting made sense.

Lastly, I regret not networking more and earlier with EAs doing exciting stuff that might need some liquidity or fallback options, in case some promised small (<5000€) grant doesn't work out or takes months. Or if they can't afford to pay for coaching/counseling.

Comment by Lorenzo (Lorenzo Buonanno) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T12:17:59.878Z · EA · GW

Narrowing your question to whether extinction risk is a concern ignores various existential [...] risks associated with AGI development.

What's the difference between extinction risk and existential risk?

Comment by Lorenzo (Lorenzo Buonanno) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T11:01:34.452Z · EA · GW

Probably missing something obvious, but could they either:

  • PGP encrypt it with the reviewer's public key, and send it via email?
  • Use an e2e encrypted messaging medium? (Don't know which are trustworthy, but I'm sure there's an expert consensus)

Or are those not user-friendly enough?

I think this is a solved problem in infosec (but am probably missing something)

Comment by Lorenzo (Lorenzo Buonanno) on Fixed: The EA related domain was expiring. Is anyone doing something about it? · 2022-09-24T08:24:49.663Z · EA · GW

I would put the domain name in the title, so people managing one of the hundreds of EA websites don't skip a heartbeat

Comment by Lorenzo (Lorenzo Buonanno) on What are people's thoughts on working for DeepMind as a general software engineer? · 2022-09-23T20:17:12.477Z · EA · GW

Have you considered comparing the role at DeepMind with similar roles at e.g. Redwood and Anthropic? I think there is no consensus on which one would be most impactful, but both are less controversial than DeepMind (though I could be very wrong; I'm definitely not an expert, and would just suggest considering at least 3 jobs before picking one)

Comment by Lorenzo (Lorenzo Buonanno) on Guesstimate Algorithm for Medical Research · 2022-09-23T18:06:58.379Z · EA · GW

Thanks for writing this! I'm really curious about your thoughts on using Guesstimate vs Squiggle

Squiggle models from the linked guesstimate models:

(Generated using )

I'm thinking of making an alternative to Guesstimate that's more scalable and more easily integrated with Google Sheets, but I'm unsure about what would be the actual value for researchers. Especially now that QURI is focusing on Squiggle.
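For readers unfamiliar with these tools: both Guesstimate and Squiggle are essentially friendly front-ends for Monte Carlo simulation over named quantities. A minimal Python sketch of the same idea (all distributions and numbers here are hypothetical, not taken from the linked models):

```python
import math
import random
import statistics

random.seed(0)

def cost_per_outcome() -> float:
    """One draw from a toy cost-effectiveness model."""
    cost = random.lognormvariate(math.log(50), 0.5)  # $ per unit delivered
    effect = random.uniform(0.05, 0.15)              # outcomes per unit
    return cost / effect

# Propagate the uncertainty by sampling, then summarize the distribution.
samples = [cost_per_outcome() for _ in range(10_000)]
print(f"median ${statistics.median(samples):,.0f} per outcome")
```

Guesstimate and Squiggle let you express the same model declaratively (a cell or line per quantity) and visualize the resulting distribution, which is where the scalability and spreadsheet-integration questions come in.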

Comment by Lorenzo (Lorenzo Buonanno) on The Unweaving of a Beautiful Thing · 2022-09-23T12:04:08.397Z · EA · GW

Any updates on this?

Comment by Lorenzo (Lorenzo Buonanno) on Software Engineer: what to do with 3 days of volunteering? · 2022-09-21T16:18:28.164Z · EA · GW

What cause area are you most interested in?
Would you spend only those 3 days, or would you be interested in using those days to familiarize yourself with a project and be open to contributing more in the future?

You might be interested in , (AI Alignment, looking for 3 hours per week), and 

Comment by Lorenzo (Lorenzo Buonanno) on ChanaMessinger's Shortform · 2022-09-14T22:12:57.451Z · EA · GW

Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.

My 2 cents:

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly, it can cause point 5. Also, "getting people to join your group" is rarely a terminal goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).
We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.

I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.

The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".
I don't know if this makes any sense. I'm really curious to hear your thoughts; you have certainly thought about this more than I have.

Comment by Lorenzo (Lorenzo Buonanno) on EA Organization Updates: September 2022 · 2022-09-14T20:43:28.758Z · EA · GW

They work for me: and 

Comment by Lorenzo (Lorenzo Buonanno) on Who's hiring? (May-September 2022) · 2022-09-14T09:52:03.900Z · EA · GW

Added, thanks!
Keep in mind this thread is going to end this week, but will still be useful for future people stumbling on it

Comment by Lorenzo (Lorenzo Buonanno) on New Faunalytics' Study On The Barriers And Strategies For Success When Going Vegan or Vegetarian · 2022-09-08T10:05:15.959Z · EA · GW

You can see the full list in the linked article. In order of importance:

1. Feeling unhealthy on the veg*n diet
2. Low identification with veg*nism
3. Believing society perceives veg*nism negatively
4. Low autonomy support
5. Cultural influence making it more difficult to go veg*n
6. Weak habit formation around choosing veg*n food
7. Difficulty finding or preparing veg*n food
8. Feeling ashamed of one’s veg*n diet
9. Low personal control over food
10. Small veg*n network
11. Feeling that veg*nism hasn’t positively impacted one’s health goals
12. Low motivation
13. Frequent cravings for animal products

Specifically, people who felt unhealthy on their veg*n diet were more than three times as likely to abandon it within the first six months (30% vs. 8%). People who did not see veg*nism as part of their personal identity were about twice as likely as others to abandon it (16% vs. 8%). And people who thought society perceives veg*nism negatively were about 1.5 times as likely as others to abandon their diet (13% vs. 8%)

Comment by Lorenzo (Lorenzo Buonanno) on Yonatan Cale's Shortform · 2022-09-07T09:21:05.409Z · EA · GW

This talk is also pretty good!

Comment by Lorenzo (Lorenzo Buonanno) on Who are some less-known people like Petrov? · 2022-09-06T16:13:57.340Z · EA · GW

Maybe Vavilov and his colleagues?

(Have not fact checked this account, but the Wikipedia page seems to broadly agree)

Comment by Lorenzo (Lorenzo Buonanno) on Earn To Give $1M/year or Work Directly? · 2022-09-06T13:26:29.264Z · EA · GW

At (roughly) what amount would you be undecided between the Earning to Give option and getting the first choice?

Comment by Lorenzo (Lorenzo Buonanno) on Celebrations and gratitude thread · 2022-09-04T14:41:23.593Z · EA · GW

I'm immensely grateful for Yonatan Cale's work: posts and comments on the forum, his coaching, posts on Facebook, Twitter polls, a curated list of software jobs, and I'm probably forgetting tons of things!

Also, he's helping me a ton in private conversations. I strongly empathize with the comments on his post on coaching

Comment by Lorenzo (Lorenzo Buonanno) on Human Capital · 2022-08-30T17:57:28.655Z · EA · GW

I would say is the main one, there's also and you might also include

You might be interested in the career choice and career advising forum topics

Comment by Lorenzo (Lorenzo Buonanno) on Software Developers: How to have Impact? A Software Career Guide · 2022-08-29T21:12:33.141Z · EA · GW

This post is getting better and better! Thanks so much for it!

Comment by Lorenzo (Lorenzo Buonanno) on Any EA focus/funding on lymphatic filariasis? · 2022-08-28T22:17:23.722Z · EA · GW

I assume you already looked at

But just in case, there seem to be a bunch of interesting posts on the topic:

This post has some links to sources.

Mass Drug Administration to combat Lymphatic filariasis (MDA LF): MDA LF is much less cost-effective (unpublished CEA estimate) compared to other intervention areas we looked at. Since the program is a mass drug administration rather than a targeted approach, the prevalence has to be above a certain rate for it to be cost-effective. The microfilaria prevalence is too low for MDA LF to be cost-effective in India (9). Infection rates in other countries are also not sufficiently high. The majority of those infected never exhibit symptoms (10); of those who do, only a small percentage develop severe symptoms that cause large problems like social ostracization and depression. Furthermore, crowdedness in this intervention is fairly high. There are already eight charities active in the area, and the crowdedness in the areas of greater microfilaria prevalence is especially high. The Indian government claims to cover 85% (11) of the country with preventative medication. While there is a problem of people not taking the medication once they receive it (12), this is not a straightforward or cost-effective problem to solve.

And this post

There has been a large decline in the number of people with lymphatic filariasis, a neglected tropical disease. From ~200 million people in 2000 to ~50 million people in 2018

And this

GSK recently announced its commitment to eradicating lymphatic filariasis (GSK Announces £1 Billion, 2022)

Comment by Lorenzo (Lorenzo Buonanno) on Open Thread: June — September 2022 · 2022-08-27T13:36:28.462Z · EA · GW

Hi Emre!

You might be interested in the Land Use Reform forum topic, and the posts at the bottom of that page

Comment by Lorenzo (Lorenzo Buonanno) on Donating crypto effectively? · 2022-08-24T09:54:59.858Z · EA · GW

GiveWell has a page "Making a Donation of Cryptocurrency to GiveWell" which might be useful.

Comment by Lorenzo (Lorenzo Buonanno) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T07:56:47.789Z · EA · GW

I was also surprised to read the section "how do EAs compare to professional translators on the quality of the products they produce?", which makes me update slightly towards it not being that big of a deal.

the quality of the best products produced by EAs and the best products produced by professionals seemed to be about the same, on average, as assessed (blinded) by Guille. This was a small sample assessed by one person, so it doesn’t constitute much evidence.

Comment by Lorenzo (Lorenzo Buonanno) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T07:49:10.281Z · EA · GW

see below for an explanation of what this might look like

Is this supposed to link to this section?

Comment by Lorenzo (Lorenzo Buonanno) on Open Thread: June — September 2022 · 2022-08-23T20:25:23.539Z · EA · GW

Hi Drew! You might want to post in the Who's hiring thread for more visibility

Comment by Lorenzo (Lorenzo Buonanno) on How many EA billionaires five years from now? · 2022-08-20T11:42:54.189Z · EA · GW

You can paste it directly into a Squiggle playground and play around with the parameters yourself.

You might already know this, but you can link directly to the Squiggle Model, without the need for copy-pasting.

Comment by Lorenzo (Lorenzo Buonanno) on Rhodri Davies on why he's not an EA · 2022-08-19T20:02:46.379Z · EA · GW

Hi Hannah! My very personal perspective (I'm still relatively new to EA):

On "uncertainty in general": I see lots of posts on Moral Uncertainty, Cluelessness, Model Uncertainty, "widen your confidence intervals", "we consider our cost-effectiveness numbers to be extremely rough", and so on, even after spending tens of millions per year on research.
I think this is very different from the attitude of the Scientific Charity movement.

On "beneficiaries preferences" I agree with you that the vast majority of EA in practice discounts them heavily, probably much more than when the post I linked to was written.

They are definitely taken into account though. I really like this document from a GiveWell staff member, and I think it's representative of how a large part of EA not focused on x-risk/longermism thinks about these things. Especially now that GiveDirectly has been removed from GiveWell recommended charities, which I think aura-wise is a big change.
But lots of EAs still donate to GiveDirectly, and GiveDirectly still gives talks in EA conferences and is on EA job boards.

I personally really like the recent posts and comments advocating for more research, and I think taking into account beneficiaries preferences is a tricky moral problem for interventions targeting humans.

Also probably worth mentioning "Big Tent EA" and "EA as a question".

Comment by Lorenzo (Lorenzo Buonanno) on Rhodri Davies on why he's not an EA · 2022-08-18T13:00:00.600Z · EA · GW

Thanks for posting this, super interesting! I was nicely surprised to find this forum post in the Wikipedia page on the "Scientific Charity Movement".

I think that post highlights some important differences. Some interesting quotes:
> EAs are also much less confident that they know what people need better than they do.
> To many EAs, dividing the poor into deserving and undeserving groups just doesn't make sense
> standards of evidence are much better now than they were over a century ago
> errors of charity that EA is a response to generally include the errors of SC
> The main way I see the comparison as a warning is that EA could end up somewhere where EA continues to talk in a scientific way, confidence goes up, standards of evidence fall, and EA ends up pushing hard on things that aren't actually that important.

Comment by Lorenzo (Lorenzo Buonanno) on What happens on the average day? · 2022-08-17T14:58:39.173Z · EA · GW

one trillion wild birds

This seems high, where does it say so in the paper? The Tomasik article you use for wild mammals estimates 0.1 to 0.4 trillion wild birds.

I don’t think it makes sense to say that on a given day there are, say, 26 billion poultry alive, though, given death rates on farms. You’d need to do more stats to get an estimate of the number of poultry birds alive right now.
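The "more stats" here is roughly Little's law: standing population ≈ throughput × average lifetime. A back-of-the-envelope sketch with illustrative round numbers (not figures from the post):

```python
# Little's law: animals alive at any moment ≈ animals per year × (lifespan / year).
slaughtered_per_year = 70e9   # ballpark: ~70 billion broiler chickens per year
avg_life_days = 45            # typical broiler lifespan, in days
standing_population = slaughtered_per_year * avg_life_days / 365
print(f"{standing_population / 1e9:.1f} billion broilers alive at any moment")
```

This covers broilers only; layers and other poultry live much longer, so the total standing count is considerably higher, but the point stands that short lifespans push the alive-right-now number well below the annual throughput.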

You can find some more estimates here

Comment by Lorenzo (Lorenzo Buonanno) on What We Owe The Future is out today · 2022-08-17T10:17:12.863Z · EA · GW

Clickable link: 

Comment by Lorenzo (Lorenzo Buonanno) on The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk · 2022-08-17T05:33:26.899Z · EA · GW

You can find it on sci-hub:

Comment by Lorenzo (Lorenzo Buonanno) on Structuring new charities (even more) like tech startups · 2022-08-17T05:31:52.759Z · EA · GW

The career coaching seems different from the incubation program, as far as I can tell your points apply mostly to the latter, right?

Comment by Lorenzo (Lorenzo Buonanno) on The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk · 2022-08-16T20:44:14.203Z · EA · GW

You might find the thread "The AI messiah" and the comments there interesting.

You quote AI results from the 70s and 90s as examples of overly optimistic AI predictions.

In recent years there are many, many examples of predictions being too conservative (e.g. DeepMind's AlphaGo beating Lee Sedol at Go in 2016, GPT-3, Minerva, Imagen ...).
Self-driving seems to be the only field where progress has been slower than some expected. See e.g. "progress on ML benchmarks happened significantly faster than forecasters expected" (even if it was sensitive to the exact timing of a single paper, I think it's a useful data point).

Would that make you increase the importance of AI risk as a priority?

Comment by Lorenzo (Lorenzo Buonanno) on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-08-16T12:50:08.012Z · EA · GW

But, if your philanthropy is explicitly going against what the recipient would choose for themself, well... From my perspective (as Vanessa this time),  this is not even altruism anymore. This is imposing your own preferences on other people

Would this also apply to e.g. funding any GiveWell top charity besides GiveDirectly, or would that fall into "in practice, this is the best way to maximize the recipient's decision-utility"?

I don't think most recipients would buy vitamin supplementation or bednets themselves, given cash.
I guess you could say that it's because they're not "well informed", but then how could you predict their "decision utility when well informed" besides assuming it would correlate strongly with maximizing their experience utility?

A bit off-topic, but I found GiveWell's staff documents on moral weights fascinating for deciding how much to weigh beneficiaries' preferences, from a very different angle.

Comment by Lorenzo (Lorenzo Buonanno) on Historical EA funding data · 2022-08-16T07:56:37.847Z · EA · GW

Thanks for doing and sharing this, really interesting!

Random curiosity, how did your spreadsheet make it into the article about EA?

Comment by Lorenzo (Lorenzo Buonanno) on Structuring new charities (even more) like tech startups · 2022-08-13T19:05:15.677Z · EA · GW

Super excited to see more interest in this space, and people starting things in general, kudos!

Have you talked with the people working on Mind Ease and/or Canopie? (As far as I understand, Canopie was originally a Charity Entrepreneurship incubated charity, then became a for-profit).
Also might be interesting to talk with the people that worked on hippo.

Did you consider applying to Charity Entrepreneurship career coaching?

Curious about what resource specifically you have in mind!

Comment by Lorenzo (Lorenzo Buonanno) on EA 1.0 and EA 2.0; highlighting critical changes in EA's evolution · 2022-08-13T18:07:34.413Z · EA · GW

GiveWell apparently has different (higher) numbers

Comment by Lorenzo (Lorenzo Buonanno) on AMA: Ought · 2022-08-11T06:04:51.508Z · EA · GW

What are your views on whether speeding up technological development is, in general, a good thing?

I'm thinking of arguments like, that make me wonder if we should try to slow research instead of speeding it up.

Or do you think that Elicit will not speed up AGI capabilities research in a meaningful way? (Maybe because it will count as misuse)

It's something I'm really uncertain about personally, that's going to heavily influence my decisions/life, so I'm really curious about your thoughts!

Comment by Lorenzo (Lorenzo Buonanno) on EA 1.0 and EA 2.0; highlighting critical changes in EA's evolution · 2022-08-10T17:36:09.782Z · EA · GW

That's an amazing spreadsheet you linked there! Did you collect the data yourself?

Comment by Lorenzo (Lorenzo Buonanno) on Are "Bad People" Really Unwelcome in EA? · 2022-08-10T10:10:19.476Z · EA · GW

Some quotes helping other altruists:

by helping other people as much as possible, without any expectation of your favours being returned in the near future — you end up being much more successful, in a wide variety of settings, in the long run.

This is what you mention, and I agree with it.

if you and I share the same values, the social situation is very different: if I help you achieve your aims, then that’s a success, in terms of achieving my aims too. Titting constitutes winning in and of itself — there’s no need for a tat in reward. For this reason, we should expect very different norms than we are used to be optimal: giving and helping others will be a good thing to do much more often than it would be if we were all self-interested.

One of the incredible strengths of the EA community is that we all share values and share the same end-goals. This gives us a remarkable potential for much more in-depth cooperation than is normal in businesses or other settings where people are out for themselves. So next time you talk to another effective altruist, ask them how you can help them achieve their aims. It can be a great way of achieving what you value.

I really think altruism/value-alignment is a strength, and a group would lose a lot of efficiency by not valuing it.

(Of course, it's not the only thing that matters)

Comment by Lorenzo (Lorenzo Buonanno) on Are "Bad People" Really Unwelcome in EA? · 2022-08-09T18:52:05.190Z · EA · GW

Rather than say I'm not altruistic, I mostly mean that I'm not impartial to my own welfare/wellbeing/flourishing

To me, those are very different claims!

10% is not that big an ask (I can sacrifice that much personal comfort)

That's very relative! It's more than what the median EA gives, it's way more than what the median non-EA gives. When I talk to non-EA friends/relatives about giving, the thought of giving any% is seen as unimaginably altruistic.

Even people donating 50% are not donating 80%, and some would say it's not that big of an ask.
IMHO, claiming that only people making huge sacrifices and valuing their own wellbeing at 0 can be considered "altruists" is a very strong claim that doesn't match how the word is used in practice.

As Wikipedia says:

Altruism is the principle and moral practice of concern for happiness of other human beings or other animals ...

Comment by Lorenzo (Lorenzo Buonanno) on AMA: Ought · 2022-08-09T18:34:43.355Z · EA · GW

Wow, super happy to hear that, thanks!

Comment by Lorenzo (Lorenzo Buonanno) on Are "Bad People" Really Unwelcome in EA? · 2022-08-09T18:09:52.528Z · EA · GW

But I would reorient my career to work on the most pressing challenges confronting humanity given my current/accessible skill set. I quit my job as a web developer, I'm going back to university for graduate study and plan to work on AI safety and digital minds.


I think this is very admirable and wish you success!
If indeed you're acting exactly like someone who straightforwardly wanted to improve the world altruistically, that's what matters :)

Edit: oh I see you were also donating 10%, that's also very altruistic! (At least from an outside view, I trust you on your motivations)

Comment by Lorenzo (Lorenzo Buonanno) on AMA: Ought · 2022-08-09T18:05:37.498Z · EA · GW

Thanks so much, and kudos for sharing the LessWrong post; even if it's unjustifiably uncharitable, it's an interesting perspective.

Comment by Lorenzo (Lorenzo Buonanno) on Are "Bad People" Really Unwelcome in EA? · 2022-08-09T17:53:43.228Z · EA · GW

If you were, for instance, a grantmaker, these might look very different.

Strongly upvoted, I would say that for most roles these do look very different.
The "altruism" part of "effective altruism" is something I really value.
I would much rather collaborate with someone who wants to do the most good than with someone who wants to get the most personal glory or status.
For example, someone who cares mostly about personal status will spend much less time helping others, especially in non-legible ways.