Comments
Extremely cringe article.
The argument that AI will inevitably kill us has never been well-formed and he doesn't propose a good argument for it here. No-one has proposed a reasonable scenario by which immediate, unpreventable AI doom will happen (the protein nanofactories-by-mail idea underestimates the difficulty of simulating quantum effects on protein behaviour).
A human dropped into a den of lions won't immediately become its leader just because the human is more intelligent.
The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.
No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.
I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences -> diamondoid nanobots scenario.
Consider entering your post in this competition: https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize
I agree that revealed preference and survey responses can differ. Unless WELLBYs take account of revealed preferences, they'll fail to predict what people actually want.
"ingestion of said natural sources does not seem to include the side effects from their synthesized forms"
Can you provide a source for this?
I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.
I think shifting focus from tractable, measurable issues like global health and development to issues that - while critical - are impossible to reliably affect, might be really bad.
Thanks for this. It's important to give to rescue and relief efforts when disasters happen, in addition to giving to development efforts in the good times so that communities are less vulnerable to disasters.
The information you've provided here is really valuable. Thank you. It will inform how I donate.
I don't like this post and I don't think it should be pinned to the forum front page.
A few reasons:
- The general message of "go and spread this message, this is the way to do it" is too self-assured and unquestioning. It appears cultish. It's off-putting to have this as the first thing that forum visitors will see.
- The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it's not clear which messages you think should be spread. The only two I could see are "relate it to Skynet" and "even if AI looks safe it might not be".
- Too many prerequisites: this post refers to five or ten other posts as a "this concept is properly explained here" thing. Many of those posts reference further posts. This is a red flag to me of poor writing and/or poor ideas. Either a) your ideas are so complex that they really do require many thousands of words to explain (in which case, fine), b) they're not that complex but aren't being communicated well, or c) bad ideas are being obscured in a tower of readings that gatekeeps critics away. I'd like to see the actual ideas you're referring to expressed clearly, instead of referring to other posts.
- Having this pinned to the front page further reinforces the disproportionate focus that AI safety gets on the forum.

My gut feeling is that LessWrong is cringe and that its heavy links to the Effective Altruism Forum are making this forum cringe too.
Trying to explain this feeling, I'd say some features I don't like are:
- Ignoring emotional responses and optics in favour of pure open dialogue. Feels very New Atheist.
- The long pieces of independent research that are extremely difficult to independently verify and which often defer to other pieces of difficult-to-verify independent research.
- Heavy use of expected value calculations rather than emphasising the uncertainty and cluelessness around a lot of our numbers.
- The more-karma-more-votes system that encourages an echo chamber.
Can you say why you feel that longtermism suffers less from cluelessness than you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves' talk, but it doesn't seem to address this. She refers to "reducing the chance of premature human extinction" but doesn't say how.
Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty, and it epitomises the ivory-tower thinking that gets the Effective Altruism community so heavily criticised.
Saying "further research would be good" is easy because it is always true. Doing that research or waiting for it to be done is not always practical. I think you are being extremely unreasonable if, before helping someone die of malaria you ask for research to be done on:
- the long term impacts of bednets on population growth
- the effects of population growth on deforestation
- the effects of deforestation on insect populations and welfare
- specific quantification of insect suffering
IPA and J-PAL are underrated. They've had a hand in producing the evidence for many of GiveWell's recommendations. They seem to be significantly better at cause discovery than the Effective Altruism community.

The use of expected value doesn't seem useful here. Your confidence intervals are huge (the 95% confidence interval for pig suffering capacity relative to humans runs from 0.005 to 1.031). Because the implications are so different across that spectrum (varying from basically "make the cages even smaller, who cares" at 0.005 to "I will push my nan down the stairs to save a pig" at 1.031), it really doesn't feel like I can draw any conclusions from this.
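To illustrate what I mean (the midpoint "expected value" and the 0.1 decision threshold below are my own made-up stand-ins, not numbers from your report):

```python
# Rough sketch: the same point estimate is consistent with wildly different
# conclusions across the reported 95% CI (0.005 to 1.031).
# The midpoint and the 0.1 decision threshold are made-up assumptions.

low, high = 0.005, 1.031            # reported 95% CI for pig/human suffering capacity
point_estimate = (low + high) / 2   # naive midpoint stand-in for an "expected value"
threshold = 0.1                     # hypothetical: above this, pig welfare dominates the decision

for value in (low, point_estimate, high):
    verdict = "pig welfare dominates" if value > threshold else "pig welfare barely registers"
    print(f"capacity = {value:.3f} -> {verdict}")
```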
Re. 3, I prefer giving now. I think there's a logic to giving later in that money can accrue interest and you can set yourself up to donate more later, but doing good accrues its own interest: helping someone out of poverty today is better than helping them 10 years from now, as it gives them an extra 10 years of better life and 10 years to pay it forward to their community.
A few things that stand out to me that seem dodgy and make me doubt this analysis:
One of the studies you included with the strongest effects (Araya et al. 2003 in Chile, with an effect size of 0.9 Cohen's d) uses antidepressants as part of the intervention. Why did you include this? How many other studies included non-psychotherapy interventions?
Some of the studies deal with quite specific groups of people, e.g. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups to psychotherapy in the general population seems unreasonable.
Similarly, the therapies vary widely across studies, including "Antenatal Emotional Self-Management Training", group therapy, and one-on-one peer mentors. Lumping these together and drawing conclusions about "psychotherapy" in general seems unreasonable.
Given the difficulty of blinding patients to psychotherapy, there seems to be room for the Hawthorne effect to skew the results of each of the 39 studies, with patients who know they've received therapy feeling obliged to say it helped.
Other minor things:
- Multiple references to Appendix D. Where is Appendix D?
- Maybe I've missed it, but do you properly list the studies you used somewhere? "Husain, 2017" is not enough info to go by.

Most of these seem intractable and many have lots of people working on them already.
The benefit of bed nets and vitamin A supplementation is that they are proven solutions to neglected problems.
"Subjecting countless animals to a lifetime of suffering" probably describe the life of the average bird in the amazon (struggling to find food, shelter, avoid predators, protect its children) or the average fish/shrimp in the ocean.
If you argue that introducing animals to other planets will cause net suffering then it seems to follow that we should eliminate natural ecosystems here on earth
I think this was a terrible idea
I think you've overestimated the value of a dedicated conference centre. The important ideas in EA so far haven't come from conversations over tea and scones at conference centres but are either common sense ("do the most good", "the future matters") or have come from dedicated field trials and RCTs.
I also think you've underestimated the damage this will do to the EA brand. The hummus and baguettes signal earnestness. An abbey signals scam.
I'm confident that this will be remembered as one of CEA's worst decisions.
Strong agree. I think the EA community far overestimates its ability to predictably affect the future, particularly the far future.
Opportunities that development economists have missed?
The general ideas that Hauke suggests in the appendix are things like liberalisation, freeing trade, more open migration. They're ideas that have been fiercely studied and debated before. Organisations like the World Trade Organisation and The World Bank are built around these ideas. The difficulty in testing and implementing these ideas is part of what drove the rise of the randomistas.
I think the "~4 person-years" idea is delusional and arrogant.
This is very inspiring. I think you're making an incredibly positive impact on the world, not just through charity but also by inspiring those around you. Brilliant!
Really cool idea. I'll be watching eagerly
Good portrait of the problem. The solution isn't obvious to me.
I'm very skeptical of the suggestions from the Halstead and Hillebrandt post. It seems unlikely that a "~4 person-year research effort" could discover the key to economic growth in developing countries when the entire field of development economics has been trying to solve this problem for decades.
I agree with the general premise of earning to give through entrepreneurship.
I've never been very convinced by the talent-constraint concept. With the right wage you can hire talent. I think the push away from earning to give has been a mistake.
Great!
I think that the allocation of government aid doesn't get enough attention from effective altruists. Government aid budgets are an enormous pool of money and often don't seem to be spent in an evidence-based way. Huge potential for positive change here.
It seems like every now and again someone suggests cardiovascular disease as a potential high-impact cause area on the EA forum. The problem is tractability. It's really hard to convince people to eat better, exercise more and stop smoking. Doctors spend a lot of time trying to do this and billions have been spent on public health campaigns trying to convince people to do this. The medications that treat cholesterol, hypertension, and diabetes are among the most commonly prescribed in the world already.
You've identified a serious problem but I don't see a cost-effective solution
Great work. I got malaria in 2016 as part of a clinical trial of a novel anti-malarial (results published here). I was paid AUD$2880 and gave it all to the Against Malaria Foundation. It's one of the best things I've ever done.
Love this. I also give large amounts to GiveWell charities and smaller amounts to local charities. Good charities deserve funds. Great charities just deserve them more.
Thanks for doing this. Post is too long, could have been dot points. I want to see more TL;DRs like this
Thanks for saying this. One complexity is that there are people who will have already rearranged their lives to work on Future-Fund-funded projects, so handing back the money will leave them worse off than when they started. I can understand, then, why they'd be unwilling to give the money back.
But I share your distaste for the dialogue and posts I've seen around the issue. People obviously have a heavy vested interest in keeping this money and this will bias some people's moral reasoning. I think it's a bad look for the community. Disappointing.
Extremely skeptical of this. Reading through Strong Minds' evidence base, it looks like you're leaning very heavily on two RCTs with a total of about 600 participants in Uganda, in which the outcome was a slight reduction in a questionable mental health metric that relies on self-reporting.
You're making big claims about your intervention including that it is a better use of money than saving the lives of children. I hope you're right, otherwise you might be doing a lot of harm.
The idea that keeping 5500 crickets in farmed insect conditions and then killing them is equivalent to ~1500 days of a human's suffering (the expected value from your analysis) seems very high and wildly incompatible with the behaviours and beliefs of everyone I've ever met, but that doesn't mean it's not true.
My guess is that your suffering/day of life is too high (an expected value of 0.7! They seem to have a chill life, just bouncing around and eating) and that the sentience multiplier is too high (I think most people would give up a lot more than the torture of 6000 crickets for a day to prevent the torture of a human for a day).
This is a really interesting idea. There's a Saturday Morning Breakfast Cereal comic with a similar premise: https://www.smbc-comics.com/comic/2011-07-13
My impression was that the top charities are scaling as fast as they can. Hopefully soon they'll have room for hundreds of millions of dollars of additional funding.
This is a great question, and the same should be asked of governments (as in: "why doesn't the UK aid budget simply all go to mosquito nets?")
A likely explanation for why the Gates Foundation doesn't give to GiveWell's top charities is that those charities don't currently have much room for more funding (GiveWell had to roll over funding last year because they couldn't spend it all; a recent blog post suggests they may have more room for funding soon: https://blog.givewell.org/2022/07/05/update-on-givewells-funding-projections/)
A likely explanation for why the Gates Foundation doesn't give to GiveDirectly is that they don't see strong enough evidence yet for the effectiveness (particularly in the long-term or at the societal level) of unconditional cash transfers (A Cochrane review from this year suggests slight short-term benefits: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD011135.pub3/full)
Your idea that "cost-effectiveness doesn't drop off sharply after GiveWell's top charities are funded" depends heavily on the effectiveness and scalability of GiveDirectly's unconditional cash transfers.
I think EAs tend to be overly certain about GiveDirectly's effectiveness and scalability, given that the Cochrane review that you mention is unable to conclude much about unconditional cash transfers.
It would be worth using Disability-Adjusted Life Years (DALYs) rather than deaths in this sort of analysis, because influenza is most deadly in the elderly and in those with end-stage lung disease or severe immunocompromise, who may not have many healthy life years left even if they are protected from influenza.
Any way you cut it, though, there are probably great, cost-effective interventions we could do to save precious DALYs, like expanding access to flu vaccination.
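As a toy example of why the metric choice matters (all numbers below are made up by me, purely to illustrate):

```python
# Hypothetical numbers showing how deaths averted and DALYs averted can rank
# the same intervention differently. remaining_healthy_years is a crude
# stand-in for DALYs averted per death prevented.

groups = {
    "frail elderly with end-stage lung disease": {"deaths_averted": 100, "remaining_healthy_years": 2},
    "otherwise-healthy younger adults":          {"deaths_averted": 20,  "remaining_healthy_years": 35},
}

for name, g in groups.items():
    dalys_averted = g["deaths_averted"] * g["remaining_healthy_years"]
    print(f"{name}: {g['deaths_averted']} deaths averted, ~{dalys_averted} DALYs averted")
```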
Inspiring to see someone taking such an organised, systematic approach to their health. Inspiring also to see someone sharing their mental health struggles openly and in a constructive way.
I feel like I can't draw many conclusions from this data because of the big risk of confounding factors (maybe an external event happened while you were on one medication and not when on another), because drugs tend to work quite differently in different people, because N=1, and because there didn't appear to be blinding.
Anecdotal evidence has its place, though. Please keep up the good work.
Whatever the benefit of this is, it needs to be weighed against the potential for negative PR. I can imagine the articles and posts discussing EA's move into eugenics, and I don't think they'd be very kind.
It's optimistic to hope that chronic pain can be cured as easily as writing problems on a piece of paper and ripping it up. That probably works for some people, but for many others the suggestion to do this would come across as condescending and could make matters worse.
This might be a useful tool in the chronic pain management arsenal, along with CBT (which is already a staple chronic pain treatment) and other mindset-based approaches like that of Dr John Sarno.
I'd be pretty cautious about putting much weight on those experiences of yours or your friend's that you've mentioned. Doctors see people every day who swear that crystals cured their arthritis or that medication A works for them but not medication B (when B is just A with a different brand name). I've learnt to be very skeptical of the patterns I recognise in my own health. I've had experiences where I could have sworn that A was causing B that later turned out to be wrong.
As it is, I don't see why your prior would be "vegan diets make thinking worse" rather than "vegan diets don't affect thinking" (my own suspicion) or even "vegan diets make thinking better" (I'm sure someone out there has their own anecdotes supporting this).
Probability that this is useful = Prob(that you can domesticate these zebras) * Prob(that a disaster happens that knocks out humanity's ability to make mechanised vehicles while this set of zebras exists) * Prob(that that disaster doesn't also kill these zebras) * Prob(that the humans find and figure out how to use the zebras) * Prob(that these zebras meaningfully increase industrial progress)
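Plugging in some made-up numbers (mine, not yours) shows how quickly that product collapses:

```python
# Made-up probabilities for each step in the chain above; none of these come
# from the original post, they're just to show how fast the product shrinks.
probabilities = {
    "zebras can be domesticated": 0.2,
    "a disaster removes mechanised vehicles while these zebras exist": 0.01,
    "that disaster doesn't also kill the zebras": 0.5,
    "humans find and figure out how to use the zebras": 0.3,
    "the zebras meaningfully speed up industrial progress": 0.1,
}

p_useful = 1.0
for p in probabilities.values():
    p_useful *= p

print(f"P(useful) ~= {p_useful:.6f}")  # ~0.00003 with these numbers
```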
Upvote for creativity
This is really inspiring. I love the openness about your income and spending. The amount you donate is incredible. I don't know how you lived on $688 for food for a year.
One problem with averaging all R&D in the world is that some of it might be much more effective at improving economic growth than other research. I don't know where you got the $2 trillion global annual research spend figure, but I wonder how much of it is things like:
- focus group research on whether people prefer breakfast cereal A to B
- market research on which social media algorithm is best for marketing product X
- research on producing better tank armour, then research on how to produce missiles to pierce that armour and make it redundant, and so on.
It seems like a more targeted approach to R&D, specifically aimed at improving economic growth, could be orders of magnitude more effective than "average" R&D (rough sketch below).
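A back-of-the-envelope version of that worry. The $2 trillion figure is from your post; the 10% growth-relevant share is a number I've made up purely for illustration:

```python
# Back-of-the-envelope: if only a fraction of global R&D spend is actually
# aimed at economic growth, averaging over the whole $2T understates the
# per-dollar returns of the growth-relevant slice. The 10% share is a made-up assumption.

total_rnd_spend = 2_000_000_000_000   # ~$2 trillion global annual R&D (figure from the post)
growth_relevant_share = 0.10          # hypothetical fraction actually aimed at growth
growth_benefit = 1.0                  # normalise the total growth benefit attributed to R&D

avg_return_per_dollar = growth_benefit / total_rnd_spend
targeted_return_per_dollar = growth_benefit / (total_rnd_spend * growth_relevant_share)

print(f"targeted R&D looks ~{targeted_return_per_dollar / avg_return_per_dollar:.0f}x better per dollar")
```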
In the scenario where AGI won't be malevolent, slowing progress is bad.
In the scenario where AGI will be malevolent but we can fix that with more research, slowing progress is very good.
In the scenario where AGI will be malevolent and research can't do anything about that, it's irrelevant.
What's the hedge position?
I don't take these polls very seriously because it's ambiguous whether the "ideal number of children" includes the financial/work/time costs.
Lots of people have a different idea of what the "ideal" is vs. what they want in practice.
And the GWWC pledge seems to sit at a nice balance point between those two, where the cost is not so off-putting that no one takes the pledge, and not so non-committal that it's meaningless.