Why I'm suss on wellbeing surveys 2023-03-18T07:07:01.261Z
Dean Karlan is now Chief Economist of USAID 2023-01-24T09:19:36.839Z
Why I gave AUD$12,573 to Innovations For Poverty Action 2022-11-29T00:56:20.347Z
The act of giving itself has positive impact 2021-07-17T16:27:09.236Z


Comment by Henry Howard on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T21:44:17.006Z · EA · GW

Extremely cringe article.

The argument that AI will inevitably kill us has never been well-formed and he doesn't propose a good argument for it here. No-one has proposed a reasonable scenario by which immediate, unpreventable AI doom will happen (the protein nanofactories-by-mail idea underestimates the difficulty of simulating quantum effects on protein behaviour).

A human dropped into a den of lions won't immediately become its leader just because the human is more intelligent.

Comment by Henry Howard on Why I'm suss on wellbeing surveys · 2023-03-20T21:20:12.887Z · EA · GW

The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.

Comment by Henry Howard on Why I'm suss on wellbeing surveys · 2023-03-20T21:09:14.553Z · EA · GW

No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.

Comment by Henry Howard on A personal take on whether AGI would lead to an existential catastrophe · 2023-03-20T02:49:02.182Z · EA · GW

I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences -> diamondoid nanobots scenario

Consider entering your post in this competition:

Comment by Henry Howard on Why I'm suss on wellbeing surveys · 2023-03-18T23:00:59.984Z · EA · GW

I agree that revealed preference and survey responses can differ. Unless WELLBYs take account of revealed preferences, they'll fail to predict what people actually want.

Comment by Henry Howard on Cholesterol and Blood Pressure as Neglected Dietary Interventions? · 2023-02-27T00:01:50.546Z · EA · GW

"ingestion of said natural sources does not seem to include the side effects from their synthesized forms"

Can you provide a source for this?

Comment by Henry Howard on What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk? · 2023-02-12T12:29:15.507Z · EA · GW

I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.

I think shifting focus from tractable, measurable issues like global health and development to issues that - while critical - are impossible to reliably affect, might be really bad.

Comment by Henry Howard on Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey · 2023-02-10T13:23:35.669Z · EA · GW

Thanks for this. It's important to give to rescue and relief efforts when disasters happen, in addition to giving to development efforts in the good times, so that communities are less vulnerable when disaster strikes.

The information you've provided here is really valuable. Thank you. It will inform how I donate.

Comment by Henry Howard on Spreading messages to help with the most important century · 2023-01-29T11:56:56.185Z · EA · GW

I don't like this post and I don't think it should be pinned to the forum front page.

A few reasons:

  1. The general message of "go and spread this message, this is the way to do it" is too self-assured and unquestioning. It appears cultish. It's off-putting to have this as the first thing that forum visitors see.

  2. The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it's not clear which messages you think should be spread. The only two I could see are "relate it to Skynet" and "even if AI looks safe it might not be".

  3. Too many prerequisites: this post refers to five or ten other posts as a "this concept is properly explained here" thing. Many of these posts reference further posts. This is a red flag to me of poor writing and/or poor ideas. Either a) your ideas are so complex that they do indeed require many thousands of words to explain (in which case, fine), b) they're not that complex but aren't being communicated well, or c) bad ideas are being obscured in a tower of readings that gatekeeps critics away. I'd like to see the actual ideas you're referring to expressed clearly, instead of references to other posts.

  4. Having this pinned to the front page further reinforces the disproportionate focus that AI Safety gets on the forum

Comment by Henry Howard on Demodex mites: Large and neglected group of wild animals · 2023-01-27T11:07:39.312Z · EA · GW
Comment by Henry Howard on Forum + LW relationship: What is the effect? · 2023-01-23T23:02:12.335Z · EA · GW

My gut feeling is that LessWrong is cringe and the heavy link to the Effective Altruism forum is making the forum cringe.

Trying to explain this feeling I'd say some features I don't like are:

  • Ignoring emotional responses and optics in favour of pure open dialogue. Feels very New Atheist.
  • The long pieces of independent research that are extremely difficult to independently verify and which often defer to other pieces of difficult-to-verify independent research.
  • Heavy use of expected value calculations rather than emphasising the uncertainty and cluelessness around a lot of our numbers.
  • The more-karma-more-votes system that encourages an echo chamber.
Comment by Henry Howard on Finding bugs in GiveWell's top charities · 2023-01-23T22:48:57.849Z · EA · GW

Can you say why you feel that longtermism suffers from less cluelessness than you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves' talk, but it doesn't seem to address this. She refers to "reducing the chance of premature human extinction" but doesn't say how.

Comment by Henry Howard on Finding bugs in GiveWell's top charities · 2023-01-23T22:43:50.563Z · EA · GW

Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty, and epitomises the ivory-tower thinking that gets the Effective Altruism community so heavily criticised.

Saying "further research would be good" is easy because it is always true. Doing that research, or waiting for it to be done, is not always practical. I think you are being extremely unreasonable if, before saving someone from dying of malaria, you ask for research to be done on:

  • the long term impacts of bednets on population growth
  • the effects of population growth on deforestation
  • the effects of deforestation on insect populations and welfare
  • specific quantification of insect suffering
Comment by Henry Howard on Livelihood interventions: overview, evaluation, and cost-effectiveness · 2023-01-23T13:46:39.378Z · EA · GW

IPA and J-PAL are underrated. They've had a hand in producing the evidence for many of GiveWell's recommendations. They seem to be significantly better at cause discovery than the Effective Altruism community.

Comment by Henry Howard on Rethink Priorities’ Welfare Range Estimates · 2023-01-23T13:36:46.727Z · EA · GW

Expected values don't seem useful here. Your confidence intervals are huge (the 95% confidence interval for pig suffering capacity relative to humans is 0.005 to 1.031). Because the implications are so different across that spectrum (varying from basically "make the cages even smaller, who cares" at 0.005 to "I will push my nan down the stairs to save a pig" at 1.031), I really don't feel I can draw any conclusions from this.
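To illustrate how little the point estimate constrains action, here's a hypothetical sketch. The interval endpoints 0.005 and 1.031 are from the post; the pigs-per-human framing and the code are entirely my own invention:

```python
# Hypothetical sketch: the same welfare-range number gives wildly different
# practical conclusions at each end of the reported 95% interval.
# Endpoints (0.005, 1.031) are from the post; everything else is illustrative.

def pigs_equivalent_to_one_human(welfare_range: float) -> float:
    """How many pigs' suffering 'adds up to' one human's, at a given welfare range."""
    return 1 / welfare_range

low, high = 0.005, 1.031  # 95% CI for pig welfare range relative to humans

print(pigs_equivalent_to_one_human(low))   # 200.0 pigs per human at the low end
print(pigs_equivalent_to_one_human(high))  # ~0.97: pigs slightly outweigh humans
```

A two-order-of-magnitude swing in the trade-off is exactly why the expected value alone feels uninformative.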

Comment by Henry Howard on [deleted post] 2023-01-19T22:58:10.615Z

Re. 3, I prefer giving now. I think there's a logic to giving later in that money can accrue interest and you can set yourself up to donate more later, but doing good accrues its own interest: helping someone out of poverty today is better than helping them 10 years from now, as it gives them an extra 10 years of better life and 10 years to pay it forward to their community.

Comment by Henry Howard on Evaluating StrongMinds: how strong is the evidence? · 2023-01-19T11:06:43.588Z · EA · GW

A few things that stand out to me that seem dodgy and make me doubt this analysis:

One of the studies you included with the strongest effect (Araya et al. 2003 in Chile, with an effect of 0.9 Cohen's d) uses antidepressants as part of the intervention. Why did you include this? How many other studies included non-psychotherapy interventions?

Some of the studies deal with quite specific groups of people, e.g. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups to psychotherapy in the general population seems unreasonable.

Similarly, the therapies applied vary widely across studies, including "Antenatal Emotional Self-Management Training", group therapy, and one-on-one peer mentors. Lumping these together and drawing conclusions about "psychotherapy" in general seems unreasonable.

With the difficulty of blinding patients to psychotherapy, there is room for the Hawthorne effect to skew the results of each of the 39 studies: patients who know they've received therapy may feel obliged to say it helped.


Other minor things:
- Multiple references to Appendix D. Where is Appendix D?
- Maybe I've missed it, but do you properly list the studies you used somewhere? "Husain, 2017" is not enough info to go by.

Comment by Henry Howard on How many people are working (directly) on reducing existential risk from AI? · 2023-01-18T06:51:55.423Z · EA · GW
Comment by Henry Howard on List of cause areas that EA should potentially prioritise more · 2022-12-20T00:19:14.038Z · EA · GW

Most of these seem intractable and many have lots of people working on them already.

The benefit of bed nets and vitamin A supplementation is that they are proven solutions to neglected problems.

Comment by Henry Howard on Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible · 2022-12-11T22:28:33.936Z · EA · GW

"Subjecting countless animals to a lifetime of suffering" probably describes the life of the average bird in the Amazon (struggling to find food and shelter, avoid predators, and protect its young) or the average fish/shrimp in the ocean.

If you argue that introducing animals to other planets will cause net suffering, then it seems to follow that we should eliminate natural ecosystems here on Earth.

Comment by Henry Howard on Why did CEA buy Wytham Abbey? · 2022-12-10T23:57:01.538Z · EA · GW

I think this was a terrible idea.

I think you've overestimated the value of a dedicated conference centre. The important ideas in EA so far haven't come from conversations over tea and scones at conference centres but are either common sense ("do the most good", "the future matters") or have come from dedicated field trials and RCTs.

I also think you've underestimated the damage this will do to the EA brand. The hummus and baguettes signal earnestness. An abbey signals scam.

I'm confident that this will be remembered as one of CEA's worst decisions.

Comment by Henry Howard on Longtermism and Uncertainty · 2022-12-08T13:52:23.097Z · EA · GW

Strong agree. I think the EA community far overestimates its ability to predictably affect the future, particularly the far future.

Comment by Henry Howard on Visualizing the development gap · 2022-12-08T11:07:47.815Z · EA · GW

Opportunities that development economists have missed?

The general ideas that Hauke suggests in the appendix are things like liberalisation, freeing trade, and more open migration. These ideas have been fiercely studied and debated before. Organisations like the World Trade Organisation and the World Bank are built around them. The difficulty of testing and implementing these ideas is part of what drove the rise of the randomistas.

I think the "~4 person-years" idea is delusional and arrogant.

Comment by Henry Howard on Open Thread: October — December 2022 · 2022-12-08T02:52:58.054Z · EA · GW

This is very inspiring. I think you're making an incredibly positive impact on the world, not just through charity but also by inspiring those around you. Brilliant!

Comment by Henry Howard on Open Thread: October — December 2022 · 2022-12-08T02:36:09.060Z · EA · GW

Really cool idea. I'll be watching eagerly

Comment by Henry Howard on Visualizing the development gap · 2022-12-08T00:52:32.273Z · EA · GW

Good portrait of the problem. The solution isn't obvious to me.

I'm very skeptical of the suggestions from the Halstead and Hillebrandt post. It seems unlikely that a "~4 person-year research effort" could discover the key to economic growth in developing countries when the entire field of development economics has been trying to solve this problem for decades.

Comment by Henry Howard on [deleted post] 2022-12-08T00:44:14.748Z

I agree with the general premise of earning to give through entrepreneurship.

I've never been very convinced by the talent-constraint concept. With the right wage you can hire talent. I think the push away from earning to give has been a mistake.

Comment by Henry Howard on Why development aid is a really exciting field · 2022-12-08T00:37:35.010Z · EA · GW


I think that the allocation of government aid doesn't get enough attention from effective altruists. Government aid budgets are an enormous pool of money and often don't seem to be spent in an evidence-based way. Huge potential for positive change here.

Comment by Henry Howard on Preventing atherosclerosis, the easiest way to improve your life expectancy? · 2022-11-30T11:52:26.937Z · EA · GW

It seems like every now and again someone suggests cardiovascular disease as a potential high-impact cause area on the EA forum. The problem is tractability. It's really hard to convince people to eat better, exercise more and stop smoking. Doctors spend a lot of time trying to do this and billions have been spent on public health campaigns trying to convince people to do this. The medications that treat cholesterol, hypertension, and diabetes are among the most commonly prescribed in the world already.

You've identified a serious problem but I don't see a cost-effective solution

Comment by Henry Howard on Come get malaria with me? · 2022-11-30T11:39:44.761Z · EA · GW

Great work. I got malaria in 2016 for a clinical trial of a novel anti-malarial (results published here). I was paid AUD$2880 and gave it all to the Against Malaria Foundation. It's one of the best things I've ever done.

Comment by Henry Howard on Where are you donating this year, and why? (Open thread) · 2022-11-29T01:18:27.251Z · EA · GW

Love this. I also give large amounts to GiveWell charities and smaller amounts to local charities. Good charities deserve funds. Great charities just deserve them more.

Comment by Henry Howard on Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight · 2022-11-28T21:51:05.360Z · EA · GW

Thanks for doing this. Post is too long, could have been dot points. I want to see more TL;DRs like this

Comment by Henry Howard on If you received FTX grant money you should return it · 2022-11-28T09:11:58.621Z · EA · GW

Thanks for saying this. One complexity is that there are people who will have already rearranged their lives to work on Future-Fund-funded projects, so handing back the money will leave them worse off than when they started. I can understand, then, why they'd be unwilling to give the money back.

But I share your distaste for the dialogue and posts I've seen around the issue. People obviously have a heavy vested interest in keeping this money and this will bias some people's moral reasoning. I think it's a bad look for the community. Disappointing.

Comment by Henry Howard on Don’t just give well, give WELLBYs: HLI’s 2022 charity recommendation · 2022-11-24T12:22:19.188Z · EA · GW

Extremely skeptical of this. Reading through Strong Minds' evidence base it looks like you're leaning very heavily on 2 RCTs of a total of about 600 people in Uganda, in which the outcomes were a slight reduction in a questionable mental health metric that relies on self-reporting.

You're making big claims about your intervention including that it is a better use of money than saving the lives of children. I hope you're right, otherwise you might be doing a lot of harm.

Comment by Henry Howard on Insect farming might cause more suffering than other animal farming · 2022-11-01T14:36:58.200Z · EA · GW

The idea that keeping 5500 crickets in farmed insect conditions and then killing them is equivalent to ~1500 days of a human's suffering (the expected value from your analysis) seems very high and wildly incompatible with the behaviours and beliefs of everyone I've ever met, but that doesn't mean it's not true.

My guess is that your suffering per day of life is too high (an expected value of 0.7! They seem to have a chill life, just bouncing around and eating) and that the sentience multiplier is too high (I think most people would give up a lot more than the torture of 6000 crickets for a day to prevent the torture of a human for a day).

Comment by Henry Howard on Teaching EA through superhero thought experiments · 2022-10-29T05:25:02.060Z · EA · GW

This is a really interesting idea. There's a Saturday Morning Breakfast Cereal comic with a similar premise:

Comment by Henry Howard on Ask (Everyone) Anything — “EA 101” · 2022-10-26T22:13:05.608Z · EA · GW

My impression was that the top charities are scaling as fast as they can. Hopefully they'll soon have room for hundreds of millions of dollars of extra funding.

Comment by Henry Howard on Ask (Everyone) Anything — “EA 101” · 2022-10-26T21:52:27.583Z · EA · GW

This is a great question, and the same should be asked of governments (as in: "why doesn't the UK aid budget simply all go to mosquito nets?")

A likely explanation for why the Gates Foundation doesn't give to GiveWell's top charities is that those charities don't currently have much room for more funding (GiveWell had to roll over funding last year because they couldn't spend it all). A recent blog post suggests they may have more room for funding soon.

A likely explanation for why the Gates Foundation doesn't give to GiveDirectly is that they don't yet see strong enough evidence for the effectiveness of unconditional cash transfers, particularly in the long term or at the societal level (a Cochrane review from this year suggests slight short-term benefits:

Comment by Henry Howard on Ask (Everyone) Anything — “EA 101” · 2022-10-26T21:33:02.746Z · EA · GW

Your idea that "cost-effectiveness doesn't drop off sharply after GiveWell's top charities are funded" depends heavily on the effectiveness and scalability of GiveDirectly's unconditional cash transfers.

I think EAs tend to be overly certain about GiveDirectly's effectiveness and scalability, given that the Cochrane review that you mention is unable to conclude much about unconditional cash transfers.

Comment by Henry Howard on How many people die from the flu? (OWID) · 2022-10-25T07:15:18.523Z · EA · GW

Would be worth using Disability-Adjusted Life Years rather than deaths in this sort of analysis because influenza is most deadly in the elderly and those with end-stage lung disease or severe immunocompromise, who may not have many DALYs left even if they are protected from influenza.

Any way you cut it, though, there are probably great cost-effective interventions we could fund to save precious DALYs, like expanding access to flu vaccination.

Comment by Henry Howard on My experience experimenting with a bunch of antidepressants I'd never heard of · 2022-10-23T01:36:20.916Z · EA · GW

Inspiring to see someone taking such an organised, systematic approach to their health. Inspiring also to see someone sharing their mental health struggles openly and in a constructive way.

I feel like I can't draw many conclusions from this data because of the big risk of confounding factors (maybe an external event happened while you were on one medication and not when on another), because drugs tend to work quite differently in different people, because N=1 and because there didn't appear to be blinding.

Anecdotal evidence has its place, though. Please keep up the good work.

Comment by Henry Howard on [deleted post] 2022-10-13T23:09:46.018Z

Whatever the benefit of this is, it needs to be weighed against the potential for negative PR. I can imagine the articles and posts discussing EA's movement into eugenics and I don't think they'd be very kind

Comment by Henry Howard on EA and the Chronic Pain problem/solution · 2022-10-11T07:04:34.040Z · EA · GW

It's optimistic to hope that chronic pain can be cured as easily as writing your problems on a piece of paper and ripping it up. It probably only works for some people, and for many others the suggestion would come across as condescending and probably make matters worse.

This might be a useful tool in the chronic pain management arsenal, along with CBT (which is already a staple chronic pain treatment) and other mindset-based approaches like that of Dr John Sarno.

Comment by Henry Howard on Can we actually confirm that a vegan diet is compatible with high intellectual output? · 2022-10-05T07:53:34.515Z · EA · GW

I'd be pretty cautious about putting much weight on those experiences of yourself or your friend that you've mentioned. Doctors see people every day who swear that crystals cured their arthritis or that medication A works for them but not medication B (when B is just A with a different brand name). I've learnt to become very skeptical of the patterns I recognise in my own health. I've had experiences where I could have sworn that A was causing B that later turned out to be wrong.

As it is, I don't see why your prior would be "vegan diets make thinking worse" rather than "vegan diets don't affect thinking" (my own suspicion) or even "vegan diets make thinking better" (I'm sure someone out there has anecdotes supporting this).

Comment by Henry Howard on The Domestication of Zebras · 2022-09-09T11:56:47.076Z · EA · GW

Probability that this is useful = Prob(that you can domesticate these zebras) * Prob(that a disaster happens that knocks out humanity's ability to make mechanised vehicles while this set of zebras exists) * Prob(that that disaster doesn't also kill these zebras) * Prob(that the humans find and figure out how to use the zebras) * Prob(that these zebras meaningfully increase industrial progress)
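For illustration, the conjunctive estimate above can be sketched in a few lines. Every probability below is an invented placeholder, chosen generously, just to show how fast a chain of conditions multiplies down:

```python
# Back-of-the-envelope for the conjunctive estimate above.
# All five probabilities are made-up illustrative numbers, not claims.
p_domesticate = 0.1     # these zebras can actually be domesticated
p_disaster = 0.001      # a disaster removes mechanised vehicles while the herd exists
p_herd_survives = 0.5   # that disaster spares the zebras
p_found_and_used = 0.3  # surviving humans find them and figure out how to use them
p_meaningful = 0.2      # the zebras meaningfully speed industrial recovery

p_useful = (p_domesticate * p_disaster * p_herd_survives
            * p_found_and_used * p_meaningful)
print(f"{p_useful:.0e}")  # 3e-06: even generous inputs collapse toward zero
```

Any intervention whose value rests on five independent conditionals needs very large payoffs to be competitive.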

Upvote for creativity

Comment by Henry Howard on Spending Update 2022 · 2022-07-20T01:01:28.280Z · EA · GW

This is really inspiring. I love the openness about your income and spending. The amount you donate is incredible. I don't know how you lived on $688 for food for a year.

Comment by Henry Howard on Report on Social Returns to Productivity Growth · 2022-07-16T00:37:47.989Z · EA · GW

One problem with averaging all R&D in the world is that some of it might be much more effective at improving economic growth than other research. I don't know where you get the $2 trillion global annual research spend value but I wonder how much of this is things like:

  • focus group research on whether people prefer breakfast cereal A to B
  • market research on which social media algorithm is best for marketing product X
  • research on producing better tank armour, then research on how to produce missiles to pierce that armour and make it redundant and so on.

It seems like a more targeted approach to R&D, specifically aiming at improving economic growth, could be orders of magnitude more effective than "average" R&D.

Comment by Henry Howard on Slowing down AI progress is an underexplored alignment strategy · 2022-07-13T08:59:02.382Z · EA · GW

In the scenario where AGI won't be malevolent, slowing progress is bad.

In the scenario where AGI will be malevolent but we can fix that with more research, slowing progress is very good.

In the scenario where AGI will be malevolent and research can't do anything about that, it's irrelevant.

What's the hedge position?

Comment by Henry Howard on One Million Missing Children · 2022-07-13T03:27:38.822Z · EA · GW

I don't take these polls very seriously because it's ambiguous whether the "ideal number of children" accounts for the financial/work/time costs.

Lots of people have a different idea of what the "ideal" is vs. what they want in practice.

Comment by Henry Howard on Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments · 2022-07-12T09:13:17.434Z · EA · GW

And the GWWC pledge seems to sit at a nice balance point between those two, where the cost is not so off-putting that no-one takes the pledge, and not so non-committal that it's meaningless.