Posts

Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature 2023-02-26T15:05:18.850Z
Cause area: Developmental Cognitive Neuroepidemiology 2022-08-11T20:47:07.117Z
Critique of OpenPhil's macroeconomic policy advocacy 2022-03-24T22:03:16.062Z
Decreasing populism and improving democracy, evidence-based policy, and rationality 2021-07-27T18:14:51.484Z
An evaluation of Mind Ease, an anti-anxiety app 2021-07-26T11:35:30.500Z
Let's Fund Living review: 2020 update 2021-02-12T18:49:52.075Z
Int'l agreements to spend % of GDP on global public goods 2020-11-22T10:33:39.039Z
HaukeHillebrandt's Shortform 2020-04-17T10:13:42.853Z
Growth and the case against randomista development 2020-01-16T10:11:51.136Z
Dataset of Trillion Dollar figures 2020-01-13T13:33:25.067Z
Let’s Fund: annual review / fundraising / hiring / AMA 2019-12-31T14:54:35.968Z
[updated] Global development interventions are generally more effective than climate change interventions 2019-10-02T08:36:27.444Z
New popular science book on x-risks: "End Times" 2019-10-01T07:18:10.789Z
Corporate Global Catastrophic Risks (C-GCRs) 2019-06-30T16:53:31.350Z
Crowdfunding for Effective Climate Policy 2019-05-25T18:17:05.070Z
Nick Bostrom on Sam Harris' podcast 2019-03-19T11:21:09.483Z
[EAGx Talk] Considerations for Fundraising in Effective Altruism 2019-01-15T11:20:46.237Z
EA orgs are trying to fundraise ~$10m - $16m 2019-01-06T13:51:03.483Z
New web app for calibration training funded by the Open Philanthropy Project 2018-12-15T15:18:54.905Z
Impact investing is only a good idea in specific circumstances 2018-12-06T12:13:46.544Z
Effective Altruism in non-high-income countries 2018-11-15T17:18:42.761Z
“The Vulnerable World Hypothesis” (Nick Bostrom’s new paper) 2018-11-09T11:20:42.330Z
Why donate to meta-research? 2018-11-08T09:29:58.740Z
[link] Why donate to (scientific) research? 2018-10-29T11:13:25.026Z
Announcing: "Lets-Fund.org: High-Impact Crowdfunding campaigns" & "Let's Fund #1: A (small) scientific Revolution" 2018-10-25T21:22:14.605Z
A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good 2018-02-18T17:41:31.873Z
69 things that might be pretty effective to fund 2018-01-21T22:47:32.094Z
Some objections and counter arguments against global poverty/health interventions 2015-08-05T09:44:11.863Z
Giving What We Can's response to recent deworming studies 2015-07-23T18:19:59.535Z
Long-lasting insecticide treated nets: $3,340 per life saved, $100 per DALY averted. How is this calculated? 2015-07-13T16:08:20.169Z
An update on Project Healthy Children 2015-06-08T13:36:16.414Z
Room for more funding: Why doesn’t the Gates foundation just close the funding gap of AMF and SCI? 2015-06-03T14:48:07.317Z
Feedback and $2k in funding needed for EA essay competition 2015-05-13T15:13:29.362Z

Comments

Comment by Hauke Hillebrandt (HaukeHillebrandt) on More Centralisation? · 2023-03-06T20:48:15.056Z · EA · GW

Mergers and Acquisitions seems underexplored in EA.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Some Things I Heard about AI Governance at EAG · 2023-03-01T10:10:48.302Z · EA · GW

Very useful overview - thanks for writing this!

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Open Thread: January — March 2023 · 2023-02-25T16:45:07.425Z · EA · GW

Stop free-riding! Voting on new content is a public good, Misha ;P

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Nathan Young's Shortform · 2023-02-17T12:32:25.658Z · EA · GW

Daniel's Heavy Tail Hypothesis (HTH) vs. this recent comment from Brian saying that he thinks the classic piece 'Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness' is still essentially valid.

Seems like Brian is arguing that there are at most 3-4 OOM differences between interventions whereas Daniel seems to imply there could be 8-10 OOM differences?

Similarly here: Valuing research works by eliciting comparisons from EA researchers - EA Forum (effectivealtruism.org)

And Ben Todd just tweeted about this as well.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2023-02-10T17:35:15.630Z · EA · GW

How can you get that new toggle feature / use collapsible content as in this post?

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Jonas Vollmer's Shortform · 2023-01-31T17:36:11.149Z · EA · GW

Relevant calibration game that was recently posted -  I found it surprisingly addictive - maybe they'd be interested in implementing your ideas.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Stephen Clare's Shortform · 2022-12-29T08:56:12.908Z · EA · GW

Meta-level: Great comment - I think we should start more of a discussion around the theoretical, high-level mechanisms of why charities would be effective in the first place; I think there's too much emphasis on evidence of 'do they work'.

I think the main driver of the effectiveness of infectious disease prevention charities like AMF and deworming might be that they solve coordination / public goods problems: if everyone in a certain region uses a health intervention, it is much more effective at driving down overall disease incidence. Because of the tragedy of the commons, people are less likely to buy bed nets themselves.

For micronutrient charities it is lack of information and education - most people  don't know about and don't understand micronutrients.

Lack of information / markets

Flagging that there were charities - DMI and Living Goods - which address these issues, and so, if these mechanisms turn out to explain most of the variance in the cost-effectiveness differences you highlight, then these charities need to be scaled up. I never quite understood why a DMI-like charity with ~zero marginal cost per user couldn't be scaled up more until it's much more cost-effective than all other charities.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Read The Sequences · 2022-12-23T15:35:58.199Z · EA · GW

There is a good Cold Takes blog post on the 'Bayesian mindset' which gets at something related to this in a '~20-minute read rather than the >1000 pages of Rationality: A-Z (aka The Sequences).'

Summary:

This piece is about the in-practice pros and cons of trying to think in terms of probabilities and expected value for real-world decisions, including decisions that don’t obviously lend themselves to this kind of approach.

The mindset examined here is fairly common in the “effective altruist” and “rationalist” communities, and there’s quite a bit of overlap between this mindset and that of Rationality: A-Z (aka The Sequences), although there are some differing points of emphasis.1 If you’d like to learn more about this kind of thinking, this piece presents a ~20-minute read rather than the >1000 pages of Rationality: A-Z.

This piece is a rough attempt to capture the heart of the ideas behind rationalism, and I think a lot of the ideas and habits of these communities will make more sense if you’ve read it, though I of course wouldn’t expect everyone in those communities to think I’ve successfully done this.

If you’re already deeply familiar with this way of thinking and just want my take on the pros and cons, you might skip to Pros and Cons. If you want to know why I'm using the term "Bayesian mindset" despite not mentioning Bayes's rule much, see footnote 3.

 

This piece is about the “Bayesian mindset,” my term for a particular way of making decisions. In a nutshell, the Bayesian mindset is trying to approximate an (unrealistic) ideal of making every decision based entirely on probabilities and values, like this:

Should I buy travel insurance for $10? I think there's about a 1% chance I'll use it (probability - blue), in which case it will get me a $500 airfare refund (value - red). Since 1% * $500 = $5, I should not buy it for $10.

(Two more examples below in case that’s helpful.)

The ideal here is called expected utility maximization (EUM): making decisions that get you the highest possible expected value of what you care about.2 (I’ve put clarification of when I’m using “EUM” and when I’m using “Bayesian mindset” in a footnote, as well as notes on what "Bayesian" refers to in this context, but it isn’t ultimately that important.3)

It’s rarely practical to literally spell out all the numbers and probabilities like this. But some people think you should do so when you can, and when you can’t, use this kind of framework as a “North Star” - an ideal that can guide many decisions even when you don’t do the whole exercise.

Others see the whole idea as much less promising.

I think it's very useful to understand the pros and cons, and I think it's good to have the Bayesian Mindset as one option for thinking through decisions. I think it's especially useful for decisions that are (a) important; (b) altruistic (trying to help others, rather than yourself); (c) “unguided,” in the sense that normal rules of thumb aren’t all that helpful.

In the rest of this piece, I'm going to walk through:

  • The "dream" behind the Bayesian mindset.
    • If we could put the practical difficulties aside and make every decision this way, we'd be able to understand disagreements and debates much better - including debates one has with oneself. In particular, we'd know which parts of these disagreements and debates are debates about how the world is (probabilities) vs. disagreements in what we care about (values).
    • When debating probabilities, we could make our debates impersonal, accountable, and focused on finding the truth. Being right just means you have put the right probabilities on your predictions. Over time, it should be possible to see who has and has not made good predictions. Among other things, this would put us in a world where bad analysis had consequences.
    • When disagreeing over values, by contrast, we could all have transparency about this. If someone wanted you to make a certain decision for their personal benefit, or otherwise for values you didn’t agree with, they wouldn’t get very far asking you to trust them.
  • The "how" of the Bayesian mindset - what kinds of practices one can use to assign reasonable probabilities and values, and (hopefully) come out with reasonable decisions.
  • The pros and cons of approaching decisions this way.
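For concreteness, here is a minimal sketch (mine, not from the quoted piece) of the expected-value comparison in the travel-insurance example above:

```python
# Toy expected-value comparison from the travel insurance example above.
# The numbers come from the example; the helper function is just illustrative.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value of an uncertain payoff."""
    return probability * payoff

refund_ev = expected_value(probability=0.01, payoff=500)  # 1% chance of a $500 refund
insurance_cost = 10

print(f"Expected value of the refund: ${refund_ev:.2f}")  # $5.00
print(f"Buy insurance? {refund_ev > insurance_cost}")     # False: $5 < $10
```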
Comment by Hauke Hillebrandt (HaukeHillebrandt) on Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south. · 2022-12-18T09:08:21.364Z · EA · GW

I really don't understand how distributing nets can keep people in poverty. 

 

There is one paper from 2009 suggesting that, in the short run, eradicating malaria can lower income per capita slightly, but only by a few percentage points, and that in the longer run it raises income. This is because malaria doesn't affect prime-age workers so much.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on MacKenzie Scott's grantmaking data · 2022-12-16T11:16:58.763Z · EA · GW

Question for economists: Would this drive up the price of the remaining bad debt in expectation, so that the marginal utility created by this goes down significantly?

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Why did CEA buy Wytham Abbey? · 2022-12-07T12:24:53.534Z · EA · GW

Agreed - a good way to think about this is that since you get ~5% annual returns on stocks, the annual rent equivalent is ~5% of the property value, and so the opportunity cost is spending ~$750k/y, or ~$62.5k per month, on conference accommodation.
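A rough sketch of that back-of-the-envelope calculation; the ~$15M property value is implied by the $750k/y figure rather than a number stated in the post:

```python
# Back-of-the-envelope opportunity cost of holding the property rather than stocks.
property_value = 750_000 / 0.05   # implied by ~$750k/y at a ~5% return, i.e. ~$15M (assumption)
annual_return = 0.05              # rough long-run stock return assumed in the comment

annual_opportunity_cost = property_value * annual_return   # ~$750k per year
monthly_opportunity_cost = annual_opportunity_cost / 12    # ~$62.5k per month

print(f"~${annual_opportunity_cost:,.0f} per year, ~${monthly_opportunity_cost:,.0f} per month")
```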

Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2022-12-04T19:31:49.443Z · EA · GW

My opinionated and annotated summary / distillation of SBF's account of the FTX crisis, based on recent articles and interviews (particularly this Bloomberg article).

Over the past year, the macroeconomy changed and central banks raised their interest rates, which led to crypto losing value. Then, after a crypto crash in May, Alameda needed billions, fast, to repay its nervous lenders or it would go bust.

According to sources, Alameda’s CEO Ellison said that she, SBF, Gary Wang and Nishad Singh had a meeting re: the shortfall and decided to loan Alameda FTX user funds. If true, they knowingly committed fraud.

SBF’s account is different:

Generally, he didn’t know what was going on at Alameda anymore, despite owing 90% of it. He disengaged because he was busy running FTX and for 'conflict of interest reasons'.[1] 

He didn't pay much attention during the meeting and it didn't seem like a crisis, just a matter of extending a bit more credit to Alameda (from $4B, by $6B[2], to ~$10B[3]). Alameda already traded on margin and still had collateral worth way more than enough to cover the loan, and, despite having been the liquidity provider historically, seemed to become less important over time, as they made up an ever smaller fraction of all trades.

Yet they still had larger limits than other users, who'd get auto-liquidated if their positions got too big and risky. He didn't realize that Alameda's position on FTX had become much more leveraged, and thought the risk was much smaller. Also, a lot of Alameda's collateral was FTT, ~FTX stock, which rapidly lost value.

If FTX had liquidated Alameda's position, Alameda and maybe even its lenders would've gone bust. And even if FTX didn't take direct losses, users would've lost confidence, causing a hard-to-predict cascade of events.

If FTX hadn't margin-called, there was a ~70% chance everything would be OK; but even if not, the downside and risk would have been much smaller, and the hole more manageable.

SBF thought FTX and Alameda’s combined accounts were:

  1. Debt: $8.9B
  2. Assets:
    • Cash: $9B
    • 'Less liquid': $15.4B
    • 'Illiquid': $3.2B

Naively, despite some big liabilities, they should have been able to cover it.

But crucially, they actually had $8B less cash: since FTX didn't have a bank account when it first started, users sent >$5B[4] to Alameda instead, and their bad accounting then double-counted this by crediting both. Many users' funds never moved from Alameda, and FTX users' accounts were credited with a notional balance that did not represent underlying assets held by FTX: users traded with crypto that did not actually exist.

This is why Alameda invested so much, while FTX didn’t have enough money when users tried to withdraw.[5]

They spent $10.75B on:[6]

  1. $4B for VC investments
  2. $2.5B to buy out Binance's investment in FTX (another figure is $3B)
  3. $1.5B for expenses
  4. $1.5B for acquisitions
  5. $1B labeled 'fuckups’[7]
  6. $0.25B for real estate

Even after FTX/Alameda profits (at least $10B[8]) and the VC money they raised ($2B[9] - aside: after raising $400M in Jan, they tried to raise money again in July[10] and then again in Sept.[11]), all this adds up to minus $6.5B. The FT says FTX is short $8B[12] of ~1M users'[13] money. In sum, this was because he didn't realize that they spent way more than they made, paid very little attention to expenses, was really lazy about mental math, and because there was a diffusion of responsibility amongst the leadership.
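A rough BOTEC pulling the figures in this summary together; this is my reconstruction of the arithmetic, and in particular the way the ~$8B accounting hole enters is an assumption on my part, not something stated by SBF:

```python
# Rough reconciliation of the figures cited above (all in $ billions); my reconstruction only.
spending = {
    "VC investments": 4.0,
    "Binance buyout": 2.5,
    "expenses": 1.5,
    "acquisitions": 1.5,
    "'fuckups'": 1.0,
    "real estate": 0.25,
}
total_spent = sum(spending.values())   # $10.75B

profits = 10.0               # FTX/Alameda profits (at least $10B)
vc_raised = 2.0              # VC money raised
double_counted_cash = 8.0    # ~$8B of user funds double-counted via Alameda (see above)

net_before_hole = profits + vc_raised - total_spent      # roughly +$1.25B
net_after_hole = net_before_hole - double_counted_cash   # roughly -$6.75B, close to the -$6.5B cited

print(f"Total spent: ${total_spent}B")
print(f"Net before accounting hole: {net_before_hole:+.2f}B")
print(f"Net after accounting hole:  {net_after_hole:+.2f}B")
```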

While FTX.US was more like a bank, highly regulated and holding as much in reserves as users put in, FTX int'l was an exchange. Legally, exchanges don't lend out users' funds; rather, users themselves lend out their funds to other users (of which Alameda was just one). FTX just facilitated this. An analogy: file-sharing platforms like Napster never upload music illegally themselves, but just facilitate peer-to-peer sharing.

Much more than $1B of user funds (SBF: '~$8B-$10B at its peak'[14]) was opted into peer-to-peer lending / order book margin trading (others say that this was less than $4B[15]; all user deposits were $16B[16]). Also, while parts of the terms of service say that FTX never lends out users' assets, those are overridden by other parts of the terms of service, and he isn't aware that FTX violated the terms of use (see FTX Terms of Service).

For me, the key remaining questions are:

  1. Did many users legally agree to their crypto being lent out without meaning to, by accepting the terms of service, even if they didn’t opt into the lending program? If so, it might be hard to hold FTX legally accountable, especially since they’re in the Bahamas.  
  2. If they did effectively lend out customer funds, did they do it multiple times (perhaps repeatedly since the start of FTX), or just once?
  3. Did FTX make it look like users' money was very secure, as in a highly regulated bank, and that their money wasn't at risk, e.g. by partnering with Visa for crypto debit cards[17] or by blurring the line between FTX.us ('A safe and easy way to get into crypto') and FTX.com?
  4. Did FTX automatically sweep users into opting into peer-to-peer lending?
  6. ^ h/t to Ryan Carey: 'notably some of this could be consistent with macro conditions crushing their financial position, especially the VC investments in crypto.'
  7. ^ I think he might refer to this: archive.ph/ATPHq#selection-1981.172-1981.301
Comment by Hauke Hillebrandt (HaukeHillebrandt) on alexrichard's Shortform · 2022-11-10T09:47:34.409Z · EA · GW

Protect our Future PAC spent unprecedented amounts on Carrick's campaign, and they seem to have spent $1.75M on attack ads against Salinas, which is maybe the biggest 'within party' attack ad budget in a primary. It seems understandable that this can be seen as a norm violation (attack ads are more sticky), and perhaps it's poor 'cooperation with other value systems'.

On the other hand, SBF donated to the House Majority PAC, which financed John Fetterman's campaign.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Cause area: Developmental Cognitive Neuroepidemiology · 2022-10-17T15:14:24.956Z · EA · GW

It received an honorable mention prize, and the winner of the contest had a similar proposal and also commented in this thread. So it's on Open Phil's radar.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Ask (Everyone) Anything — “EA 101” · 2022-10-08T15:37:01.490Z · EA · GW

One recent paper suggests that an estimated additional $200–328 billion per year is required for the various measures of primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this will save 60 million lives. But if you look at just the amount needed in low-income countries for health care - $396B - and divide it by the 16.2 million deaths averted by that spending, it suggests an average cost-effectiveness of ~$25k per death averted.
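A quick sketch of the division behind that figure, using the numbers cited above:

```python
# Arithmetic behind the ~$25k per death averted figure cited above.
low_income_health_spending = 396e9   # $396B needed in low-income countries for health care
deaths_averted = 16.2e6              # 16.2 million deaths averted by that spending

cost_per_death_averted = low_income_health_spending / deaths_averted
print(f"~${cost_per_death_averted:,.0f} per death averted")   # ~$24,444, i.e. roughly $25k
```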

Other global health interventions can be similarly or more effective: a 2014 Lancet article estimates that, in low-income countries, it costs $4,205 to avert a death through extra spending on health[22]. Another analysis suggests that this trend will continue and from 2015-2030 additional spending in low-income countries will avert a death for $4,000-11,000[23].

For comparison, in high-income countries, governments are willing to spend around $6.4 million to prevent a death (a measure called "value of a statistical life")[24]. This is not surprising given that the poorest countries spend less than $100 per person per year on health on average, while high-income countries spend almost $10,000 per person per year[25].

GiveDirectly is a charity that can productively absorb very large amounts of donations at scale, because they give unconditional cash transfers to extremely poor people in low-income countries. A Cochrane review suggests that such unconditional cash transfers "probably or may improve some health outcomes".[21] One analysis suggests that cash transfers are roughly as effective as averting a death for on the order of $10k.

So essentially, cost-effectiveness doesn't drop off sharply after GiveWell's top charities are 'fully funded', and one could spend billions and billions at similar cost-effectiveness; Gates only has ~$100B and only spends ~$5B a year.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Ask (Everyone) Anything — “EA 101” · 2022-10-08T13:54:03.092Z · EA · GW

Yes, that's fine. 

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Room for more funding: Why doesn’t the Gates foundation just close the funding gap of AMF and SCI? · 2022-10-05T14:18:17.069Z · EA · GW

Both percentages converge towards zero because of aestivating aliens, which create an ineliminable, immutable, minimum chance of extinction per century that continues through the aeons and produce a permanent exponential discount rate, thus falsifying longtermism... jk Fin I think you replied to the wrong thread ;P 

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Ask (Everyone) Anything — “EA 101” · 2022-10-05T13:30:27.239Z · EA · GW

I wrote a post about this 7 years ago! Still roughly valid.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on To those who contributed to Carrick Flynn in Oregon CD-6 - Please help now · 2022-10-03T14:15:45.999Z · EA · GW

Maybe it's just a matter of degree, but the Protect our Future PAC spent unprecedented amounts on Carrick's campaign, and (maybe this is more of a principled distinguishing feature) they seem to have spent $1.75M on attack ads against Salinas, which is maybe the biggest 'within party' attack ad budget in a primary. It seems understandable that this can be seen as a norm violation (attack ads are more sticky), and perhaps it's poor 'cooperation with other value systems'.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Quantified Intuitions: An epistemics training website including a new EA-themed calibration app · 2022-10-03T11:07:14.831Z · EA · GW

I might be biased because I had an idea for something very similar, but I think this is amazing and you've hit on something very, very interesting. I found the calibration training game very addictive (in a good way) and actually played it for a few hours.

I think it might be because I play it in a particular way though:

  • I always set it to 90%.
  • Then, I only put in orders of magnitude, even when the prompt and mask don't force the user to do this. So for instance, for 'What percent of the world's population was killed by the 1918 flu pandemic?' I put in: 90% Confidence Interval, Lower Bound: 1%, Upper Bound: 10%. This has two advantages:
  1. I can play the game very quickly - I can do a rough BOTEC in my head.
  2. I'm almost always accurate but not very precise, and when I'm not, I'm literally orders of magnitude off and I get this huge prediction error signal - and that is very memorable (and I feel a bit dumb! :D). This might also guide people towards those parts of my model of the world where I have the biggest gaps in my knowledge (certain scientific subjects). 'It's better to be roughly right than precisely wrong'. I think you could implement a spaced repetition feature based on how many orders of magnitude you're off, where the more OOMs you're off, the earlier it prompts you with the same question again (so if you're say >3 orders of magnitude off it prompts you within the same session; if you're 2 orders of magnitude off, within 24 hours; 1, within 3 days (from Remnote)) - see the rough sketch below. You could preferentially prioritize displaying questions that people often get wrong, perhaps even personalize it using ML.
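A rough sketch of what such an OOM-based spaced-repetition schedule could look like; the function names are hypothetical and the exact thresholds just mirror the ones suggested above:

```python
import math
from datetime import timedelta

def ooms_off(guess: float, truth: float) -> float:
    """How many orders of magnitude a guess is away from the true value."""
    return abs(math.log10(guess) - math.log10(truth))

def next_review_delay(ooms: float) -> timedelta:
    """Schedule the next repetition: the further off, the sooner the question comes back."""
    if ooms >= 3:
        return timedelta(minutes=10)   # re-ask within the same session
    if ooms >= 2:
        return timedelta(hours=24)
    if ooms >= 1:
        return timedelta(days=3)
    return timedelta(days=14)          # roughly right: push the question far out

# Example: guessing 10M for a true value of 50k is ~2.3 OOMs off -> re-ask within 24 hours.
print(next_review_delay(ooms_off(guess=10_000_000, truth=50_000)))
```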

With that in mind, here are some feature suggestions:

  1. You're already pretty good at getting people to make rough order-of-magnitude estimates, by often using scientific notation, but you could zero in on this aspect of the game.
  • Add even higher confidence settings like 95% and 99%, and perhaps make that the default. This will get users to answer questions faster.
  • Restrict the input to orders of magnitude or make that the default. It might also be good to let users select million, 10 million, 100M from a drop-down menu, so that play gets faster and more reinforcing.
  • While I appreciate that I got more of an intuitive grasp of scientific notation playing the game (how many 0s does a trillion have again?), have the corresponding word (e.g. 'trillion') displayed when putting in 10^12.
  • When possible, try to contextualize (I do this in this post on trillion dollar figures: 'So how can you conceptualize $1 trillion? 1 trillion is 1,000 billion. 1 billion is 1,000 million. Houses often costs ~1 million. So 1 trillion ≈ 1 million houses—a whole city.')
  • I like the timer feature, but perhaps consider either reducing the time per question even further or giving more points if one answers faster.

If you gamify this properly, I think this could be the next Sporcle (but much more useful).

Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2022-10-02T17:46:24.278Z · EA · GW

I created a Zapier to post Pablo's ea.news feed of EA blogs and website to this subreddit:

https://reddit.com/r/eackernews

I wonder how much demand there'd be for a 'Hackernews' style high-frequency link only subreddit. I feel there's too much of a barrier to post links on the EA forum. Thoughts? 

Comment by Hauke Hillebrandt (HaukeHillebrandt) on technicalities's Shortform · 2022-10-02T17:38:18.250Z · EA · GW

Also  might be worth paging radiobostrom.com

Comment by Hauke Hillebrandt (HaukeHillebrandt) on technicalities's Shortform · 2022-10-02T17:36:50.706Z · EA · GW

crossposted from my blog

'Nick Bostrom's "Future of Humanity" papers'

In 2018, Nick Bostrom published an anthology of his papers in German under the title "The Future of Humanity":

  1. The Future of Humanity
  2. Existential Risk Prevention as Global Priority
  3. In Defense of Posthuman Dignity
  4. Dignity and Enhancement
  5. Why I Want to be a Posthuman When I Grow Up
  6. Are You Living In A Computer Simulation?

Some other good papers by him:

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Cause area: Developmental Cognitive Neuroepidemiology · 2022-10-01T10:11:58.140Z · EA · GW

I'm not an expert on this but here are a few links:

https://www.cambridge.org/core/journals/proceedings-of-the-nutrition-society/article/is-iodine-deficiency-still-a-problem-in-subsaharan-africa-a-review/6C87E944AF05DEE3B7821D986D2F1B77

https://www.givewell.org/international/technical/programs/salt-iodization 

https://www.openphilanthropy.org/grants/iodine-global-network-general-support-december-2020/

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Acceptance and Commitment Therapy (ACT) 101 · 2022-09-25T14:50:01.043Z · EA · GW

I did a shallow review of the evidence for ACT last year:

"Anxiety defusion and acceptance (acceptance and commitment therapy)

Mind Ease's anxiety defusion exercise is based on acceptance and commitment therapy (ACT), which is backed by the following evidence:

  • Traditional ACT with a therapist: A 2017 review of RCTs of ACT to treat anxiety and depression shows that ACT improves depression relative to no treatment up to 6-months follow-up (ds = 0.32 to 1.18). Two studies compared ACT with minimally active comparison conditions (expressive writing and minimal support group) and found ACT outperformed comparison conditions on depression at post, but were equivalent at follow-up.[74] A 2020 meta-analysis of 18 studies with 1,088 participants showed that ACT significantly reduced depression as compared with the control group (d = 0.59, 95% CI [0.38, 0.81]).[75]
  • Self-help: Traditionally face-to-face, ACT is also delivered in self-help formats. A meta-analysis shows that ACT self-help showed significant small effect sizes favoring intervention for depression (g = 0.34; 95% CI [0.07, 0.61]; Z = 2.49, p = 0.01) and anxiety (g = 0.35; 95% CI [0.09, 0.60]; Z = 2.66, p = 0.008). Higher levels of clinician guidance improved outcomes but intervention format (e.g. book/computer) was unlikely to moderate results.[76]
  • Internet-based ACT (iACT): A systematic review of internet-delivered ACT (iACT) for anxiety[77] showed that 18 out of 20 studies reported significant anxiety reduction after treatment. This was observed in studies that delivered iACT with (n=13) or without (n=5) therapist guidance. The average attrition rate during treatment was 19%. In 13 studies participants on average rated their iACT experience with above average to high treatment satisfaction.
  • App-based ACT: A recent RCT of ACT in an app form showed that help-seeking individuals vs. waitlist increased well-being with moderate effect sizes.[78]

In aggregate, anxiety defusion and acceptance (acceptance and commitment therapy) seems effective with small to medium effect sizes."

https://docs.google.com/document/u/0/d/1Y0Mc0pI-pDMQMPg8M4F0zA1KYiXuvW5q7MPXRH9sX7k/mobilebasic#h.3u6ryras7n0z

Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2022-08-10T16:45:42.502Z · EA · GW

The Global Catastrophic Risk Management Act of 2022 is a new bipartisan bill that was proposed recently and is going to be voted on in the US. There's another bill on WMDs.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example · 2022-07-29T21:14:22.482Z · EA · GW

Thanks for the link - I think the economists surveyed were not unanimous in saying that it's a slam dunk win, and as I wrote 'might' and 'big, if true' - also note that I'm citing a link from the very left-wing think tank associated with the German Green party. 

Also see that while the case for immigration boosting the economy in the long run is strong based on economic theory, there might still be upfront costs that could have bad effects, such as displacing traditional aid:

https://www.givingwhatwecan.org/blog/using-aid-to-finance-the-refugee-crisis-a-worrying-trend 

It could also be that, a la David Autor's China shock literature, while the average economic effects of migration are positive, some low-skilled domestic workers might face increased competition, which can cause populism. For instance, immigration can predict Brexit votes.

Again: big, if true, and there should be more analysis. The main lesson here is that if you're dealing with trillion-dollar numbers, it might be very important.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2022-07-25T15:36:25.817Z · EA · GW

"star-manning is to not only engage with the most charitable version of your opponent’s argument, but also with the most charitable version of your opponent, by acknowledging their good intentions and your shared desires despite your disagreements. In our UBI example, star-manning would be to amend the steel man with something like, “…and you’re in favor of this because you think it will help people lead safer, freer, and more fulfilled lives—which we both want.” If used properly, star-manning can serve as an inoculant against our venomous discourse and a method for planting disputes on common ground rather than a fault line."

https://centerforinquiry.org/blog/how-to-star-man-arguing-from-compassion/

Comment by Hauke Hillebrandt (HaukeHillebrandt) on If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example · 2022-07-24T09:31:24.085Z · EA · GW

Thanks for posting this.

“amounts matter” or “let’s actually do cost-benefit analysis” [...] To be clear, "amounts matter" is the usual EA stance already

I think EA is still overemphasizing high benefit-cost ratios; it's now better to find high benefit-minus-cost interventions. In other words, it used to be the case that we wanted to find a way to fill a $1m funding gap to save 1,000 lives, i.e. save a life for $1k. But now, even though these small funding gaps still exist, it's quite hard to find them at scale, and we might rather want to find a billion-dollar funding gap that saves lives at only $10k each, but then save 100k lives, which is better since amounts matter, as you say.
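A quick worked comparison of the two kinds of opportunity described above (the specific numbers are just the illustrative ones from that paragraph):

```python
# "Benefit/cost ratio" vs "benefit minus cost" intuition from the paragraph above.
small_gap = {"funding_gap": 1_000_000, "cost_per_life": 1_000}       # save a life for $1k
large_gap = {"funding_gap": 1_000_000_000, "cost_per_life": 10_000}  # save a life for $10k

for name, opp in [("small, very cheap gap", small_gap), ("large, moderately cheap gap", large_gap)]:
    lives_saved = opp["funding_gap"] / opp["cost_per_life"]
    print(f"{name}: {lives_saved:,.0f} lives saved")

# small, very cheap gap: 1,000 lives saved
# large, moderately cheap gap: 100,000 lives saved  -> amounts matter
```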

In contrast to a movement trying to push high B/C interventions like EA, an 'amounts matter' / B-C movement would have much higher popular appeal as it would directly and personally affect many more people.

A few things that this movement might highlight (all 'big, if true'):

There are some public health things probably roughly on a similar level (smoking, obesity, etc.). 

On some level, politics is already doing this, but I think there's still a lot of scope insensitivity and no concrete focus on these issues.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on arxiv.org - I might work there soon · 2022-07-19T08:33:11.794Z · EA · GW

These are good references - I'm especially interested in arxiv-vanity, are you talking about Ben Firshman? I'll reach out once I start working there

Yes

Comment by Hauke Hillebrandt (HaukeHillebrandt) on arxiv.org - I might work there soon · 2022-07-18T17:07:36.600Z · EA · GW

Congrats on the job! Seems really high impact. A few thoughts:

  1. Scientometrics

Scientometrics is the field of study which concerns itself with measuring and analysing scientific literature, such as the impact of research papers and academic journals. "Of course, such tools cannot substitute for substantive knowledge of human experts, but they can be used as powerful decision support systems to structure humans' effort and augment their capabilities/efficiency to handle the enormous volume of data on research input and output" (Finding rising stars in bibliometric networks | SpringerLink). Scientometric indicators and machine learning-based models can be used to predict the 'rising stars' in academia; identifying these junior researchers and awarding them prizes would greatly improve research output (https://ieeexplore.ieee.org/abstract/document/8843686/).

2. https://allenai.org/ has both semantic scholar and an NLP AI team - I think there are overlaps wrt using the arxiv corpus for language models.

3. The creator of https://www.arxiv-vanity.com/ is also really interested in EA - maybe get in touch.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Cause Exploration Prizes: Reducing Suffering and Long Term Risk in Common Law Nations via Strategic Case Law Funding · 2022-07-12T16:19:23.997Z · EA · GW

Very cool - law seems underexplored.

Gates has a legal fund to help countries fight big tobacco - it goes towards legal advice for nations whose health measures are challenged by the tobacco industry, as in Uruguay and Australia. The mere existence of such funds might be a credible threat or deterrent.

There might be other examples of high-impact law, e.g. scientists are sometimes sued by companies for publishing the truth: "In the 1980s, various interests tried to suppress the work of Dr. Herbert Needleman on the effects of lead exposure." Cf. the EA project arising as a result.

Or: one of the companies selling stimulants sued a Harvard Medical School professor for "$200m in damages for libel, alleging that statements in the peer review article, and subsequent interviews with the media, were false." More here; see 'Why a Lot of Important Research Is Not Being Done'.

As a layperson, common law seems to me a generally much more elegant legal system and well-suited for EA and unknown emerging risks.

My naive, simplistic view of civil law is: rules are written down quite explicitly and it's a bit more deontological (and that's why you have it in, say, Germany, where Kant came from), e.g. the law says specifically 'You aren't allowed to use an algorithm that discriminates based on age'. But if you use a black-box algorithm that discriminates based on something else, or outsource your hiring to a foreign firm that does the discriminating for you, then you're off the hook. When that behavior gets out of hand, the law has to be painfully rewritten; they try to generalize but it's hard, resulting in a crazily complicated legal corpus. And even if you're found guilty, you get a fine that must have been stipulated in advance in the law, which is often not proportional to the crime, so there's little deterrent effect. For instance, German courts don't use punitive damages and people seem confused by them and their usefulness (see the McDonald's lawsuit).

Common law, by contrast, seems more consequentialist / utilitarian (and comes from the UK, where Bentham comes from): if you show that there's precedent of someone having done ~similar harm before, then there'll often be punitive damages in proportion to the crime, for deterrence.

Similar to large settlements in Big Pharma, Big Tech has been fined >$30bn in recent years. Consider that the EU has fined Google $10bn. The EU seems to use ~case law and there's the Brussels Effect, which might be very high leverage (a new UK regulator will reportedly also have "the power" to fine tech companies up to 10% of their global turnover if they fail to comply). This is interesting for slow take-off / ~prosaic AI safety / risks-from-malevolent-actors reasons.

It's more elegant as it makes people and corporations generally more on guard about misbehaving, for fear of being sued. Punitive damages are also theoretically equivalent to a specific kind of Pigovian tax on externalities (which seems much better than traditional corporate tax; I'm against tort-reform arguments and I'd hate to see caps on damages: "Many state statutes are the result of insurance industry lobbying to impose 'caps' on punitive damages; however, several state courts have struck down these statutory caps as unconstitutional.").

I guess currently these fines go into the general budget. But maybe one could fine corporations in stock and use the dividends to fund the regulators, so they're incentivized to reduce negative externalities (through fining companies), but would also be held back from completely wrecking industries or companies, because they're financed by the overall health of the industry after the fine.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Should I invest my runway? · 2022-07-09T12:35:20.303Z · EA · GW

The following is not financial advice. 

Sam Bowman, who's EA-adjacent, has written about this.

Wise actually has a bank account (UK customers only) that lets you invest in an index with 0.5% fees.

Note that investing in the stock market is generally seen as riskier than holding a major country's currency.

Also inflation is unusually high and markets expect it to come down and currencies are unusually volatile currently:

"Rock-bottom inflation and interest rates over the past decade helped smother swings in exchange rates. Deutsche Bank’s cvix index, a gauge of forex volatility, has been above its current level more than 90% of the time over the past 20 years. By contrast, the vix, which measures expected volatility for America’s s&p 500 index of stocks and is often used as a measure of overall market sentiment, has so far spent October at roughly its long-term average." [src]

Comment by Hauke Hillebrandt (HaukeHillebrandt) on When Giving People Money Doesn't Help · 2022-07-08T14:56:36.165Z · EA · GW

the problems of the world actually will just not support giving at that scale in a super cost-effective way

Policy makers use both fiscal and monetary policy to improve domestic welfare:

  1. Fiscal policy: An example is increasing tax on rich people who earn >$100k/y, and then redistributing it to poor people who earn $10k/y as welfare payments (or earned income tax breaks). At a first approximation, the cost of this policy might be $100B, but the utility costs are smaller because of diminishing returns to utility, and the benefits are larger. This is crazily scalable - you can easily send every person a stimulus check and spend a lot of money that way. However, the fiscal multiplier is generally small, because people can't use capital as effectively as firms; transfer payment multipliers are usually smaller than government spending multipliers. Money can also be spent on public goods like health or education with higher sROI, but less scalability and steeper diminishing returns.
  2. Monetary policy: An example is printing $1T, then loaning it out to private banks at a below-market interest rate (say 0%, though it could be negative). If the government instead invested in an index, it could get a risk-adjusted return of 5% per year, which it is losing out on; thus, the cost of the policy is $50B per year (see the sketch below). The direct cost is distributed across the whole population via inflation, which hits the poorest disproportionately, and thus has a high cost in terms of utility. However, the fiscal multiplier of such a policy might be very high: banks are incentivized to recoup their investment and will loan to firms that succeed. The fiscal multiplier is generally higher, with higher social surplus, which creates jobs and increases growth. Also, lower interest rates generally lower unemployment and increase wages. However, people who are not very rich only benefit slowly through trickle-down effects.
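A minimal sketch of the cost arithmetic in the monetary-policy example above; the numbers are the ones used in the example, and the 5% figure is the assumed forgone risk-adjusted return:

```python
# Cost of lending newly printed money at 0% instead of a ~5% risk-adjusted market return.
loan_size = 1e12            # $1T printed and loaned to private banks
market_return = 0.05        # risk-adjusted return the state forgoes (e.g. an index fund)
concessional_rate = 0.0     # below-market rate charged to the banks

annual_cost = loan_size * (market_return - concessional_rate)
print(f"Annual cost of the concession: ${annual_cost / 1e9:.0f}B")   # $50B
```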

Why would the state make concessional loans and lose money? Why not loan at market rates? Because then it would have no additionality; the state would just be another lender. These concessional loans pay for the service that banks provide to the state. What is the service? Banks find companies that will perform well, so that the state is not in the business of 'picking winners'.

Analogously, policy makers use both aid and development finance to improve foreign welfare:

  1. Aid: An example is UNICEF's humanitarian aid (~$26B/y), like disaster relief in cash or in kind (e.g. food aid, short-term reconstruction relief). Donors can also spend on public goods like social infrastructure and services ($65B/y) to develop human resource potential and improve living conditions. Health aid like malaria nets is $12.9bn; $6.2bn was spent on perhaps higher-leverage technical assistance. Half of all $180B/y in aid is spent like this.[1] But the fiscal multiplier is generally considered small (see General equilibrium effects of cash transfers).
  2. Development finance: An example is giving out loans through development banks. For instance, the World Bank's International Finance Corporation[2] lends to foreign private firms and public-private partnerships. Development finance institutions give out ~$50B/y.[3],[4]

Like central banks, development banks also give loans to poor countries' governments, like the World Bank’s International Development Association, which offers concessional loans to poor countries.[5] These concessional loans have the benefit of feeling altruistic, creating strategic alliances, or spurring (global) growth.

The latter (growth-friendly spending) is mostly through IMF and World Bank loans that countries need to pay back (analogous to impact investing), whereas aid is analogous to donating to nonprofits. The great thing about loans is that you need to pay them back with interest, so you will only take them if you're fairly certain you'll create growth and a multiplier; loans are the most direct way to subsidize business activity. They're also very scalable. Sure, you can subsidize the latest thing like chickens or therapy or whatever, but even Blattman admits money for entrepreneurs is best, so why not add skin in the game, make it a loan, and go directly to the source of the problem (lack of growth)?

There is an allure to policy coherence and optimizing for several objectives at once, finding an intervention that has the best of both worlds, but it would be a suspicious convergence if the best poverty reduction methods happened to also be the most effective at creating growth and entrepreneurship: giving directly to the poorest at scale AND having a high fiscal multiplier, i.e. making people productive to create growth. The Tinbergen Rule is a basic principle of effective policy, which states that to achieve n independent policy targets you need at least n independent policy instruments. And so people want microfinance to work at scale for poor people in poor countries, and think UBI will create many entrepreneurs and you'll get massive productivity at scale and a massive fiscal multiplier, but some European welfare states have de facto UBI and cuddly capitalism hasn't created crazy growth. Not saying it's bad for welfare reasons.

If you want anything outside this continuum, then I’d be more excited about subsidizing highly scalable, zero marginal cost (global) public goods. These could even be for entrepreneurship, like Moskovitz & Collison subsidizing Asana and Stripe Atlas for poor countries—that'd be  really effective.


 

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Doom Circles · 2022-07-08T13:47:08.022Z · EA · GW

Seems like the opposite of crewing where "each person would bring some problem they were working on to the group, and receive 90 minutes of undivided attention from their peers" - maybe it could be combined.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on An update on GiveWell’s funding projections · 2022-07-06T12:02:31.952Z · EA · GW
  1. You allocate 1/4 to new interventions (presumably higher-risk/higher-reward) - do you agree with the OpenPhil post you link to, and are you going to fund those preferentially given the funding shortfall?
  2. Do you agree with OpenPhil that GiveWell's interventions have ~the same CBA during economic crises? For instance, AMF is now expanding to Nigeria, where GDP per capita has gone down to 2008 levels, and malaria deaths are generally up ~10% due to Covid. Does this increase the CBA of your core interventions? Relatedly: $150m seems quite a large reduction - OpenPhil considered using mission hedging at last year's EAG - have you considered this for your assets?
Comment by Hauke Hillebrandt (HaukeHillebrandt) on Emphasizing emotional altruism in effective altruism · 2022-07-06T10:07:13.158Z · EA · GW

Cf. this excellent philosophy paper, "On the aptness of anger", which I always recommend.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on How to start a blog in 5 seconds for $0 · 2022-07-04T12:42:53.734Z · EA · GW

I blog using Google Docs + Google Sites - I just write a Google Doc, then push it into the public folder:

hfh.pw/blog 

Removing the trivial inconvenience of reformatting and hitting a publish button has definitely made me publish more posts.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Ben Garfinkel's Shortform · 2022-06-23T09:41:52.641Z · EA · GW

Related: Imagen replicating DALL-E so well seems like good evidence that there's healthy competition between big tech companies, which drives down profits.

One thing that might push against this is economies of scope, and whether data really does become the new oil and more relevant over time.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Preventing a US-China war as a policy priority · 2022-06-22T19:29:23.185Z · EA · GW

Will China launch a full scale invasion of Taiwan before 2025? Currently at 9%

Will China launch a full-scale invasion of Taiwan before 2030? Currently at 25%

Will China launch a full-scale invasion of Taiwan before 2035? Currently at 39%

If China launches a full-scale invasion of Taiwan before 2035, will the US respond militarily? Currently at 66%

If China launches an invasion of Taiwan before 2035, and the US intervenes, will China attack the United States? Currently at 60%

If China launches a full-scale invasion of Taiwan before 2035, will they successfully control Taiwan within three years? Currently at 56% (I'm at 75% personally) 


Multiplying this out, the joint probabilities for a hot US-Sino war in 3, 7, and 12 years are thus 4%, 10%, and 15% respectively.
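The joint probabilities come from multiplying the forecasts quoted above; a quick sketch, assuming the chain of conditionals as stated and nothing else:

```python
# Joint probability of a hot US-China war, multiplying the forecasts quoted above.
p_us_responds = 0.66        # P(US responds militarily | invasion before 2035)
p_china_attacks_us = 0.60   # P(China attacks the US | invasion and US intervention)

for horizon, p_invasion in [("by 2025", 0.09), ("by 2030", 0.25), ("by 2035", 0.39)]:
    p_war = p_invasion * p_us_responds * p_china_attacks_us
    print(f"{horizon}: {p_war:.0%}")   # ~4%, ~10%, ~15%
```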

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Notes on "A World Without Email", plus my practical implementation · 2022-06-22T09:17:01.914Z · EA · GW

You could submit it as question to his podcast!

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Notes on "A World Without Email", plus my practical implementation · 2022-06-21T19:40:49.789Z · EA · GW

Thanks! Similarly, I'm enjoying https://simpl.fyi/, which simplifies Gmail's design.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Notes on "A World Without Email", plus my practical implementation · 2022-06-21T19:39:21.518Z · EA · GW

Newport's upcoming book will be on his 'slow productivity' philosophy, which is:

  • Do fewer things
  • Work at a natural pace
  • Obsess over quality
Comment by Hauke Hillebrandt (HaukeHillebrandt) on HaukeHillebrandt's Shortform · 2022-06-17T09:35:08.218Z · EA · GW

The IGM Booth survey of economists seems to suggest that there might be a recession next year in the US.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Stephen Clare's Shortform · 2022-06-14T17:55:16.583Z · EA · GW

Agreed - people should look at https://forum.effectivealtruism.org/allPosts, sort by newest, and then vote more, as a public good to improve the signal-to-noise ratio.

We might also want to praise those users who have a high ratio of highly upvoted comments to posts - here's a ranking:

1 khorton
2 larks
3 linch
4 max_daniel
5 michaela
6 michaelstjules
7 pablo_stafforini
8 habryka
9 peter_wildeford
10 maxra
11 jonas-vollmer
12 stefan_schubert
13 john_maxwell
14 aaron-gertler
15 carlshulman
16 john-g-halstead
17 benjamin_todd
18 greg_colbourn
19 michaelplant
20 willbradshaw
21 wei_dai
22 rohinmshah
23 buck
24 owen_cotton-barratt
25 jackm

https://docs.google.com/spreadsheets/d/1vew8Wa5MpTYdUYfyGVacNWgNx2Eyp0yhzITMFWgVkGU/edit#gid=0

Comment by Hauke Hillebrandt (HaukeHillebrandt) on New cause area: bivalve aquaculture · 2022-06-12T17:56:19.853Z · EA · GW

Great idea - in the UK, frozen mussels are just £3/kg vs. £1.75/kg for the cheapest frozen chicken.

I wonder if they could be genetically engineered or bred to taste different or be bigger, given that some techno-economic assessments suggest that creating cultured meat is going to be expensive.

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Cullen_OKeefe's Shortform · 2022-06-07T07:43:11.912Z · EA · GW

Oh I see - thanks for clarifying! This is a very interesting idea, but somehow it still seems counterintuitive... by the same logic, wouldn't you also want to overexpose yourself to e.g. publicly traded real estate, because most real estate isn't public?

If true, and if most passive (institutional) investors aren't sufficiently exposed to PE (or real estate), wouldn't that suggest that the market undervalues this asset class and you can beat the market by investing in it? Honest question, I haven't thought this through very well, but something still feels counterintuitive that you could create a better passive global market portfolio... 

if you think public PE should perform similarly to PE broadly.

I think this might be another big if... though also one should be surprised if there'd be a big discontinuous jump in returns when going from non-traded to traded.


 

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Michael Nielsen's "Notes on effective altruism" · 2022-06-04T10:55:51.100Z · EA · GW

Trending on hacker news now https://news.ycombinator.com/item?id=31609325

Comment by Hauke Hillebrandt (HaukeHillebrandt) on EARadio Returns - Suggest Episodes and Shoutouts · 2022-06-03T09:07:56.402Z · EA · GW

Big fan of EA radio - thanks for working on this!

I would like an audiobook version of Inadequate Equilibria (licensed under CC BY-NC-SA 4.0).

Comment by Hauke Hillebrandt (HaukeHillebrandt) on Cullen_OKeefe's Shortform · 2022-06-03T08:59:59.657Z · EA · GW

Couldn't you argue that you're actually moving towards the "true" global market portfolio, which would include many non-publicly-traded assets? (Similar for real estate: seems plausible that people should overweight REITs in their portfolios.)

 

Because the companies included in the private equity ETF are also already represented in the global market portfolio, their share of total market cap already includes what the market believes to be the discounted future profits of all the non-publicly-traded assets that the private equity firms (e.g. Blackstone) invested in.

Not sure this is true if the risk is also higher.

Yes, beta (and risk) is higher and so the expected returns are higher, but that's the trade-off, so there's no free lunch. It's similar to foregoing bonds and only investing in stocks in one's portfolio, but even riskier.