Posts

The Precipice is out today in US & Canada! Audiobook now available 2020-03-28T04:00:48.857Z · score: 39 (11 votes)
Toby Ord’s ‘The Precipice’ is published! 2020-03-04T21:09:11.693Z · score: 136 (51 votes)
What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? 2019-12-30T17:28:32.962Z · score: 29 (14 votes)

Comments

Comment by matthew-vandermerwe on Some thoughts on EA outreach to high schoolers · 2020-09-16T12:07:37.812Z · score: 42 (18 votes) · EA · GW

If there were more orgs doing this, there’d be the risk of abuse working with minors if in-person.

I think this deserves more than a brief mention. One of the two high school programs mentioned (ESPR) failed to safeguard students from someone later credibly accused of serious abuse, as detailed in CFAR's write-up:

Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise ... We do not believe any students were harmed. However, Brent did invite a student (a minor) to leave camp early to join him at Burning Man. Beforehand, Brent had persuaded a CFAR staff member to ask the camp director for permission for Brent to invite the student. Multiple other staff members stepped in to prevent this, by which time the student had decided against attending anyway.

This is a terrible track record for this sort of outreach effort. I think it provides a strong reason against pursuing it further without a high degree of assurance that the appropriate lessons have been learned — something which doesn't seem to have been addressed in the post or comments.

Comment by matthew-vandermerwe on Max_Daniel's Shortform · 2020-08-07T08:48:56.550Z · score: 3 (2 votes) · EA · GW

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing that the inevitability (as he saw it) of extinction undermines the possibility of enduring achievement, and that we must therefore either ground life’s meaning in something else, or accept nihilism.

At a stretch, maybe you could run your argument together with Russell's — if we ground life’s meaning in achievement, then avoiding nihilism requires that humanity neither go extinct nor achieve total existential security.

Comment by matthew-vandermerwe on The Importance of Unknown Existential Risks · 2020-07-28T10:51:21.275Z · score: 3 (2 votes) · EA · GW

Thanks — I agree with this, and should have made clearer that I didn't see my comment as undermining the thrust of Michael's argument, which I find quite convincing.

Comment by matthew-vandermerwe on The Importance of Unknown Existential Risks · 2020-07-24T11:16:55.280Z · score: 6 (4 votes) · EA · GW

Great post!

But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.

I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before there were well-developed cases for specific risks. Indeed, the Doomsday literature seems to have inspired Leslie, and then Bostrom, to start seriously considering specific risks.

Leslie explicitly considers unknown risks (p.146, End of the World):

Finally, we may well run a severe risk from something-we-know-not-what: something of which we can say only that it would come as a nasty surprise like the Antarctic ozone hole and that, again like the ozone hole, it would be a consequence of technological advances.

As does Bostrom (2002):

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

Comment by matthew-vandermerwe on How Much Does New Research Inform Us About Existential Climate Risk? · 2020-07-23T10:17:46.866Z · score: 7 (4 votes) · EA · GW

Very useful comment — thanks.

Overall, I don't view this as especially good news ...

How do these tail values compare with your previous best guess?

Comment by matthew-vandermerwe on Objections to Value-Alignment between Effective Altruists · 2020-07-17T12:30:25.410Z · score: 47 (23 votes) · EA · GW

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They strike me as normal, nice things to say in the context of an AMA, and indicative of admiration and warmth, but not reverence.

Comment by matthew-vandermerwe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T05:39:47.611Z · score: 16 (6 votes) · EA · GW

Hayek's Road to Serfdom, and twentieth-century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and was initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version and sold 1 million copies very cheaply (5¢ per copy). Antony Fisher, who founded the Institute of Economic Affairs (IEA), came across Hayek's ideas through this edition.

Source: https://press.uchicago.edu/Misc/Chicago/320553.html

Comment by matthew-vandermerwe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T07:42:13.927Z · score: 34 (17 votes) · EA · GW

Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done it already, buying back the rights for TLYCS and distributing a free digital version for its 10th anniversary.

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple of minutes to send someone a book via Amazon — and seems scalable. This should be easy enough for a donor or EA org to try.

I also imagine that for most publishers, profits are concentrated after release

I looked into this recently, using Goodreads data as a proxy for sales. My takeaway was that sales of these books have been surprisingly linear over time, rather than being concentrated early on: Superintelligence; Doing Good Better; TLYCS
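For anyone who wants to redo this check, here is a minimal sketch of the kind of fit I have in mind. It assumes a hand-collected CSV of cumulative Goodreads ratings by month; the file name and column names are placeholders, not a real dataset:

```python
# Sketch: how linear are cumulative Goodreads ratings over time?
import numpy as np
import pandas as pd

# Hypothetical file with columns "month" and "cumulative_ratings".
df = pd.read_csv("goodreads_ratings_superintelligence.csv")
t = np.arange(len(df), dtype=float)          # months since publication
y = df["cumulative_ratings"].to_numpy(dtype=float)

slope, intercept = np.polyfit(t, y, 1)       # least-squares straight line
residuals = y - (slope * t + intercept)
r_squared = 1 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(f"~{slope:.0f} new ratings/month; linear fit R^2 = {r_squared:.3f}")
# An R^2 near 1 supports the 'surprisingly linear' takeaway; sales concentrated
# after release would instead show up as a concave, flattening curve.
```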

Comment by matthew-vandermerwe on X-risks to all life v. to humans · 2020-06-04T07:28:12.196Z · score: 9 (5 votes) · EA · GW

Welcome to the forum!

Further development of a mathematical model to realise how important timelines for re-evolution are.

Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.

So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:

  • Relative AI risk increases, since risk from most other sources is discounted a bit.
  • Absolute AI risk increases, since it pushes towards shorter AGI timelines.

*Shulman & Bostrom 2012 discuss this type of argument, and some complexities in adjusting for observation selection effects.
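
To make the twofold boost concrete, here is a toy calculation with purely made-up numbers (these are illustrative placeholders, not estimates of any actual risks):

```python
# Illustrative only: these are not estimates of real probabilities.
p_ai, p_other = 0.10, 0.10   # baseline existential risk from AI vs. everything else
share_before = p_ai / (p_ai + p_other)

# Suppose we get evidence that intelligent life would re-evolve quickly:
p_other *= 0.8   # (1) non-AI risks discounted a bit (less likely to be truly existential)
p_ai *= 1.2      # (2) shorter AGI timelines push absolute AI risk up somewhat

share_after = p_ai / (p_ai + p_other)
print(f"AI share of total existential risk: {share_before:.0%} -> {share_after:.0%}")
```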

Comment by matthew-vandermerwe on How Much Leverage Should Altruists Use? · 2020-05-18T11:17:10.550Z · score: 6 (2 votes) · EA · GW

[disclosure: not an economist or investment professional]

emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds

This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well established (cf. the 'Taper Tantrum' of 2013).

see e.g. Effects of US Quantitative Easing on Emerging Market Economies

"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in long-term bond yields, a stock market boom, and an increase in capital inflows to these countries."

Comment by matthew-vandermerwe on EA Updates for April 2020 · 2020-05-02T06:33:19.348Z · score: 10 (4 votes) · EA · GW

My top picks for April media relating to The Precipice:

Comment by matthew-vandermerwe on How hot will it get? · 2020-04-25T07:41:08.849Z · score: 4 (1 votes) · EA · GW

I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about either (i) extreme growth scenarios; or (ii) the fossil fuel endgame; and definitely not (iii) AI takeoff scenarios.
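
For readers less familiar with it, the Kaya identity decomposes annual CO2 emissions F into four factors:

$$F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E}$$

where P is population, G is world GDP, and E is primary energy use; i.e. population × GDP per capita × energy intensity of GDP × carbon intensity of energy. Each factor is usually projected to change fairly smoothly, which is presumably part of what breaks down in the scenarios above.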

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. The IPCC cites Rogner 2014 for the figure. In personal communication, one scientist described Rogner's previous (1997) estimate as:

a mishmash of unreliable information, including self-reported questionnaires by individual governments

It would be great to better understand these estimates — I'm surprised there isn't more work on this. In particular, you'd think there would be geologically based models of how much carbon there is that aren't so strongly grounded in known reserves plus current/near-term technological capabilities.

Comment by matthew-vandermerwe on How hot will it get? · 2020-04-24T14:39:14.564Z · score: 6 (2 votes) · EA · GW

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).
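
For reference, the conversion behind that parenthetical is just the molar-mass ratio of CO2 to carbon (44/12):

$$13.6\ \text{PtC} = 1.36 \times 10^{16}\ \text{tC}, \qquad 1.36 \times 10^{16}\ \text{tC} \times \tfrac{44}{12} \approx 5 \times 10^{16}\ \text{t CO}_2$$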

Awesome post!

Comment by matthew-vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-30T11:27:18.209Z · score: 2 (2 votes) · EA · GW

Yes I gave authorization!

Comment by matthew-vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-12T16:35:40.006Z · score: 15 (8 votes) · EA · GW

The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.

Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).

Comment by matthew-vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-08T15:28:38.046Z · score: 2 (4 votes) · EA · GW

I will investigate this and get back to you!

Comment by matthew-vandermerwe on COVID-19 brief for friends and family · 2020-02-29T14:50:49.738Z · score: 9 (5 votes) · EA · GW

Thanks for writing this!

In the early stages, it will be doubling every week approximately

I’d be interested in pointers on how to interpret all the evidence on this:

  • until Jan 4: (Li et al) find 7.4 days
  • Jan 16–Jan 30: (Cheng & Shan) find ~1.8 days in China, before quarantine measures start kicking in.
  • Jan 20–Feb 6: (Muniz-Rodriguez et al) find 2.5 for Hubei [95%: 2.4–2.7], and other provinces ranging from 1.5 to 3.0 (with much wider error bars).
  • Eyeballing the most recent charts:
  • I’ve also seen it suggested that the outside-China growth might be inflated due to ‘catch up’ from slow roll-out of testing.

Altogether, what is our best guess, and what evidence should we be looking out for?
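
For anyone who wants to sanity-check these estimates against newer numbers, the doubling time implied by exponential growth between two data points is a one-liner (the case counts below are placeholders, not real data):

```python
import math

def doubling_time(cases_start: float, cases_end: float, days: float) -> float:
    """Doubling time (in days) implied by exponential growth between two counts."""
    return days * math.log(2) / math.log(cases_end / cases_start)

# Placeholder numbers, purely for illustration:
print(doubling_time(cases_start=1_000, cases_end=8_000, days=9))  # -> 3.0 days
```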

Comment by matthew-vandermerwe on Should Longtermists Mostly Think About Animals? · 2020-02-04T10:19:51.480Z · score: 6 (7 votes) · EA · GW

In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.

Eliminating human life would lock in a very narrow set of futures for animals - something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?

As far as we know, humans are the only beings capable of moral reasoning, systematically pushing the world toward more valuable states, embarking on multi-generational plans, etc. This gives very strong reasons for thinking the extinction of humanity would be of profound significance to the value of the future for non-humans.

Comment by matthew-vandermerwe on UK donor-advised funds · 2020-01-22T16:21:15.798Z · score: 7 (2 votes) · EA · GW

Yeah - plus the opportunity cost of having it in cash. Looks like a non-starter.

Comment by matthew-vandermerwe on UK donor-advised funds · 2020-01-22T16:18:55.192Z · score: 3 (2 votes) · EA · GW

Yeah, and how much you value the flexibility depends on what you expect to donate to.

EA Funds already allows you to donate to small/speculative projects, non-UK charities, etc., via a UK registered charity, so 'only ever donating to UK charities' is less restrictive than it sounds.

Comment by matthew-vandermerwe on UK donor-advised funds · 2020-01-22T14:41:22.141Z · score: 7 (2 votes) · EA · GW

Yes that's what I meant - will edit for clarity

Comment by matthew-vandermerwe on UK donor-advised funds · 2020-01-22T14:22:32.513Z · score: 10 (4 votes) · EA · GW

I spent an hour or so looking into this recently, and couldn't find any DAFs that were suitable for small donors. It's possible I missed one, though.

CAF offers a 'giving account', which is effectively a low-interest savings account. You can get immediate tax relief for deposits, but forgo returns, and can only donate to UK registered charities, so it seems like a weak option: https://www.cafonline.org/docs/default-source/personal-giving/2573h_charityacc_web-app_pdf_220519.pdf

FWIW my tentative conclusion was that the best option for saving small-ish sums for giving later is a straightforward ISA. You don't get any immediate tax relief from paying into the ISA (you'll get this when you donate the money to charity), but you do get (tax-free) growth and retain flexibility in when and how the money is donated. Returns are important if you think you might wait a long time before giving; flexibility seems important if one's motivation for giving later is the possibility of some high-leverage opportunities in the future. The main downside is that you can't easily bind yourself into future donations this way, but I thought this was outweighed by the other factors in my own case.
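
To make the growth-versus-immediate-relief trade-off concrete, here is a toy comparison under simplified assumptions (basic-rate Gift Aid of 25% only, a flat annual return, and no fees or interest on the CAF account; all numbers are illustrative):

```python
def caf_account(deposit: float, years: int) -> float:
    """CAF giving account: Gift Aid-style uplift up front, but the balance doesn't grow."""
    return deposit * 1.25  # 'years' is unused: no growth while waiting

def isa_then_donate(deposit: float, years: int, annual_return: float = 0.05) -> float:
    """ISA: tax-free growth while waiting, Gift Aid claimed when the money is finally donated."""
    return deposit * (1 + annual_return) ** years * 1.25

for years in (1, 5, 10, 20):
    print(years, round(caf_account(1000, years)), round(isa_then_donate(1000, years)))
# The longer the wait, the more the forgone returns dominate, which is why
# returns matter so much if the money might be held for a long time.
```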

Comment by matthew-vandermerwe on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T12:27:53.484Z · score: 10 (5 votes) · EA · GW

Sorry I should have disclaimed that I don't think this is a sensible strategy, and that people should approach party membership in good faith (for roughly the reasons Greg outlines above). Thanks for prompting me to clarify this.

My comment was just to point out that timing is an important factor in leverage-per-member.

Comment by matthew-vandermerwe on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T13:44:24.161Z · score: 4 (3 votes) · EA · GW

The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (eg premptive nuclear strikes).

I find this surprising. Can you point to examples?

Comment by matthew-vandermerwe on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T11:20:59.558Z · score: 7 (7 votes) · EA · GW

This seems unlikely to be a useful tie-break in most cases, provided one can switch membership. UK party leadership elections are rarely contemporaneous [1] (unlike in the US), so the likelihood of a given party member being able to realise their leverage will generally differ by more than a factor of 4.5 at any given time.

[1] Conservatives: 1975, 1990, 1997, 2001, 2005, 2019

Labour: 1980, 1983, 1992, 1994, 2010, 2015, 2016, 2020

Comment by matthew-vandermerwe on What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? · 2019-12-31T19:54:38.608Z · score: 2 (2 votes) · EA · GW

Thanks! Exactly the information I wanted.

Comment by matthew-vandermerwe on 8 things I believe about climate change · 2019-12-30T14:54:36.623Z · score: 16 (5 votes) · EA · GW

[I broadly agree with the above comment and OP]

Something I find missing from the discussion of CC as an indirect existential risk is what this means for prioritisation. It's often used implicitly to support CC mitigation as a high-priority intervention. But in the case of geoengineering, funding for governance/safety is probably on the order of millions (at most), making it many orders of magnitude more neglected than CC mitigation; the same goes for targeted nuclear risk mitigation, reducing the risk of great power war, etc.

This suggests that donors who believe there is substantial indirect existential risk from CC are (all else equal) much better off funding the terminal risks, insofar as there are promising interventions that are substantially more underfunded.

Comment by matthew-vandermerwe on Notes on 'Atomic Obsession' (2009) · 2019-11-02T14:10:53.435Z · score: 4 (3 votes) · EA · GW

+1 to all of this, and thanks for the other excellent comments.

There were, however, several accidents where the conventional explosives (that would trigger a nuclear detonation in intended use cases) in a nuclear weapon detonated (but where safety features prevented a nuclear detonation)

It's probably worse than that - there is at least one incident where critical safety features failed, and it was luck that prevented a nuclear explosion.

From a declassified report on a 1961 incident, in which a bomber carrying two 4 MT warheads broke up over North Carolina [1]:

Weapon 1, which landed essentially intact, was in the "safe" position when it dropped, preventing detonation. The T-249 Arm/Safe switch worked exactly as it was supposed to, preventing a nuclear explosion.
...
[Weapon 2] landed in a free-fall. Without the parachute operating, the timer did not initiate the bomb's high voltage battery ("trajectory arming"), a step in the arming sequence. While the Arm/Safe switch was in the "safe" position, it had become virtually armed because the impact of the crash had rotated the indicator drum to the "armed" position. But the shock also damaged the switch contacts, which had to be intact for the weapon to detonate. While Weapon 2 was not close to detonation, the fact that the physical impact of a crash could activate the same arming mechanism that had kept Weapon 1 safe showed the danger of such accidents.

In other words - the critical safety mechanism that prevented one bomb from detonating failed on the other bomb (and detonation of this bomb was avoided due to contingent features of the crash).

[1] https://nsarchive2.gwu.edu/nukevault/ebb475/docs/doc%205%20AEC%20report%20Goldsboro%20accident.pdf

[2] More info on the incident: https://nsarchive2.gwu.edu/nukevault/ebb475/


Comment by matthew-vandermerwe on List of EA-related email newsletters · 2019-10-09T10:26:14.956Z · score: 5 (4 votes) · EA · GW

I would add Future Perfect and Policy.AI (CSET's new AI policy newsletter).

Comment by matthew-vandermerwe on Are we living at the most influential time in history? · 2019-09-04T13:09:09.604Z · score: 11 (4 votes) · EA · GW

there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

Half-baked thought: you might think that the very very long futures will mostly have been locked in very close to their start — i.e. that timescales for locking in the best futures are much much shorter than the maximum lifespan for civilisation. This would push you towards a prior over an even smaller chunk of the expected future.

Something like this view seems implicit in some ways of talking about the future, and feels plausible to me, though I’m not sure what the best arguments are.
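
One way to spell the thought out: if long, valuable futures are almost always locked in within the first T centuries of a civilisation's history, then the most influential century should also fall within that window, and a roughly uniform prior over it gives

$$\Pr(\text{this century is the most influential}) \approx \frac{1}{T},$$

which could be much higher than 1 in 100,000 if lock-in timescales are short.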

Comment by matthew-vandermerwe on Which nuclear wars should worry us most? · 2019-06-18T14:57:01.697Z · score: 4 (3 votes) · EA · GW

I'm excited to read this series!

It would take a lot of nuclear weapons to produce nuclear winter climate effects, so if we’re particularly worried about nuclear winter, we should focus on nuclear exchange scenarios that would involve large nuclear arsenals.

I don't think this is quite right. Robock 2007 finds a severe nuclear winter effect from an exchange with just 100 × 15 kt bombs. AFAIK, the only country with an arsenal below that threshold today is North Korea, which would suggest that — on Robock's modelling at least — any bilateral exchange involving nuclear powers other than NK is large enough to pose a significant risk of nuclear winter.

Comment by matthew-vandermerwe on Alignment Newsletter One Year Retrospective · 2019-04-10T08:33:12.559Z · score: 10 (5 votes) · EA · GW

General comment: Huge fan of the newsletter, and think it's awesome you're doing this sort of review. I should also caveat that I'm not an AIS researcher, so not exactly the target audience.

My first guess is that there's significant value in someone maintaining an open, exhaustive database of AIS research. My main uncertainty is whether you are best positioned to do this as things ramp up. It is plausible to me that an org with a safety team (e.g. DeepMind/OpenAI) is already doing this in-house, or planning to do so. It's less clear that they would be willing to maintain a public resource. I'd want to verify this, and make sure that you're coordinating with them to avoid any unnecessary duplication. More broadly, these labs might have some good systems in place for maintaining databases of new research in areas with a much higher volume than AIS, so could potentially share some best practices.

Comment by matthew-vandermerwe on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T09:31:47.747Z · score: 25 (12 votes) · EA · GW

Thanks for clarifying, that seems reasonable.

FWIW I share the view that sending all 4 volumes might not be optimal. I think I'd find it a nuisance to receive such a large/heavy item (~3 litres/~2kg by my estimate) unsolicited.

Comment by matthew-vandermerwe on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T13:31:35.982Z · score: 16 (14 votes) · EA · GW

$43/unit is still quite high - could you elaborate a bit more?