Posts

Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters 2023-03-21T14:50:03.593Z
Future Matters #7: AI timelines, AI skepticism, and lock-in 2023-02-03T11:47:12.037Z
Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk 2022-12-30T13:10:54.583Z
Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future 2022-09-14T13:02:10.621Z
Future Matters #4: AI timelines, AGI risk, and existential risk from climate change 2022-08-08T11:00:51.546Z
Future Matters #3: digital sentience, AGI ruin, and forecasting track records 2022-07-04T17:44:29.866Z
Future Matters #2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research 2022-05-28T06:25:45.625Z
Future Matters #1: AI takeoff, longtermism vs. existential risk, and probability discounting 2022-04-23T23:32:24.945Z
Future Matters #0: Space governance, future-proof ethics, and the launch of the Future Fund 2022-03-22T21:15:24.331Z
The Precipice is out today in US & Canada! Audiobook now available 2020-03-28T04:00:48.857Z
Toby Ord’s ‘The Precipice’ is published! 2020-03-04T21:09:11.693Z
What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? 2019-12-30T17:28:32.962Z

Comments

Comment by matthew.vandermerwe on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-18T22:17:01.915Z · EA · GW

At EAG London 2022, they distributed hundreds of flyers and stickers depicting Sam on a bean bag with the text "what would SBF do?". 

These were not an official EAG thing — they were printed by an individual attendee.

To my knowledge, never before were flyers depicting individual EAs at EAG distributed. (Also, such behavior seems generally unusual to me, like, imagine going to a conference and seeing hundreds of flyers and stickers all depicting one guy. Doesn't that seem a tad culty?)

Yeah, it was super weird.

Comment by matthew.vandermerwe on Someone should write a detailed history of effective altruism · 2023-01-16T10:00:44.092Z · EA · GW

See also: Thomas Young's history of abolitionism, Friedrich Engels' history of Marxism.

Comment by matthew.vandermerwe on Why did CEA buy Wytham Abbey? · 2022-12-07T11:33:23.417Z · EA · GW

This break-even analysis would be more appropriate if the £15m had been ~burned, rather than invested in an asset which can be sold.

If I buy a house for £100k cash and it saves me £10k/year in rent (net costs), then after 10 years I've broken even in the sense of [cash out]=[cash in], but I also now have an asset worth £100k (+10y price change), so I'm doing much better than 'even'.
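Spelling out the arithmetic (a rough restatement of the same toy numbers): after 10 years,

$$
\underbrace{-£100\text{k}}_{\text{purchase}} + \underbrace{10 \times £10\text{k}}_{\text{rent saved}} = £0,
$$

so the cash flows have broken even, but the net position also includes the house itself, leaving me roughly £100k (plus or minus 10 years of price changes) better off than if I had kept renting.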

Comment by matthew.vandermerwe on Why did CEA buy Wytham Abbey? · 2022-12-06T18:40:30.667Z · EA · GW

Agreed. And from the perspective of the EA portfolio, the investment has some diversification benefits. YTD, Oxford property prices are up 8%, whereas the rest of the EA portfolio (Meta/Asana/crypto) has dropped >50%.

Comment by matthew.vandermerwe on Will MacAskill's role in connecting SBF to Elon Musk for a potential Twitter deal · 2022-11-13T15:16:03.835Z · EA · GW

More and more media outlets are reporting [...]

I think the use of the present tense here is a bit misleading, since almost all of these articles are from 5 or 6 weeks ago.

Comment by matthew.vandermerwe on Roodman's Thoughts on Biological Anchors · 2022-10-31T10:37:36.033Z · EA · GW

I'd love to see the Guesstimate model linked in the report, but the link doesn't work for me.

Comment by matthew.vandermerwe on Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future · 2022-09-16T09:25:14.130Z · EA · GW

Hi Haydn — the paper is about eruptions of magnitude 7 or greater, which includes magnitude 8. The periodicity figure I quote for magnitude 8 is taken directly from the paper. 

Comment by matthew.vandermerwe on Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future · 2022-09-14T14:51:42.474Z · EA · GW

Hi Eli — this was my mistake; thanks for flagging. We'll correct the post.

Comment by matthew.vandermerwe on Existential risk pessimism and the time of perils · 2022-09-05T08:55:54.087Z · EA · GW

Crossposting Carl Shulman's comment on a recent post 'The discount rate is not zero', which is relevant here:

It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:

  1. Riches and technology make us comprehensively immune to natural disasters.
  2. Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats.
  3. Advanced tech makes neutral parties immune to the effects of nuclear winter.
  4. Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
  5. Space colonization creates robustness against local disruption.
  6. Aligned AI blocks threats from misaligned AI (and many other things).
  7. Advanced technology enables stable policies (e.g. the same AI police systems enforce treaties banning WMD war for billions of years), and the world is likely to wind up in some stable situation (bouncing around until it does).

If we're more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.

Your argument depends on 99%+++ credence that such safe stable states won't be attained, which is doubtful for 50% credence, and quite implausible at that level. A classic paper by the climate economist Martin Weitzman shows that the average discount rate over long periods is set by the lowest plausible rate (as the possibilities of high rates drop out after a short period and you get a constant factor penalty for the probability of low discount rates, not exponential decay).
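To illustrate Weitzman's point, here is a minimal sketch (my addition, not part of Shulman's comment), assuming just two possible constant rates $r_{\text{low}} < r_{\text{high}}$ held with probabilities $p$ and $1-p$. The expected discount factor is

$$
\mathbb{E}\!\left[e^{-rt}\right] = p\,e^{-r_{\text{low}} t} + (1-p)\,e^{-r_{\text{high}} t},
$$

and for large $t$ the high-rate term becomes negligible, so the effective long-run rate $-\tfrac{1}{t}\ln \mathbb{E}\!\left[e^{-rt}\right]$ tends to $r_{\text{low}}$, with the probability $p$ entering only as a constant factor rather than as exponential decay.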

Comment by matthew.vandermerwe on Existential risk pessimism and the time of perils · 2022-08-15T14:28:44.744Z · EA · GW

I doubt those coming up with the figures you cite believe per century risk is about 20% on average

Indeed! In The Precipice, Ord estimates a 50% chance that humanity never suffers an existential catastrophe (p.169).

Comment by matthew.vandermerwe on Against longtermism · 2022-08-11T10:16:15.747Z · EA · GW

Nuclear war similarly can be justified without longtermism, which we know because this has been the case for many decades already

Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction — from the Russell-Einstein manifesto to grassroots movements like Women Strike for Peace, with the slogan "End the Arms Race not the Human Race".

Comment by matthew.vandermerwe on On the Vulnerable World Hypothesis · 2022-08-01T18:09:34.926Z · EA · GW

Thanks for writing this; I like the forensic approach. I've long wished there were more discussion of the VWH paper, so it's been great to see your post and Maxwell Tabarrok's in recent weeks.

Not an objection to your argument, but a minor quibble with your reconstructed Bostrom argument:

P4: Ubiquitous real-time worldwide surveillance is the best way to decrease the risk of global catastrophes

I think it's worth noting that the paper's conclusion is that both ubiquitous surveillance and effective global governance are required for avoiding existential catastrophe,[1] even if the post only discusses one of them.

[Disclaimer: I work for Nick Bostrom; these are my personal views.]

  1. ^

    from conclusion: "We traced the root cause of our civilizational exposure to two structural properties of the contemporary world order: on the one hand, the lack of preventive policing capacity to block, with extremely high reliability, individuals or small groups from carrying out actions that are highly illegal; and, on the other hand, the lack of global governance capacity to reliably solve the gravest international coordination problems even when vital national interests by default incentivize states to defect. General stabilization against potential civilizational vulnerabilities [...] would require that both of these governance gaps be eliminated."

Comment by matthew.vandermerwe on Future Matters #3: digital sentience, AGI ruin, and forecasting track records · 2022-07-05T13:43:18.376Z · EA · GW

Hi Zach, thank you for your comment. I'll field this one, as I wrote both of the summaries.

This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.

I'm comfortable with this suggestion. Bostrom's comment was made (i.e. uploaded to nickbostrom.com) the day after the Lemoine story broke. (Source: I manage the website.)

"[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment"

I chose this phrasing on the basis of the second sentence of the post: "MIRI didn't solve AGI alignment and at least knows that it didn't." Thanks for pointing me to Bensinger's comment, which I hadn't seen. I remain confused about how much of the post should be interpreted literally vs. tongue-in-cheek. I will add the following note to the summary:

(Edit: Rob Bensinger clarifies in the comments that "MIRI has [not] decided to give up on reducing existential risk from AI.")

Thanks!

Comment by matthew.vandermerwe on Cool Offices ? · 2022-05-21T19:36:29.502Z · EA · GW

Cool Offices ?

Good/reliable AC and ventilation are very important IMO. 

Comment by matthew.vandermerwe on [deleted post] 2022-05-21T19:27:15.489Z

I'm trying to understand the simulation argument.

You might enjoy Joe Carlsmith's essay, Simulation Arguments (LW).

Comment by matthew.vandermerwe on How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved? · 2022-01-18T13:41:37.750Z · EA · GW

This Vox article by Dylan Matthews cites these two studies, which try to get at this question.

EDIT to add: here's a more recent analysis, looking at mortality impact up to 2018 — Kates et al. (2021)

Comment by matthew.vandermerwe on [deleted post] 2021-09-15T19:30:03.489Z

btw — there's a short section on this in my old Existential Risk Wikipedia draft. Maybe some useful stuff to incorporate into this.

Comment by matthew.vandermerwe on [deleted post] 2021-09-15T19:23:42.665Z

Weak disagree. FWIW, there are lots of good cites in the endnotes to chapter 2 of The Precipice (pp. 305–12), and in Moynihan's X-Risk.

Comment by matthew.vandermerwe on What are things everyone here should (maybe) read? · 2021-07-18T12:02:02.629Z · EA · GW

I considered writing a post about the same biography you mentioned for the forum.

I would love to read such a post! 

It's very humbling to see how much he already thought of, which we now call EA. 

Agreed — I think the Ramsey/Keynes-era Apostles would make an interesting case study of a 'proto-EA' community. 

Comment by matthew.vandermerwe on Should EA Buy Distribution Rights for Foundational Books? · 2021-06-22T10:59:31.619Z · EA · GW

Another historical precedent

In 1820, James Mill sought permission for a plan to print and circulate 1,000 copies of his Essay on Government, originally published in the Supplement to Napier's Encyclopaedia Britannica:

I have yet to speak to you about an application which has been made to me as to the article on Government, from certain persons, who think it calculated to disseminate very useful notions, and wish to give a stimulus to the circulation of them. Their proposal is, to print (not for sale, but gratis distribution) a thousand copies. I have refused my consent till I should learn from you, whether this would be considered an impropriety with respect to the Supplement. To me it appears the reverse, as the distribution would in some degree operate as an advertisement.

Ernest Barker suggests it was quite successful:

Mill's article was thus given a wider circulation than the Supplement to the Encyclopedia would have afforded by itself ... By 1824 ... there had appeared what was possibly a second edition ... Mill, in a letter of August 1825, speaks of the second reprint 'being all gone, and great demand remaining.' (He also mentions ... that his essays 'are the text-books of the young men of the Union at Cambridge'.)

Comment by matthew.vandermerwe on [deleted post] 2021-06-04T14:29:41.953Z

FWIW, and setting aside stylistic considerations for the Wiki, I dislike 'x-risk' as a term and avoid using it myself even in informal discussions. 

  • it's ambiguous between 'extinction' and 'existential', which is already a common confusion
  • it seems unserious and somewhat flippant (vaguely comic book/sci-fi vibes)
  • the 'x' prefix can denote the edgy or the sexual (e.g. X Games; x-rated; Generation X?)
  • 'x' also often denotes an unknown value (e.g. in 'Cause X' — another abbreviation I dislike; or indeed Stefan's comment earlier in this thread)
Comment by matthew.vandermerwe on [deleted post] 2021-06-04T14:06:47.271Z

I prefer this option to all others mentioned here. 

Comment by matthew.vandermerwe on What are things everyone here should (maybe) read? · 2021-05-20T10:34:14.195Z · EA · GW

I also kind of think everyone should read at least one biography, in particular of people who have become scientifically, intellectually, culturally, or politically influential.

Some biographies I've enjoyed in this vein:

  • Frank Ramsey: A Sheer Excess of Powers
  • The Price of Peace: Money, Democracy, and the Life of John Maynard Keynes
  • Karl Marx: a Nineteenth-Century Life
Comment by matthew.vandermerwe on Some learnings I had from forecasting in 2020 · 2020-10-08T17:34:30.216Z · EA · GW

With regard to the AGI timeline, it's important to note that Metaculus' resolution criteria are quite different from a 'standard' interpretation of what would constitute AGI[1] (or human-level AI[2], superintelligence[3], transformative AI, etc.). It's also unclear what proportion of forecasters have read this fine print (interested to hear others' views on this), which further complicates interpretation.

For these purposes we will thus define "an artificial general intelligence" as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.

  • Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
  • Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or comparable data set for which human performance is at 90+%
  • Be able to score 75th percentile (as compared to the corresponding year's human students; this was a score of 600 in 2016) on all the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages and having less than ten SAT exams as part of the training data. (Training on other corpuses of math problems is fair game as long as they are arguably distinct from SAT exams.)
  • Be able to learn the classic Atari game "Montezuma's revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see closely-related question.)

By "unified" we mean that the system is integrated enough that it can, for example, explain its reasoning on an SAT problem or Winograd schema question, or verbally report its progress and identify objects during videogame play. (This is not really meant to be an additional capability of "introspection" so much as a provision that the system not simply be cobbled together as a set of sub-systems specialized to tasks like the above, but rather a single system applicable to many problems.)


  1. OpenAI Charter ↩︎

  2. expert survey ↩︎

  3. Bostrom ↩︎

Comment by matthew.vandermerwe on Has anyone gone into the 'High-Impact PA' path? · 2020-10-03T14:03:45.290Z · EA · GW

I work at FHI, as RA and project manager for Toby Ord/The Precipice (2018–20), and more recently as RA to Nick Bostrom (2020–). Prior to this, I spent 2 years in finance, where my role was effectively that of an RA (researching cement companies, rather than existential risk). All of the below is in reference to my time working with Toby.

Let me know if a longer post on being an RA would be useful, as this might motivate me to write it.

Impact

I think a lot of the impact can be captured in terms of being a multiplier[1] on their time, as discussed by Caroline and Tanya. This can be sub-divided into two (fuzzy, non-exhaustive) pathways:

  • Decision-making — helping them make better decisions, ranging from small (e.g. should they appear on podcast X) to big (e.g. should they write a book)
  • Execution — helping them better execute their plans

When I joined Toby on The Precipice, a large proportion of his impact was ‘locked in’ insofar as he was definitely writing the book. There were some important decisions, but I expect more of my impact was via execution, which influenced (1) the quality of the book itself; (2) the likelihood of its being published on schedule; (3) the promotion of the book and its ideas; (4) the proportion of Toby’s productive time it took up, i.e. by freeing up time for him to do non-book things. Over the course of my role, I think I (very roughly) added 5–25% to the book’s impact, and freed up 10–33% of Toby's time.

Career decisions

Before joining Toby, I was planning to join the first cohort of FHI’s Research Scholars Program and pursue my own independent projects for 2 years. At the time, the most compelling reason for choosing the RA role was:

  • Toby’s book will have large impact X, and I can expect to multiply this by ~10%, for impact of ~0.1X
  • If I ‘do my own thing’, it would take me much longer than 2 years to find and execute a project with at least 0.1X impact (relative to the canonical book on existential risk…)

One thing I didn’t foresee is how valuable the role would be for my development as a researcher. While I’ve had less opportunity to choose my own research projects, publish papers, etc., I think this has been substantially outweighed by the learning benefits of working so closely with a top-tier researcher on important projects. Overall, I expect that working with Toby ‘sped up’ my development by a few years relative to doing independent research of some sort.

One noteworthy feature of being a ‘high-impact RA/PA/etc’ is that while these jobs are relatively highly regarded in EA circles, they can sound a bit baffling to anyone else. As such, I think I’ve built up some pretty EA-specific career capital.

Some other names

Here's an incomplete list of people who have done (or are doing) this line of work, other than Caroline and myself:

Nick Bostrom — Kyle Scott, Tanya Singh, Andrew Snyder-Beattie

Toby Ord — Andrew Snyder-Beattie, Joe Carlsmith

Will MacAskill — Pablo Stafforini, Laura Pomarius, Luisa Rodriguez, Frankie Andersen-Wood, Aron Vallinder


  1. Some RA trivia — Richard Kahn, the economist normally credited with the idea of a (fiscal) multiplier, was a long-time RA to John Maynard Keynes, who wrote of him: “He is a marvelous critic and suggester and improver … There never was anyone in the history of the world to whom it was so helpful to submit one’s stuff.” ↩︎

Comment by matthew.vandermerwe on Some thoughts on EA outreach to high schoolers · 2020-09-16T12:07:37.812Z · EA · GW

If there were more orgs doing this, there’d be the risk of abuse working with minors if in-person.

I think this deserves more than a brief mention. One of the two high school programs mentioned (ESPR) failed to safeguard students from someone later credibly accused of serious abuse, as detailed in CFAR's write-up:

Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise ... We do not believe any students were harmed. However, Brent did invite a student (a minor) to leave camp early to join him at Burning Man. Beforehand, Brent had persuaded a CFAR staff member to ask the camp director for permission for Brent to invite the student. Multiple other staff members stepped in to prevent this, by which time the student had decided against attending anyway.

This is a terrible track record for this sort of outreach effort. I think it provides a strong reason against pursuing it further without a high degree of assurance that the appropriate lessons have been learned — something which doesn't seem to have been addressed in the post or comments.

Comment by matthew.vandermerwe on Max_Daniel's Shortform · 2020-08-07T08:48:56.550Z · EA · GW

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing that the inevitability (as he saw it) of extinction undermines the possibility of enduring achievement, and that we must therefore either ground life’s meaning in something else, or accept nihilism.

At a stretch, maybe you could run your argument together with Russell's — if we ground life’s meaning in achievement, then avoiding nihilism requires that humanity neither go extinct nor achieve total existential security.

Comment by matthew.vandermerwe on The Importance of Unknown Existential Risks · 2020-07-28T10:51:21.275Z · EA · GW

Thanks — I agree with this, and should have made clearer that I didn't see my comment as undermining the thrust of Michael's argument, which I find quite convincing.

Comment by matthew.vandermerwe on The Importance of Unknown Existential Risks · 2020-07-24T11:16:55.280Z · EA · GW

Great post!

But based on Rowe & Beard's survey (as well as Michael Aird's database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.

I don't think this is true. The Doomsday Argument literature (Carter, Leslie, Gott etc.) mostly considers the probability of extinction independently of any specific risks, so these authors' estimates implicitly involve an assessment of unknown risks. Lots of this writing was before there were well-developed cases for specific risks. Indeed, the Doomsday literature seems to have inspired Leslie, and then Bostrom, to start seriously considering specific risks.

Leslie explicitly considers unknown risks (p.146, End of the World):

Finally, we may well run a severe risk from something-we-know-not-what: something of which we can say only that it would come as a nasty surprise like the Antarctic ozone hole and that, again like the ozone hole, it would be a consequence of technological advances.

As does Bostrom (2002):

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

Comment by matthew.vandermerwe on How Much Does New Research Inform Us About Existential Climate Risk? · 2020-07-23T10:17:46.866Z · EA · GW

Very useful comment — thanks.

Overall, I don't view this as especially good news ...

How do these tail values compare with your previous best guess?

Comment by matthew.vandermerwe on Objections to Value-Alignment between Effective Altruists · 2020-07-17T12:30:25.410Z · EA · GW

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They strike me as normal, nice things to say in the context of an AMA, and indicative of admiration and warmth, but not reverence.

Comment by matthew.vandermerwe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T05:39:47.611Z · EA · GW

Hayek's Road to Serfdom, and twentieth-century neoliberalism more broadly, owes a lot of its success to this sort of promotion. The book was published in 1944 and was initially quite successful, but print runs were limited by wartime paper rationing. In 1945, the US magazine Reader's Digest created a 20-page condensed version and sold 1 million copies very cheaply (5¢ per copy). Anthony Fisher, who founded the IEA, came across Hayek's ideas through this edition.

Source: https://press.uchicago.edu/Misc/Chicago/320553.html

Comment by matthew.vandermerwe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T07:42:13.927Z · EA · GW

Great post — this is something EA should definitely be thinking more about as the canon of EA books grows and matures. Peter Singer has done it already, buying back the rights for TLYCS and distributing a free digital version for its 10th anniversary.

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple of minutes to send someone a book via Amazon — and seems scalable. This should be easy enough for a donor or EA org to try.

I also imagine that for most publishers, profits are concentrated after release

I looked into this recently, using Goodreads data as a proxy for sales. My takeaway was that sales of these books have been surprisingly linear over time, rather than being concentrated early on: Superintelligence; Doing Good Better; TLYCS

Comment by matthew.vandermerwe on X-risks to all life v. to humans · 2020-06-04T07:28:12.196Z · EA · GW

Welcome to the forum!

Further development of a mathematical model to realise how important timelines for re-evolution are.

Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.

So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:

  • Relative AI risk increases, since risk from most other sources is discounted a bit.
  • Absolute AI risk increases, since it pushes towards shorter AGI timelines.

*Shulman & Bostrom 2012 discuss this type of argument, and some complexities in adjusting for observation selection effects.

Comment by matthew.vandermerwe on How Much Leverage Should Altruists Use? · 2020-05-18T11:17:10.550Z · EA · GW

[disclosure: not an economist or investment professional]

emerging market bonds ... aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds

This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well established (cf. the 'Taper Tantrum' of 2013).

See, e.g., Effects of US Quantitative Easing on Emerging Market Economies:

"We find that an expansionary US QE shock has significant effects on financial variables in EMEs. It leads to an exchange rate appreciation, a reduction in long-term bond yields, a stock market boom, and an increase in capital inflows to these countries."
Comment by matthew.vandermerwe on EA Updates for April 2020 · 2020-05-02T06:33:19.348Z · EA · GW

My top picks for April media relating to The Precipice:

Comment by matthew.vandermerwe on How hot will it get? · 2020-04-25T07:41:08.849Z · EA · GW

I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about either (i) extreme growth scenarios; or (ii) the fossil fuel endgame; and definitely not (iii) AI takeoff scenarios.
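For readers unfamiliar with it, the Kaya identity (as standardly stated; this gloss is my addition) decomposes CO2 emissions as

$$
F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E},
$$

where $F$ is CO2 emissions, $P$ population, $G$ GDP, and $E$ primary energy consumption, i.e. emissions = population × GDP per capita × energy intensity of GDP × carbon intensity of energy.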

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. The IPCC cites Rogner 2014 for the figure. In personal communication, one scientist described Rogner's previous (1997) estimate as:

a mishmash of unreliable information, including self-reported questionnaires by individual governments

It would be great to better understand these estimates — I'm surprised there isn't more work on this. In particular, you'd think there would be geologically based models of how much carbon there is that aren't so strongly grounded in known reserves + current/near-term technological capabilities.

Comment by matthew.vandermerwe on How hot will it get? · 2020-04-24T14:39:14.564Z · EA · GW

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).
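For anyone checking the conversion between those two figures (my arithmetic): 1 PtC is $10^{15}$ tonnes of carbon, and each tonne of carbon corresponds to $44/12 \approx 3.67$ tonnes of CO2, so

$$
13.6\ \text{PtC} \times \tfrac{44}{12} \approx 50\ \text{PtCO}_2 = 5 \times 10^{16}\ \text{tonnes CO}_2.
$$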

Awesome post!

Comment by matthew.vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-30T11:27:18.209Z · EA · GW

Yes, I gave authorization!

Comment by matthew.vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-12T16:35:40.006Z · EA · GW

The audiobook will not include the endnotes. We really couldn't see any good way of doing this, unfortunately.

Toby is right that there's a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they'll be hyperlinked).

Comment by matthew.vandermerwe on Toby Ord’s ‘The Precipice’ is published! · 2020-03-08T15:28:38.046Z · EA · GW

I will investigate this and get back to you!

Comment by matthew.vandermerwe on COVID-19 brief for friends and family · 2020-02-29T14:50:49.738Z · EA · GW

Thanks for writing this!

In the early stages, it will be doubling every week approximately

I’d be interested in pointers on how to interpret all the evidence on this:

  • Until Jan 4: Li et al. find 7.4 days.
  • Jan 16–Jan 30: Cheng & Shan find ~1.8 days in China, before quarantine measures start kicking in.
  • Jan 20–Feb 6: Muniz-Rodriguez et al. find 2.5 days for Hubei [95%: 2.4–2.7], and other provinces ranging from 1.5 to 3.0 days (with much wider error bars).
  • Eyeballing the most recent charts:
  • I’ve also seen it suggested that the outside-China growth might be inflated due to ‘catch up’ from the slow roll-out of testing.

Altogether, what is our best guess, and what evidence should we be looking out for?
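As a rough aid to interpreting the figures above (my own conversion, not from the cited papers): a doubling time of $T_d$ days corresponds to a daily growth rate of $2^{1/T_d} - 1$, so

$$
T_d = 7.4 \Rightarrow \approx 10\%/\text{day}, \qquad T_d = 2.5 \Rightarrow \approx 32\%/\text{day}, \qquad T_d = 1.8 \Rightarrow \approx 47\%/\text{day}.
$$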

Comment by matthew.vandermerwe on Should Longtermists Mostly Think About Animals? · 2020-02-04T10:19:51.480Z · EA · GW
In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.

Eliminating human life would lock in a very narrow set of futures for animals: something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?

As far as we know, humans are the only beings capable of moral reasoning, systematically pushing the world toward more valuable states, embarking on multi-generational plans, and so on. This gives very strong reasons for thinking the extinction of humanity would be of profound significance to the value of the future for non-humans.

Comment by matthew.vandermerwe on UK donor-advised funds · 2020-01-22T16:21:15.798Z · EA · GW

Yeah, plus the opportunity cost of having it in cash. Looks like a non-starter.

Comment by matthew.vandermerwe on UK donor-advised funds · 2020-01-22T16:18:55.192Z · EA · GW

Yeah, and how much you value the flexibility depends on what you expect to donate to.

EA Funds already allows you to donate to small/speculative projects, non-UK charities, etc., via a UK registered charity, so 'only ever donating to UK charities' is less restrictive than it sounds.

Comment by matthew.vandermerwe on UK donor-advised funds · 2020-01-22T14:41:22.141Z · EA · GW

Yes, that's what I meant; I'll edit for clarity.

Comment by matthew.vandermerwe on UK donor-advised funds · 2020-01-22T14:22:32.513Z · EA · GW

I spent an hour or so looking into this recently, and couldn't find any DAFs that were suitable for small donors. It's possible I missed one, though.

CAF offers a 'giving account', which is effectively a low-interest savings account. You can get immediate tax relief for deposits, but you forgo returns and can only donate to UK registered charities, so it seems like a weak option: https://www.cafonline.org/docs/default-source/personal-giving/2573h_charityacc_web-app_pdf_220519.pdf

FWIW, my tentative conclusion was that the best option for saving small-ish sums for giving later is a straightforward ISA. You don't get any immediate tax relief from paying into the ISA (you'll get this when you donate the money to charity), but you do get (tax-free) growth, and you retain flexibility in when and how the money is donated. Returns are important if you think you might wait a long time before giving; flexibility seems important if one's motivation for giving later is the possibility of some high-leverage opportunities in the future. The main downside is that you can't easily bind yourself into future donations this way, but I thought this was outweighed by the other factors in my own case.

Comment by matthew.vandermerwe on [deleted post] 2020-01-14T12:27:53.484Z

Sorry, I should have made clear that I don't think this is a sensible strategy, and that people should approach party membership in good faith (for roughly the reasons Greg outlines above). Thanks for prompting me to clarify this.

My comment was just to point out that timing is an important factor in leverage-per-member.

Comment by matthew.vandermerwe on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T13:44:24.161Z · EA · GW
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (eg premptive nuclear strikes).

I find this surprising. Can you point to examples?

Comment by matthew.vandermerwe on [deleted post] 2020-01-13T11:20:59.558Z

This seems unlikely to be a useful tie-break in most cases, provided one can switch membership. UK party leadership elections are rarely contemporaneous[1] (unlike in the US), so the likelihood of a given party member being able to realise their leverage will generally differ by more than a factor of 4.5x at any given time.

[1] Conservatives: 1975, 1990, 1997, 2001, 2005, 2019

Labour: 1980, 1983, 1992, 1994, 2010, 2015, 2016, 2020