Posts

UK to host human challenge trials for Covid-19 vaccines 2020-09-23T14:45:01.278Z · score: 40 (15 votes)
Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z · score: 22 (12 votes)
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z · score: 29 (14 votes)
Good Done Right conference 2020-02-04T13:21:02.903Z · score: 42 (23 votes)
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z · score: 25 (10 votes)
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z · score: 32 (13 votes)
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z · score: 6 (1 votes)
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z · score: 27 (13 votes)
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z · score: 57 (20 votes)
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z · score: 38 (14 votes)
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z · score: 20 (9 votes)
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z · score: 49 (20 votes)
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z · score: 16 (9 votes)
A bunch of new GPI papers 2019-09-25T13:32:37.768Z · score: 102 (39 votes)
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z · score: 54 (18 votes)
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)

Comments

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T18:50:48.978Z · score: 16 (6 votes) · EA · GW

I agree that the sentence Linch quoted sounds like a "bravery debate" opening, but that's not how I perceive it in the broader context. I don't think the author is presenting themselves as an underdog, intentionally or otherwise. Rather, they are making that remark as part of their overall attempt to indicate that they are aware that they are raising a sensitive issue and that they are doing so in a collaborative spirit and with admittedly limited information. This strikes me as importantly different from the prototypical bravery debate, where the primary effect is not to foster an atmosphere of open dialogue but to gain sympathy for a position.

I am tentatively in agreement with you that "clarification of intent" can be done without "bravery talk", by which I understand any mention that the view one is advancing is unpopular. But I also think that such talk doesn't always communicate that one is the underdog, and is therefore not inherently problematic. So, yes, the OP could have avoided that kind of language altogether, but given the broader context, I don't think the use of that language did any harm.

(I'm maybe 80% confident in what I say above, so if you disagree, feel free to push me.)

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T17:38:24.544Z · score: 6 (3 votes) · EA · GW

Thanks, you are right. I have amended the last sentence of my comment.

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T14:39:42.165Z · score: 44 (21 votes) · EA · GW

FWIW, I think that the qualification was very appropriate and I didn't see the author as intending to start a "bravery debate". Instead, the purpose appears to have been to emphasize that the concerns were raised in good faith and with limited information. Clarifications of this sort seem very relevant and useful, and quite unlike the phenomenon described in Scott's post.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-17T09:28:06.445Z · score: 6 (3 votes) · EA · GW

The link is broken; can you fix it?

In the meantime, a few random thoughts. First, the index fund analogy suggests a self-correcting mechanism. Players defer to the community only to the degree that they expect it to track the truth more reliably than their individual judgment, given their time and ability constraints. As the reliability of the community prediction changes, in response to changes in the degree to which individual players defer to it, so will these players' willingness to defer to the community.

Second, other things equal, I think it's a desirable property of a prediction platform that it makes it rational for players to sometimes defer to the community. This could be seen as embodying the important and neglected truth that in many areas of life one can generally do better by deferring to society's collective wisdom than by going with one's individual opinion. Furthermore, it requires considerable ability to determine when and to what degree one should defer to others in any given case. In fact, this metacognitive skill of knowing how much more (or less) reliable other opinions are relative to one's own seems like a core epistemic virtue, and one that can be assessed only if users are allowed to defer to others.

Finally, insofar as there are reasons for wanting players not to defer to the community, I think the appropriate response is to change the scoring function rather than to ask players to exercise self-restraint. As fellow forecaster Tom Adamczewski reminded me, the Metaculus Scoring System page describes one such possible change:

 It's easy to account for the average community prediction by adding a constant to each of these, e.g. S(player) → S(player) − S(community). This way a player would get precisely zero points if they just go along with the community average.

Perhaps Metaculus could have two separate leaderboards: in addition to the current ranking, it could also display a ranking of players with the community component subtracted. These two rankings could be seen as measuring the quality of a player's "credences" and "impressions", respectively.
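
To make the idea concrete, here is a minimal sketch of community-relative scoring, assuming a simple log scoring rule for binary questions. The function names are illustrative only; this is not Metaculus's actual scoring code.

from math import log

def log_score(p: float, outcome: bool) -> float:
    """Log score of a binary forecast p = P(yes), given the resolved outcome."""
    return log(p) if outcome else log(1 - p)

def relative_score(p_player: float, p_community: float, outcome: bool) -> float:
    """Player's log score minus the community's; copying the community yields exactly zero."""
    return log_score(p_player, outcome) - log_score(p_community, outcome)

# A player who beats the community on a question that resolves 'yes' scores positively:
print(relative_score(p_player=0.8, p_community=0.6, outcome=True))  # > 0
# A player who simply defers to the community scores zero:
print(relative_score(p_player=0.6, p_community=0.6, outcome=True))  # 0.0

The second leaderboard would then rank players by the sum of these community-relative scores, while the current leaderboard continues to reward accurate deference.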

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-17T00:07:54.293Z · score: 3 (2 votes) · EA · GW

My spacemacs config file is here. The main Keyboard Maestro macros I use are here. As noted, these macros were created back when I was beginning to use Emacs, so they don't make use of org capture or other native functionality (including Emacs's own internal macros, or the even more powerful elmacro package). I plan to review these files at some point, but not in the immediate future. Happy to answer questions if anything is unclear.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-16T23:54:15.827Z · score: 2 (1 votes) · EA · GW

Yes, indeed. I was about to suggest an edit to the transcript to make that clear. When I created the Keyboard Maestro script, I was still relatively unfamiliar with Org mode so I didn't make use of org capture. But that's the proper way to do it.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-16T23:07:33.436Z · score: 11 (5 votes) · EA · GW

It was a pleasure to discuss my approach to forecasting with Jungwon and Amanda. I'd be happy to clarify anything that I failed to explain properly during our conversation, or to answer any questions related to the implementation or reasoning behind my "system" (if one may call it that).

Comment by pablo_stafforini on Are social media algorithms an existential risk? · 2020-09-15T11:21:30.139Z · score: 19 (15 votes) · EA · GW

I haven't watched the documentary, but I'm antecedently skeptical of claims that social media constitute an existential risk in the sense in which EAs use that term. The brief summary provided by the Wikipedia article doesn't seem to support that characterization:

the film explores the rise of social media and the damage it has caused to society, focusing on its exploitation of its users for financial gain through surveillance capitalism and data mining, how its design is meant to nurture an addiction, its use in politics, its impact on mental health (including the mental health of adolescents and rising teen suicide rates), and its role in spreading conspiracy theories and aiding groups such as flat-earthers and white supremacists.

While many of these effects are terrible (and concern about them partly explains why I myself basically don't use social media), they do not appear to amount to threats of existential catastrophe. Maybe the claim is that the kind of surveillance made possible by social media and big tech firms more generally ("surveillance capitalism") has the potential to establish an unrecoverable global dystopia?

Are there other concrete mechanisms discussed by the documentary? 

Comment by pablo_stafforini on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T12:47:41.831Z · score: 29 (16 votes) · EA · GW

To what degree are the differences between longtermists who prioritize s-risks and longtermists who prioritize x-risks driven by moral disagreements about the relative importance of suffering versus happiness, rather than by factual disagreements about the relative magnitude of s-risks versus x-risks?

Comment by pablo_stafforini on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T12:40:22.745Z · score: 13 (7 votes) · EA · GW

The universe is vast, so it seems there is a lot of room for variation even within the subset of risks involving astronomical quantities of suffering. How much, in your opinion, do s-risks vary in severity? Relatedly, what are your grounds for singling out s-risks as the object of concern, rather than those risks involving the most suffering?

Comment by pablo_stafforini on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-05T10:39:39.410Z · score: 5 (3 votes) · EA · GW

Just to say that I would put very little weight on my responses in that post, many of which are highly unstable, and some of which I no longer endorse (including the 1% and 99% estimates quoted above). I hope to revise it soon, adding measures of resilience as Greg Lewis suggests here.

Comment by pablo_stafforini on Database of existential risk estimates · 2020-09-02T12:50:44.282Z · score: 6 (3 votes) · EA · GW

Yes, sorry, I should have made that clearer—I posted here because I couldn't think of a more appropriate thread, and I didn't want to create a separate post (though maybe I could have used the "shortform" feature).

AI Impacts has a bunch of posts on this topic. This one discusses what appears to be the largest dataset of such predictions in existence. This one lists several analyses of time to human-level AI. And this one provides an overview of all their writings on AI timelines.

Comment by pablo_stafforini on Database of existential risk estimates · 2020-09-01T12:44:07.241Z · score: 8 (4 votes) · EA · GW

An important new aggregate forecast, based on inputs from LW users:

  • Aggregated median date: January 26, 2047
  • Aggregated most likely date: November 2, 2033
  • Earliest median date of any forecast: June 25, 2030
  • Latest median date of any forecast: After 2100

Comment by pablo_stafforini on More empirical data on 'value drift' · 2020-08-29T13:08:26.933Z · score: 10 (6 votes) · EA · GW

Thanks for this useful summary!

Note that section 4 reiterates Peter Hurford's analysis in a post from last year.

There are probably more older samples that people could dig out and follow up on themselves, such as the analysis of CEA’s early team.

One possibility is to take a look at the top contributors to Felicifia, an early EA/utilitarian forum, and note how many are still around. Louis Francini kindly restored the original site, which had been down for a long time, earlier this year, so this can be done very easily.

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-21T02:01:11.390Z · score: 2 (1 votes) · EA · GW

If insufficient efforts are made to reduce short-term x-risk, there may be no future generations left to spend your investment.

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T19:27:45.245Z · score: 4 (2 votes) · EA · GW

I do agree that investing is a promising way to punt to the future. I don't have strong views on whether at the current margin one empowers future generations more by trying to reduce risks that threaten their existence or their ability to reduce x-risk, or by accumulating financial resources and other types of capacity that they can either deploy to reduce x-risk or continue to accumulate. What makes you favor the capacity-building approach over the short-term x-risk reduction approach?

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T11:28:30.826Z · score: 5 (3 votes) · EA · GW

A plausible longtermist argument for prioritizing short-term risk is the "punting to the future" approach to dealing with radical cluelessness. On this approach, we should try to reduce only those risks which only the present generation can influence, and let future generations take care of the remaining risks. (In fact, the optimal strategy is probably one where each generation pursues this approach, at least for sufficiently many generations.)

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-19T12:22:38.662Z · score: 2 (1 votes) · EA · GW

Update: In case it is of use to other users of Vimium, I've found a workaround for removing persistent previews. Add this to the 'custom key mappings' field under 'Vimium options':

map x LinkHints.activateMode action=hover

where 'x' is the key binding for this action (I use 'q'). Just press that key and select the link corresponding to the preview you want to disable. (The behavior is a bit erratic; sometimes you need to do it twice.)
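
For instance, with 'q' as the binding, the full entry reads:

map q LinkHints.activateMode action=hover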

Comment by pablo_stafforini on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-08T21:29:25.110Z · score: 7 (4 votes) · EA · GW

I agree this is useful and I often use it when forecasting. It's important to emphasize that this is only a prior, though, since Gott appears to treat it as an all-things-considered posterior.

I have used this method with great success to estimate, among other things, the probability that friends will break up with their romantic partners.

William Poundstone uses this example, too, to illustrate the "Copernican principle" in his popular book on the doomsday argument.
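
For concreteness, the standard delta-t formulation of the argument (as I understand it) says that if a process has already lasted $T_{\text{past}}$, then, as a prior,

$$P\left(T_{\text{future}} > k \, T_{\text{past}}\right) = \frac{1}{1+k}.$$

So a couple that has been together for two years gets a prior probability of 1/2 of lasting at least another two years, and 1/3 of lasting at least another four; evidence about the particular relationship then updates this prior.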

Comment by pablo_stafforini on EA reading list: utilitarianism and consciousness · 2020-08-08T12:28:14.576Z · score: 16 (7 votes) · EA · GW

I would add the extended episode of the 80,000 Hours podcast with David Chalmers. To my knowledge, some of the views he expresses there—e.g. that phenomenal consciousness is morally valuable even if not hedonically valenced—have not been explicitly discussed in either the EA or the philosophical literature.

Many utilitarian EAs have independently gravitated towards the view that the intrinsic value of pleasure and pain can be known by introspection or "direct acquaintance". Surprisingly, as far as I know no statement of this view exists in the EA literature, though some may be found in the philosophical literature (including publications by philosophers sympathetic to EA):

Comment by pablo_stafforini on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T12:19:50.685Z · score: 18 (10 votes) · EA · GW

Douglas Hubbard mentions a few rules of the sort you seem to be describing in his book How to Measure Anything. For example, his "Rule of Five" states that «There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.» One of the central themes in that book is in fact that you "have more data than you think", and that these simple rules can take you surprisingly far.
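
The 93.75% figure follows from a simple calculation: the sample range misses the median only if all five draws land on the same side of it, which happens with probability 2 × (1/2)^5 = 1/16 = 6.25%. A quick simulation (my own sketch, not from Hubbard's book) confirms this:

import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

# Five independent draws per trial from a standard normal "population" (true median = 0).
samples = rng.normal(size=(trials, 5))
bracketed = (samples.min(axis=1) <= 0) & (samples.max(axis=1) >= 0)

print(bracketed.mean())       # ~0.9375
print(1 - 2 * (1 / 2) ** 5)   # 0.9375 exactly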

Comment by pablo_stafforini on Open and Welcome Thread: August 2020 · 2020-08-03T12:32:01.740Z · score: 2 (1 votes) · EA · GW

I'm also not a native English speaker; to my ears, "unusual causes" feels similar in connotation to "non-standard causes". What about simply "other causes"?

Comment by pablo_stafforini on EA reading list: cluelessness and epistemic modesty · 2020-08-03T12:01:10.473Z · score: 10 (3 votes) · EA · GW

On cluelessness, I would add

On epistemic modesty:

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-02T11:26:41.641Z · score: 6 (3 votes) · EA · GW

Makes sense. I checked Gwern's site and the previews are also triggered, confirming that this is a Vimium bug/feature.

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-01T14:05:17.032Z · score: 3 (2 votes) · EA · GW

Thank you for implementing these improvements! I'm particularly pleased to see that I can now add hyperlinks with a shortcut.

A request: I'm not sure if it's related, but around the time these changes were made, whenever I try to open an EA Forum link in a new tab using Vimium (keystroke: capital F), a preview of that page is shown, and there's no way to turn it off without using the mouse. I think the preview should be shown only when one hovers over the relevant link (to inform the user whether the link is in fact worth visiting), whereas in this case it is shown after one has already selected the link to open, so it serves no useful function. It would be great if this could be fixed.

Comment by pablo_stafforini on List of EA-related email newsletters · 2020-08-01T13:50:51.482Z · score: 4 (2 votes) · EA · GW

New newsletter: Effective Altruism Asia.

(H/T David Nash)

Comment by pablo_stafforini on Delegate a forecast · 2020-07-28T12:05:23.112Z · score: 3 (2 votes) · EA · GW

Thank you, that was informative. I don't think you missed anything, though I haven't myself thought about this question much—that is in part why I was curious to see someone else try to answer it.

I think genetic selection and/or editing has the potential to be transformative, and perhaps even to result in greater-than-human intelligence. Despite this, it's comparatively neglected, both within EA and society at large. So having more explicit forecasts in this area seems pretty valuable.

Comment by pablo_stafforini on Delegate a forecast · 2020-07-27T00:56:01.499Z · score: 2 (1 votes) · EA · GW

I misread the post as asking for a personal forecast. Now that I realize it's possible to ask questions of any type, I would much rather delegate a forecast on an important topic, such as:

How many standard deviations away from the mean will the 1000th human born from stem-cell derived gametes score in a test of cognitive ability taken at the age of 18?

Comment by pablo_stafforini on Delegate a forecast · 2020-07-26T14:34:06.795Z · score: 4 (3 votes) · EA · GW

How many points will I have on Metaculus at the end of 2021?

(Question resolves according to the number listed here on 2021-12-31 23:59:59 GMT.)

Comment by pablo_stafforini on The Importance of Unknown Existential Risks · 2020-07-24T19:27:30.242Z · score: 16 (7 votes) · EA · GW

Note that it's not just the Doomsday Argument that may give one reason for revising one's x-risk estimates beyond what is suggested by object-level analyses of specific risks. See this post by Robin Hanson for an enumeration of the relevant types of arguments.

I am puzzled that these arguments do not seem to be influencing many of the most cited x-risk estimates (e.g. Toby's). Is this because these arguments are thought to be clearly flawed? Or is it because people feel they just don't know how to think about them, and so they simply ignore them? I would like to see more "reasoning transparency" about these issues.

It's also worth noting that some of these "speculative" arguments would provide reason for revising not only overall x-risks estimates, but also estimates of specific risks. For example, as Katja Grace and Greg Lewis have noted, a misaligned intelligence explosion cannot be the Great Filter. Accordingly, insofar as the Great Filter influences one's x-risk estimates, one should shift probability mass away from AI risk and toward risks that are plausible Great Filters.

Comment by pablo_stafforini on The Importance of Unknown Existential Risks · 2020-07-24T13:22:27.895Z · score: 9 (6 votes) · EA · GW

I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.)

Still, Michael's argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it's important to keep them separate. (I don't think we disagree; I just thought this was worth highlighting.)

Comment by pablo_stafforini on Database of existential risk estimates · 2020-07-21T15:13:48.202Z · score: 6 (3 votes) · EA · GW

Not in your database, I think:

“Personally, I now think we humans will be wiped out this century,” Frank Tipler told me. He may be the most pessimistic of Bayesian doomsayers, followed by Willard Wells (who gives the same time frame for the end of civilization and the beginning of the postapocalypse).

William Poundstone, The Doomsday Calculation, p. 259

Comment by pablo_stafforini on How tractable is changing the course of history? · 2020-07-21T14:59:09.980Z · score: 10 (3 votes) · EA · GW

Unfortunately I haven't read this book, and doubt I'll get to it anytime soon, partly because I don't think there's an audiobook version.

FYI, you can turn virtually any text file into a machine-read audiobook with the Voice Aloud Reader app (Android, iOS). Not as nice as an audiobook read by a professional actor, but still pretty good, in my opinion (I use Google TTS Engine with 'English (India)' as the voice; I just like the Indian accent, and also get the impression that the speech synthesis is better than for some of the other accents).

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T17:01:34.664Z · score: 13 (9 votes) · EA · GW

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn't necessarily describe a problematic phenomenon. (See Greg Lewis's recent post; I'm not sure if you disagree.) The latter claim would be very worrying if true, but I don't see reason to believe that it is. Sure, EAs sometimes lack good reasons for the views they espouse, but this is a general phenomenon unrelated to the practice of reporting credences explicitly.

Comment by pablo_stafforini on Information hazards: a very simple typology · 2020-07-16T13:33:01.694Z · score: 7 (4 votes) · EA · GW

You credit Anders Sandberg here and elsewhere, but you don't provide a reference. Where did Sandberg propose the typology that inspired yours? A search for 'direct information hazard' (the expression you attribute to Sandberg) only results in this post and your LW comment.

Comment by pablo_stafforini on Mike Huemer on The Case for Tyranny · 2020-07-16T12:22:16.937Z · score: 13 (4 votes) · EA · GW

This is how our species is going to die. Not necessarily from nuclear war specifically, but from ignoring existential risks that don’t appear imminent at this moment. If we keep doing that, eventually, something is going to kill us – something that looked improbable in advance, but that, by the time it looks imminent, is too late to stop.

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T14:22:11.311Z · score: 14 (6 votes) · EA · GW

Generally, I'd like to hear more about how different people introduce the ideas of EA, longtermism, and specific cause areas. There's no clear cut canon, and effectively personalizing an intro can be difficult, so I'd love to hear how others navigate it.

This seems like a promising topic for an EA Forum question. I would consider creating one and reposting your comment as an answer to it. A separate question is probably also a better place to collect answers than this thread, which is best reserved for questions addressed to Ben and for Ben's answers to those questions.

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:26:49.810Z · score: 32 (15 votes) · EA · GW

Which of the EA-related views you hold are least popular within the EA community?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:22:53.876Z · score: 61 (25 votes) · EA · GW

Have you considered doing a joint standup comedy show with Nick Bostrom?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:13:41.551Z · score: 7 (4 votes) · EA · GW

What writings have influenced your thinking the most?

Comment by pablo_stafforini on The 80,000 Hours podcast should host debates · 2020-07-12T00:29:54.048Z · score: 6 (4 votes) · EA · GW

I'm happy to hear that you are keen on the anti-debates idea! I suggested it to the EA Global organizers a few years ago, but it seems they weren't very interested. (Incidentally, the idea isn't Will's, or mine; it dates back at least to this debate between David Chalmers and Guido Tononi from 2016.)

A possible variant is to randomize whether the debate will or will not be reversed, and challenge the audience to guess whether the debaters are arguing for their own positions or their opponents', disclosing the answer only at the end of the episode. (In some cases, or for some members of the audience, the answer will be obvious from background information about the debaters, but it's unclear how often this will be the case.)

EDIT: I now see that I misunderstood what was meant by an 'anti-debate': not a debate where each person defends the opposite side, but rather a debate that is collaborative rather than competitive. I personally would be interested in anti-debates in either of those senses.

Comment by pablo_stafforini on Forecasting Newsletter: June 2020 · 2020-07-01T22:38:48.743Z · score: 10 (7 votes) · EA · GW

A new, Ethereum-based prediction market has launched: Polymarket. It even features a question about Slate Star Codex and the New York Times. I tried it and it was pretty easy to set up. (I have no affiliation with the owner.)

Comment by pablo_stafforini on The Case for Impact Purchase | Part 1 · 2020-06-26T14:00:29.738Z · score: 5 (3 votes) · EA · GW

Are there plans to publish Part II?

Comment by pablo_stafforini on List of EA-related email newsletters · 2020-06-26T12:22:31.670Z · score: 8 (4 votes) · EA · GW

Nick Bostrom now has a newsletter for "rare" updates.

Comment by pablo_stafforini on Modeling the Human Trajectory (Open Philanthropy) · 2020-06-24T18:57:20.066Z · score: 3 (2 votes) · EA · GW

The latest edition of the Alignment Newsletter includes a good summary of Roodman's post, as well as brief comments by Nicholas Joseph and Rohin Shah:

Modeling the Human Trajectory (David Roodman) (summarized by Nicholas): This post analyzes the human trajectory from 10,000 BCE to the present and considers its implications for the future. The metric used for this is Gross World Product (GWP), the sum total of goods and services produced in the world over the course of a year.
Looking at GWP over this long stretch leads to a few interesting conclusions. First, until 1800, most people lived near subsistence levels. This means that growth in GWP was primarily driven by growth in population. Since then population growth has slowed and GWP per capita has increased, leading to our vastly improved quality of life today. Second, an exponential function does not fit the data well at all. In an exponential function, the time for GWP to double would be constant. Instead, GWP seems to be doubling faster, which is better fit by a power law. However, the conclusion of extrapolating this relationship forward is extremely rapid economic growth, approaching infinite GWP as we near the year 2047.
Next, Roodman creates a stochastic model in order to analyze not just the modal prediction, but also get the full distribution over how likely particular outcomes are. By fitting this to only past data, he analyzes how surprising each period of GWP was. This finds that the industrial revolution and the period after it was above the 90th percentile of the model’s distribution, corresponding to surprisingly fast economic growth. Analogously, the past 30 years have seen anomalously lower growth, around the 25th percentile. This suggests that the model's stochasticity does not appropriately capture the real world -- while a good model can certainly be "surprised" by high or low growth during one period, it should probably not be consistently surprised in the same direction, as happens here.
In addition to looking at the data empirically, he provides a theoretical model for how this accelerating growth can occur by generalizing a standard economic model. Typically, the economic model assumes technology is a fixed input or has a fixed rate of growth and does not allow for production to be reinvested in technological improvements. Once reinvestment is incorporated into the model, then the economic growth rate accelerates similarly to the historical data.
Nicholas's opinion: I found this paper very interesting and was quite surprised by its results. That said, I remain confused about what conclusions I should draw from it. The power law trend does seem to fit historical data very well, but the past 70 years are fit quite well by an exponential trend. Which one is relevant for predicting the future, if either, is quite unclear to me.
The theoretical model proposed makes more sense to me. If technology is responsible for the growth rate, then reinvesting production in technology will cause the growth rate to be faster. I'd be curious to see data on what fraction of GWP gets reinvested in improved technology and how that lines up with the other trends.
Rohin’s opinion: I enjoyed this post; it gave me a visceral sense for what hyperbolic models with noise look like (see the blog post for this, the summary doesn’t capture it). Overall, I think my takeaway is that the picture used in AI risk of explosive growth is in fact plausible, despite how crazy it initially sounds. Of course, it won’t literally diverge to infinity -- we will eventually hit some sort of limit on growth, even with “just” exponential growth -- but this limit could be quite far beyond what we have achieved so far. See also this related post.
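
To gloss the "power law" point in the summary above: a stylized, deterministic version of the fitted model (my simplification, not Roodman's exact stochastic specification) is

$$y(t) = \frac{A}{(t_\ast - t)^{k}}, \qquad \frac{\dot{y}}{y} = \frac{k}{t_\ast - t},$$

so the growth rate, and with it the doubling rate, rises without bound as $t$ approaches the divergence date $t_\ast$, which the extrapolation places around 2047.
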
Comment by pablo_stafforini on How should we run the EA Forum Prize? · 2020-06-23T14:32:25.533Z · score: 16 (7 votes) · EA · GW

Why don't you conduct an experiment? E.g. you could award prizes only for posts/comments written by users whose usernames start with letters A-L (and whose accounts were created prior to the announcement) and see if you notice any significant difference in the quality of those users' submissions.
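
A toy sketch of the comparison, just to illustrate the design (the Post fields, the cutoff date, and the quality measure are hypothetical placeholders):

from dataclasses import dataclass
from datetime import date
from statistics import mean

ANNOUNCEMENT_DATE = date(2020, 7, 1)  # placeholder for when the rule is announced

@dataclass
class Post:
    username: str
    account_created: date
    quality: float  # e.g. karma, or a judge's rating

def in_experiment(post: Post) -> bool:
    """Only accounts created before the announcement take part in the experiment."""
    return post.account_created < ANNOUNCEMENT_DATE

def prize_eligible(post: Post) -> bool:
    """Treatment group: usernames starting with letters A-L."""
    return "A" <= post.username[:1].upper() <= "L"

def mean_quality(posts: list[Post]) -> tuple[float, float]:
    """Mean submission quality in the treatment and control groups, respectively."""
    sample = [p for p in posts if in_experiment(p)]
    treated = [p.quality for p in sample if prize_eligible(p)]
    control = [p.quality for p in sample if not prize_eligible(p)]
    return mean(treated), mean(control)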

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T11:46:10.737Z · score: 23 (9 votes) · EA · GW

Just to say that I appreciate all the "mini literature reviews" you have been posting!

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T11:41:18.430Z · score: 2 (1 votes) · EA · GW

Hi Arden,

Your worries seem sensible, and discussing the issue under 'building effective altruism' might be the way to go.

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T01:23:25.658Z · score: 62 (28 votes) · EA · GW

Great post, thank you for compiling this list, and especially for the pointers for further reading.

In addition to Tobias's proposed additions, which I endorse, I'd like to suggest protecting effective altruism as a very high priority problem area. Especially in the current political climate, but also in light of base rates from related movements as well as other considerations, I think there's a serious risk (perhaps 15%) that EA will either cease to exist or lose most of its value within the next decade. Reducing such risks is not only obviously important, but also surprisingly neglected. To my knowledge, this issue has only been the primary focus of an EA Forum post by Rebecca Baron, a Leaders' Forum talk by Roxanne Heston, an unpublished document by Kerry Vaughan, and an essay by Leverage Research (no longer online). (Risks to EA are also sometimes discussed tangentially in writings about movement building, but not as a primary focus.)

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T17:08:25.412Z · score: 5 (3 votes) · EA · GW

The last link under the 'Aging' heading is dead. I think you meant this.