AGI Predictions 2020-11-21T12:02:35.158Z
UK to host human challenge trials for Covid-19 vaccines 2020-09-23T14:45:01.278Z
Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z
Good Done Right conference 2020-02-04T13:21:02.903Z
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z
A bunch of new GPI papers 2019-09-25T13:32:37.768Z
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z
Effective Altruism Blogs 2014-11-28T17:26:05.861Z
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z
Effective altruism quotes 2014-09-17T06:47:27.140Z


Comment by pablo_stafforini on alexrjl's Shortform · 2020-11-07T22:35:44.058Z · EA · GW

Together with a few EA friends, I ended up betting a substantial amount of money on Biden. It went well for me, too, as well as for some of my friends. I think presidential elections present unusually good opportunities for both betting and arbitrage, so it may be worth coordinating some joint effort next time.

(As a note of historical interest, during the 2012 US election a small group of early EAs made some money arbitraging Intrade.)

Comment by pablo_stafforini on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T11:40:18.785Z · EA · GW

In some cases Trump has been bad, but for the opposite reason than you were worried about! For example you criticized him for supporting travel bans during Ebola

It's not the opposite reason. The underlying criticism is that Trump's measures were miscalibrated to the magnitude of the problem. If your decision-making process is deeply flawed, as Trump's is, you should expect miscalibration in both directions.

Comment by pablo_stafforini on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-10-20T15:42:38.853Z · EA · GW

Leopold has now published a popular article discussing this topic. Highly recommended.

An excerpt:

Philosophers like Nick Bostrom, Derek Parfit, and Toby Ord have become increasingly concerned about such so-called “existential risks.” An unrecoverable collapse of civilization wouldn’t just be tragic for the billions who would suffer and die. Perhaps the greatest tragedy would be the foreclosing of all of humanity’s potential. Humanity could flourish for billions of years and enable trillions of happy human lives—if only we do not destroy ourselves beforehand.

This line of thinking has led some to question whether “progress”—in particular, technological progress—is as straightforwardly beneficial as commonly assumed. Nick Bostrom imagines the process of technological development as “pulling balls out of a giant urn.” So far, we’ve been lucky, pulling out a great many “white” balls that are broadly beneficial. But someday, we might pull out a “black” ball: a new technology that destroys humanity. Before that first nuclear test, some of the physicists worried that the nuclear bomb would ignite the atmosphere and end the world. Their calculations ultimately deemed it “extremely unlikely,” and so they proceeded with the test—which, as it turns out, did not end the world. Perhaps the next time, we don’t get so lucky.

The same technological progress that creates these risks is also what drives economic growth. Does that mean economic growth is inherently risky? Economic growth has brought about extraordinary prosperity. But for the sake of posterity, must we choose safe stagnation instead? This view is arguably becoming ever-more popular, particularly amongst those concerned about climate change; Greta Thunberg recently denounced “fairy tales of eternal economic growth” at the United Nations.

I argue that the opposite is the case. It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.

We might indeed be in a “time of perils”: we might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk. Eventually, a nuclear war or environmental catastrophe would doom humanity regardless.

Faster economic growth could initially increase risk, as feared. But it will also help us get past this time of perils more quickly. When people are poor, they can’t focus on much beyond ensuring their own livelihoods. But as people grow richer, they start caring more about things like the environment and protecting against risks to life. And so, as economic growth makes people richer, they will invest more in safety, protecting against existential catastrophes. As technological innovation and our growing wealth have allowed us to conquer past threats to human life like smallpox, so can faster economic growth, in the long run, increase the overall chances of humanity’s survival.

Comment by pablo_stafforini on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T16:52:26.542Z · EA · GW

I sympathise with this, but I think if we don't have public posts like this one, the outcome is more-or-less decided in advance.

Yes, I agree. What I'm uncertain about is whether it's desirable to have more of these posts at the current margin. And to be clear: by saying I'm uncertain whether it's a good idea, I don't mean to suggest it's not a good idea; I'm simply agnostic.

Comment by pablo_stafforini on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T15:07:12.389Z · EA · GW

This comment expresses something I was considering saying, but more clearly than I could. I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection: it seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation. Unfortunately, I suspect that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

Comment by pablo_stafforini on jackmalde's Shortform · 2020-10-13T12:28:22.411Z · EA · GW

On a charitable reading of Parfit, the 'muzak and potatoes' expression is meant to pick out the kind of phenomenal experience associated with the "drab existence" he wants to communicate to the reader. So he is not asking you to imagine a life where you do nothing but listen to muzak and eat potatoes. Instead, he is asking you to consider what it typically feels like to listen to muzak and eat potatoes, and to then imagine a life that feels like that, all the time.

Comment by pablo_stafforini on Some learnings I had from forecasting in 2020 · 2020-10-08T12:40:46.753Z · EA · GW

On the one hand, my opinion of Metaculus predictions worsened as I saw how the 'recent predictions' showed people piling in on the median on some questions I watch.

Can you say more about this? I ask because this behavior seems consistent with an attitude of epistemic deference towards the community prediction when individual predictors perceive it to be superior to what they can themselves predict given their time and ability constraints.

Comment by pablo_stafforini on Election scenarios · 2020-09-25T11:00:52.932Z · EA · GW

The US democracy may be at risk. It is only "our democracy" for 4.25% of the world's population.

(Apologies for focusing on a single word of your post, but I think this seemingly trivial semantic difference reflects a more substantive and widespread issue. How many concerned posts about the political situation in, say, India have you seen in the Forum recently? How many "action items" for protecting democracy in, say, Brazil have you encountered? It is depressing and, yes, irritating to see a community that supposedly values all people equally concentrate their attention so overwhelmingly on a single country when it comes to politics and "current affairs".)

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T18:50:48.978Z · EA · GW

I agree that the sentence Linch quoted sounds like a "bravery debate" opening, but that's not how I perceive it in the broader context. I don't think the author is presenting himself/herself as an underdog, intentionally or otherwise. Rather, they are making that remark as part of their overall attempt to indicate that they are aware that they are raising a sensitive issue and that they are doing so in a collaborative spirit and with admittedly limited information. This strikes me as importantly different from the prototypical bravery debate, where the primary effect is not to foster an atmosphere of open dialogue but to gain sympathy for a position.

I am tentatively in agreement with you that "clarification of intent" can be done without "bravery talk", by which I understand any mention that the view one is advancing is unpopular. But I also think that such talk doesn't always communicate that one is the underdog, and is therefore not inherently problematic. So, yes, the OP could have avoided that kind of language altogether, but given the broader context, I don't think the use of that language did any harm.

(I'm maybe 80% confident in what I say above, so if you disagree, feel free to push me.)

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T17:38:24.544Z · EA · GW

Thanks, you are right. I have amended the last sentence of  my comment.

Comment by pablo_stafforini on Long-Term Future Fund: September 2020 grants · 2020-09-18T14:39:42.165Z · EA · GW

FWIW, I think that the qualification was very appropriate and I didn't see the author as intending to start a "bravery debate". Instead, the purpose appears to have been to emphasize that the concerns were raised in good faith and with limited information. Clarifications of this sort seem very relevant and useful, and quite unlike the phenomenon described in Scott's post.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-17T09:28:06.445Z · EA · GW

The link is broken; can you fix it?

In the meantime, a few random thoughts. First, the index fund analogy suggests a self-correcting mechanism. Players defer to the community only to the degree that they expect it to track the truth more reliably than their individual judgment, given their time and ability constraints. As the reliability of the community prediction changes, in response to changes in the degree to which individual players defer to it, so will these players' willingness to defer to the community.

Second, other things equal, I think it's a desirable property of a prediction platform that it makes it rational for players to sometimes defer to the community. This could be seen as embodying the important and neglected truth that in many areas of life one can generally do better by deferring to society's collective wisdom than by going with one's individual opinion. Furthermore, it requires considerable ability to determine when and to what degree one should defer to others in any given case. In fact, this metacognitive skill of knowing how much more (or less) reliable other opinions are relative to one's own seems like a core epistemic virtue, and one that can be assessed only if users are allowed to defer to others.

Finally, insofar as there are reasons for wanting players not to defer to the community, I think the appropriate response is to change the scoring function rather than to ask players to exercise self-restraint. As fellow forecaster Tom Adamczewski reminded me, the Metaculus Scoring System page describes one such possible change:

It's easy to account for the average community prediction by adding a constant to each of these, so that a player would get precisely zero points if they just go along with the community average.

Perhaps Metaculus could have two separate leaderboards: in addition to the current ranking, it could also display a ranking of players with the community component subtracted. These two rankings could be seen as measuring the quality of a player's "credences" and "impressions", respectively.
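
To make the scoring-function idea concrete, here is a minimal sketch of a community-relative log score, in which pure deference to the community earns exactly zero points. (This is only an illustration of the general idea, not Metaculus's actual scoring rule; the function name and interface are my own.)

```python
import math

def relative_log_score(p_player: float, p_community: float, outcome: bool) -> float:
    """Score a binary forecast relative to the community prediction.

    A player who simply copies the community prediction scores zero;
    assigning more probability than the community to the realized
    outcome scores positive, and less scores negative.
    """
    # Probability each forecast assigned to the outcome that occurred
    p1 = p_player if outcome else 1 - p_player
    p2 = p_community if outcome else 1 - p_community
    return math.log2(p1 / p2)

print(relative_log_score(0.7, 0.7, True))  # 0.0 -- pure deference earns nothing
print(relative_log_score(0.9, 0.7, True))  # positive -- beat the community on the true outcome
```

A leaderboard built on a score like this would measure exactly the "impressions" component: how much a player's own judgment adds over the community's.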

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-17T00:07:54.293Z · EA · GW

My spacemacs config file is here. The main Keyboard Maestro macros I use are here.  As noted, these macros were created back when I was beginning to use Emacs, so they don't make use of org capture or other native functionality (including Emacs own internal macros, or the even more powerful elmacro package). I plan to review these files at some point, but not in the immediate future. Happy to answer questions if anything is unclear.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-16T23:54:15.827Z · EA · GW

Yes, indeed. I was about to suggest an edit to the transcript to make that clear. When I created the Keyboard Maestro script, I was still relatively unfamiliar with Org mode so I didn't make use of org capture. But that's the proper way to do it.

Comment by pablo_stafforini on Pablo Stafforini’s Forecasting System · 2020-09-16T23:07:33.436Z · EA · GW

It was a pleasure to discuss my approach to forecasting with Jungwon and Amanda. I'd be happy to clarify anything that I failed to explain properly during our conversation, or to answer any questions related to the implementation or reasoning behind my "system" (if one may call it that).

Comment by pablo_stafforini on Are social media algorithms an existential risk? · 2020-09-15T11:21:30.139Z · EA · GW

I haven't watched the documentary, but I'm antecedently skeptical of claims that social media constitute an existential risk in the sense in which EAs use that term. The brief summary provided by the Wikipedia article doesn't seem to support that characterization:

the film explores the rise of social media and the damage it has caused to society, focusing on its exploitation of its users for financial gain through surveillance capitalism and data mining, how its design is meant to nurture an addiction, its use in politics, its impact on mental health (including the mental health of adolescents and rising teen suicide rates), and its role in spreading conspiracy theories and aiding groups such as flat-earthers and white supremacists.

While many of these effects are terrible (and concern about them partly explains why I myself basically don't use social media), they do not appear to amount to threats of existential catastrophe. Maybe the claim is that the kind of surveillance made possible by social media and big tech firms more generally ("surveillance capitalism") has the potential to establish an unrecoverable global dystopia?

Are there other concrete mechanisms discussed by the documentary? 

Comment by pablo_stafforini on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T12:47:41.831Z · EA · GW

To what degree are the differences between longtermists who prioritize s-risks and longtermists who prioritize x-risks driven by moral disagreements about the relative importance of suffering versus happiness, rather than by factual disagreements about the relative magnitude of s-risks versus x-risks?

Comment by pablo_stafforini on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T12:40:22.745Z · EA · GW

The universe is vast, so it seems there is a lot of room for variation even within the subset of risks involving astronomical quantities of suffering. How much, in your opinion, do s-risks vary in severity? Relatedly, what are your grounds for singling out s-risks as the object of concern, rather than those risks involving the most suffering?

Comment by pablo_stafforini on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-05T10:39:39.410Z · EA · GW

Just to say that I would put very little weight on my responses in that post, many of which are highly unstable, and some of which I no longer endorse (including the 1% and 99% estimates quoted above). I hope to revise it soon, adding measures of resilience as Greg Lewis suggests here.

Comment by pablo_stafforini on Database of existential risk estimates · 2020-09-02T12:50:44.282Z · EA · GW

Yes, sorry, I should have made that clearer—I posted here because I couldn't think of a more appropriate thread, and I didn't want to create a separate post (though maybe I could have used the "shortform" feature).

AI Impacts has a bunch of posts on this topic. This one discusses what appears to be the largest dataset of such predictions in existence. This one lists several analyses of time to human-level AI. And this one provides an overview of all their writings on AI timelines.

Comment by pablo_stafforini on Database of existential risk estimates · 2020-09-01T12:44:07.241Z · EA · GW

An important new aggregate forecast, based on inputs from LW users:

  • Aggregated median date: January 26, 2047
  • Aggregated most likely date: November 2, 2033
  • Earliest median date of any forecast: June 25, 2030
  • Latest median date of any forecast: After 2100

Comment by pablo_stafforini on More empirical data on 'value drift' · 2020-08-29T13:08:26.933Z · EA · GW

Thanks for this useful summary!

Note that section 4 reiterates Peter Hurford's analysis in a post from last year.

There are probably more older samples that people could dig out and follow up on themselves, such as the analysis of CEA’s early team.

One possibility is to take a look at the top contributors to Felicifia, an early EA/utilitarian forum, and note how many are still around. Louis Francini kindly restored the original site, which had been down for a long time, earlier this year, so this can be done very easily.

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-21T02:01:11.390Z · EA · GW

If insufficient efforts are made to reduce short-term x-risk, there may not be future generations to spend your investment.

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T19:27:45.245Z · EA · GW

I do agree that investing is a promising way to punt to the future. I don't have strong views on whether at the current margin one empowers future generations more by trying to reduce risks that threaten their existence or their ability to reduce x-risk, or by accumulating financial resources and other types of capacity that they can either deploy to reduce x-risk or continue to accumulate. What makes you favor the capacity-building approach over the short-term x-risk reduction approach?

Comment by pablo_stafforini on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T11:28:30.826Z · EA · GW

A plausible longtermist argument for prioritizing short-term risk is the "punting to the future" approach to dealing with radical cluelessness. On this approach, we should try to reduce only those risks which only the present generation can influence, and let future generations take care of the remaining risks. (In fact, the optimal strategy is probably one where each generation pursues this approach, at least for sufficiently many generations.)

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-19T12:22:38.662Z · EA · GW

Update: In case it is of use to other users of Vimium, I've found a workaround for removing persistent previews. Add this to the 'custom key mappings' field under 'Vimium options':

map x LinkHints.activateMode action=hover

where 'x' is the key binding for this action (I use 'q'). Just press that key and select the link corresponding to the preview you want to disable. (The behavior is a bit erratic; sometimes you need to do it twice.)

Comment by pablo_stafforini on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-08T21:29:25.110Z · EA · GW

I agree this is useful and I often use it when forecasting. It's important to emphasize that this is a useful prior, though, since Gott appears to treat it as an all-things-considered posterior.

I have used this method with great success to estimate, among other things, the probability that friends will break up with their romantic partners.

William Poundstone uses this example, too, to illustrate the "Copernican principle" in his popular book on the doomsday argument.
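
For concreteness, the arithmetic behind Gott's "delta t" rule can be sketched as follows (a helper of my own, not code from Poundstone's book): if you observe something at a uniformly random point in its lifetime, then with confidence c its remaining lifetime lies between age·t/(1−t) and age·(1−t)/t, where t = (1−c)/2.

```python
def gott_interval(age: float, confidence: float = 0.95) -> tuple[float, float]:
    """Gott's 'delta t' argument: assuming we observe something at a
    uniformly random point in its lifetime, return a (low, high)
    interval for its *remaining* lifetime at the given confidence."""
    tail = (1 - confidence) / 2  # probability mass cut from each end
    return age * tail / (1 - tail), age * (1 - tail) / tail

# A relationship that has lasted 2 years: with 95% confidence it will
# last between roughly 19 more days (2/39 years) and 78 more years.
low, high = gott_interval(2.0)
print(f"{low:.3f} to {high:.1f} years")
```

As a prior this is often all you have; the point above is just that it should be updated on whatever specific evidence is available.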

Comment by pablo_stafforini on EA reading list: utilitarianism and consciousness · 2020-08-08T12:28:14.576Z · EA · GW

I would add the extended episode of the 80,000 Hours podcast with David Chalmers. To my knowledge, some of the views he expresses there—e.g. that phenomenal consciousness is morally valuable even if not hedonically valenced—have not been explicitly discussed in either the EA or the philosophical literature.

Many utilitarian EAs have independently gravitated towards the view that the intrinsic value of pleasure and pain can be known by introspection or "direct acquaintance". Surprisingly, as far as I know no statement of this view exists in the EA literature, though some may be found in the philosophical literature (including publications by philosophers sympathetic to EA):

Comment by pablo_stafforini on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T12:19:50.685Z · EA · GW

Douglas Hubbard mentions a few rules of the sort you seem to be describing in his book How to Measure Anything. For example, his "Rule of Five" states that "There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population." One of the central themes in that book is in fact that you "have more data than you think", and that these simple rules can take you surprisingly far.
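
The Rule of Five follows from a simple observation: the sample range misses the median only if all five draws land on the same side of it, which happens with probability 2·(1/2)^5 = 1/16, leaving 1 − 1/16 = 93.75%. A quick Monte Carlo check (a sketch of my own, not code from Hubbard's book):

```python
import random
import statistics

def rule_of_five_hit_rate(population, trials=50_000, seed=0):
    """Fraction of random 5-element samples whose min-max range
    contains the population median."""
    rng = random.Random(seed)
    population = list(population)
    median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

print(rule_of_five_hit_rate(range(10_001)))  # ≈ 0.9375
```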

Comment by pablo_stafforini on Open and Welcome Thread: August 2020 · 2020-08-03T12:32:01.740Z · EA · GW

I'm also not a native English speaker; to my ears, "unusual causes" feels similar in connotation to "non-standard causes". What about simply "other causes"?

Comment by pablo_stafforini on EA reading list: cluelessness and epistemic modesty · 2020-08-03T12:01:10.473Z · EA · GW

On cluelessness, I would add

On epistemic modesty:

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-02T11:26:41.641Z · EA · GW

Makes sense. I checked Gwern's site and the previews are also triggered, confirming that this is a Vimium bug/feature.

Comment by pablo_stafforini on EA Forum update: New editor! (And more) · 2020-08-01T14:05:17.032Z · EA · GW

Thank you for implementing these improvements! I'm particularly pleased to see that I can now add hyperlinks with a shortcut.

A request: not sure if it's related, but around the time these changes were made, whenever I try to open an EA Forum link in a new tab using Vimium (keystroke: capital F), a preview of that page is shown and there's no way I can turn it off without using the mouse. I think the preview should be shown only when one hovers over the relevant link (to inform the user whether the link is in fact worth visiting), whereas in this case it is shown after one selects the link to be opened, so it's serving no useful function. It would be great if this could be fixed.

Comment by pablo_stafforini on List of EA-related email newsletters · 2020-08-01T13:50:51.482Z · EA · GW

New newsletter: Effective Altruism Asia.

(H/T David Nash)

Comment by pablo_stafforini on Delegate a forecast · 2020-07-28T12:05:23.112Z · EA · GW

Thank you, that was informative. I don't think you missed anything, though I haven't myself thought about this question much—that is in part why I was curious to see someone else try to answer it.

I think genetic selection and/or editing has the potential to be transformative, and perhaps even to result in greater-than-human intelligence. Despite this, it's comparatively neglected, both within EA and society at large. So having more explicit forecasts in this area seems pretty valuable.

Comment by pablo_stafforini on Delegate a forecast · 2020-07-27T00:56:01.499Z · EA · GW

I misread the post as asking for a personal forecast. Since I now realize it's possible to ask questions of any type, I would much rather delegate a forecast on an important topic, such as:

How many standard deviations away from the mean will the 1000th human born from stem-cell derived gametes score in a test of cognitive ability taken at the age of 18?

Comment by pablo_stafforini on Delegate a forecast · 2020-07-26T14:34:06.795Z · EA · GW

How many points will I have on Metaculus at the end of 2021?

(Question resolves according to the number listed here on 2021-12-31 23:59:59 GMT.)

Comment by pablo_stafforini on The Importance of Unknown Existential Risks · 2020-07-24T19:27:30.242Z · EA · GW

Note that it's not just the Doomsday Argument that may give one reason for revising one's x-risk estimates beyond what is suggested by object-level analyses of specific risks. See this post by Robin Hanson for an enumeration of the relevant types of arguments.

I am puzzled that these arguments do not seem to be influencing many of the most cited x-risk estimates (e.g. Toby's). Is this because these arguments are thought to be clearly flawed? Or is it because people feel they just don't know how to think about them, and so they simply ignore them? I would like to see more "reasoning transparency" about these issues.

It's also worth noting that some of these "speculative" arguments would provide reason for revising not only overall x-risks estimates, but also estimates of specific risks. For example, as Katja Grace and Greg Lewis have noted, a misaligned intelligence explosion cannot be the Great Filter. Accordingly, insofar as the Great Filter influences one's x-risk estimates, one should shift probability mass away from AI risk and toward risks that are plausible Great Filters.

Comment by pablo_stafforini on The Importance of Unknown Existential Risks · 2020-07-24T13:22:27.895Z · EA · GW

I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.)

Still, Michael's argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it's important to keep them separate. (I don't think we disagree; I just thought this was worth highlighting.)

Comment by pablo_stafforini on Database of existential risk estimates · 2020-07-21T15:13:48.202Z · EA · GW

Not in your database, I think:

“Personally, I now think we humans will be wiped out this century,” Frank Tipler told me. He may be the most pessimistic of Bayesian doomsayers, followed by Willard Wells (who gives the same time frame for the end of civilization and the beginning of the postapocalypse).

William Poundstone, The Doomsday Calculation, p. 259

Comment by pablo_stafforini on How tractable is changing the course of history? · 2020-07-21T14:59:09.980Z · EA · GW

Unfortunately I haven't read this book, and doubt I'll get to it anytime soon, partly because I don't think there's an audiobook version.

FYI, you can turn virtually any text file into a machine-read audiobook with the Voice Aloud Reader app (Android, iOS). Not as nice as an audiobook read by a professional actor, but still pretty good, in my opinion (I use Google TTS Engine with 'English (India)' as the voice; I just like the Indian accent, and also get the impression that the speech synthesis is better than for some of the other accents).

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T17:01:34.664Z · EA · GW

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn't necessarily describe a problematic phenomenon. (See Greg Lewis's recent post; I'm not sure if you disagree.) The latter claim would be very worrying if true, but I don't see any reason to believe that it is. Sure, EAs sometimes lack good reasons for the views they espouse, but this is a general phenomenon unrelated to the practice of reporting credences explicitly.

Comment by pablo_stafforini on Information hazards: a very simple typology · 2020-07-16T13:33:01.694Z · EA · GW

You credit Anders Sandberg here and elsewhere, but you don't provide a reference. Where did Sandberg propose the typology that inspired yours? A search for 'direct information hazard' (the expression you attribute to Sandberg) turns up only this post and your LW comment.

Comment by pablo_stafforini on Mike Huemer on The Case for Tyranny · 2020-07-16T12:22:16.937Z · EA · GW

This is how our species is going to die. Not necessarily from nuclear war specifically, but from ignoring existential risks that don’t appear imminent at this moment. If we keep doing that, eventually, something is going to kill us – something that looked improbable in advance, but that, by the time it looks imminent, is too late to stop.
Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T14:22:11.311Z · EA · GW

Generally, I'd like to hear more about how different people introduce the ideas of EA, longtermism, and specific cause areas. There's no clear cut canon, and effectively personalizing an intro can be difficult, so I'd love to hear how others navigate it.

This seems like a promising topic for an EA Forum question. I would consider creating one and reposting your comment as an answer to it. A separate question is probably also a better place to collect answers than this thread, which is best reserved for questions addressed to Ben and for Ben's answers to those questions.

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:26:49.810Z · EA · GW

Which of the EA-related views you hold are least popular within the EA community?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:22:53.876Z · EA · GW

Have you considered doing a joint standup comedy show with Nick Bostrom?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:13:41.551Z · EA · GW

What writings have influenced your thinking the most?

Comment by pablo_stafforini on The 80,000 Hours podcast should host debates · 2020-07-12T00:29:54.048Z · EA · GW

I'm happy to hear that you are keen on the anti-debates idea! I suggested it to the EA Global organizers a few years ago, but it seems they weren't very interested. (Incidentally, the idea isn't Will's, or mine; it dates back at least to this debate between David Chalmers and Guido Tononi from 2016.)

A possible variant is to randomize whether the debate will or will not be reversed, and challenge the audience to guess whether the debaters are arguing for their own positions or their opponents', disclosing the answer only at the end of the episode. (In some cases, or for some members of the audience, the answer will be obvious from background information about the debaters, but it's unclear how often this will be the case.)

EDIT: I now see that I misunderstood what was meant by an 'anti-debate': not a debate where each person defends the opposite side, but rather a debate that is collaborative rather than competitive. I personally would be interested in anti-debates in either of those senses.

Comment by pablo_stafforini on Forecasting Newsletter: June 2020. · 2020-07-01T22:38:48.743Z · EA · GW

A new, Ethereum-based prediction market has launched: Polymarket. It even features a question about Slate Star Codex and the New York Times. I tried it and it was pretty easy to set up. (I have no affiliation with the owner.)