Posts

Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z · score: 6 (1 votes)
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z · score: 27 (13 votes)
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z · score: 57 (20 votes)
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z · score: 38 (14 votes)
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z · score: 20 (9 votes)
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z · score: 49 (20 votes)
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z · score: 16 (9 votes)
A bunch of new GPI papers 2019-09-25T13:32:37.768Z · score: 102 (39 votes)
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z · score: 46 (16 votes)
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)

Comments

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T22:58:18.136Z · score: 9 (5 votes) · EA · GW
The track record of attempts to overthrow any system of power are abysmal

I think you are seriously mistaken. Attempts to overthrow monarchy do not remotely have the abysmal track record of attempts to overthrow capitalism. Compare, say, the American and French revolutions of the 18th century with the Russian and Chinese revolutions of the 20th century.

[I have edited my comment to make it less confrontational.]

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T18:33:38.095Z · score: 9 (6 votes) · EA · GW

The author didn't say that all "left/socialist" policies are bad. The first sentence of his comment reads:

This post reminds me of a common left/socialist reaction to EA: “Charity is pointless, overthrowing capitalism is clearly the best way to increase human welfare."

When he later writes that "[t]he best reply to the left/socialists is probably that their empirical track record is much worse", he is referring specifically to the empirical track record of attempts to overthrow capitalism, which is indisputably abysmal.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T14:11:54.262Z · score: 17 (5 votes) · EA · GW
Randomista is clearly not a neutral term, and I think constitutes a kind of name calling

What's your basis for claiming that 'randomista' is a non-neutral term? That is not my impression. A popular book that presents a positive picture of the field is titled Randomistas: How Radical Researchers Are Changing Our World. A recent article in one of the world's most prestigious science journals uses the headline "‘Randomistas’ who used controlled trials to fight poverty win economics Nobel", and includes the following line: "Kremer, Banerjee and Duflo are at the vanguard of the ‘randomista’ movement, which applies the methods of rigorous medical trials — in which large numbers of participants are randomized to receive either a particular intervention or a standard treatment, and followed over time — to social interventions such as improving education." And Mark Ravallion, a leading authority on the economics of poverty, explicitly writes: "That term 'randomistas' is not pejorative." (p. 2)

Comment by pablo_stafforini on Response to recent criticisms of EA "longtermist" thinking · 2020-01-16T20:20:27.031Z · score: 22 (7 votes) · EA · GW
They take total utilitarian axiology and EV maximization for granted in their main arguments

I think this is a very misleading characterization of the paper. The passage you quoted is part of a paragraph which reads as follows (emphasis added):

Our discussion above was conducted on the assumption of (i) a total utilitarian axiology and (ii) an expected-value approach to ex ante evaluation under uncertainty. Both of these assumptions are at least somewhat controversial. The present section examines the extent to which our arguments would be undermined by various ways of deviating from those assumptions. Broadly, the upshot will be that the case for strong longtermism is quite robust to plausible deviations from these starting axiological and decision-theoretic assumptions.

Moreover, this is not a claim incidental to the paper; it is one of the paper's central claims. As the authors write in the introductory section:

Our aim in this paper is to expand on this prior work in four ways... Second, we show that the argument goes through on a wide range of axiologies and decision theories, not only on the combination of total utilitarianism and expected utility theory.

In other words, one of the four key arguments made in the paper is that the case for axiological strong longtermism does not require the acceptance of a total utilitarian axiology or expected utility theory.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-15T23:40:30.502Z · score: 4 (2 votes) · EA · GW
After more thought, we’ve decided that we will change the name to “Forum Favorites”

Great, thank you!

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-15T23:39:41.551Z · score: 2 (1 votes) · EA · GW

Thanks for the reply. I think it's totally fine for you to deprioritize this suggestion—not very important.

Comment by pablo_stafforini on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:10:53.748Z · score: 9 (7 votes) · EA · GW

That wasn't so boring.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-11T12:28:53.019Z · score: 6 (3 votes) · EA · GW

I think in this case the fault lies entirely with me, given the number of different ways one can see a list of all the most recent posts.

(My original bullet point also mentioned that sorting by recency seemed like a preferable way to display posts anyway, and for this reason I concluded that this should be the default display. But in his reply Oli mentioned some important drawbacks that I had overlooked, so I no longer believe this.)

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T23:47:54.763Z · score: 9 (3 votes) · EA · GW

Ah, I hadn't noticed the 'All-posts page'. That addresses my needs, thanks. And point taken about the drawbacks of recency sorting. I retract that part of my comment.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T18:54:19.572Z · score: 13 (4 votes) · EA · GW

I now realize I had already seen that post. Perhaps my memory is faulty, or perhaps the distinction between Frontpage and Community is not one that sticks. A couple of comments:

In general, I think it's not a good sign if a central feature of a website isn't self-explanatory, but instead requires reading a detailed explanation. Moreover, in this case the explanation is buried in a post that new users are unlikely to encounter (and at least some old users are apt to forget). But, more fundamentally, I just don't see a compelling reason for categorizing posts in this complicated manner to begin with. Why not just have a "curated" category to promote posts that stand out in the relevant dimensions, like LessWrong does? Or dispense with the idea of "promoted" posts altogether, and let the karma system do the work. Keep it simple, stupid.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T15:10:48.235Z · score: 23 (9 votes) · EA · GW

I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage has a number of problems:

  • The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage.
  • [Note: in light of Oli's comment below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the homepage regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts published since my previous session.
  • I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast to "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated vs. non-curated posts, and that between posts with a community focus vs. posts with a focus on other aspects of EA; and it is unclear in terms of which of these the 'Community' category is defined. To make things even more confusing, the 'Community Favorites' section doesn't appear to employ the term 'Community' in either of those senses; indeed, the term seems to be used with the opposite meaning of "non-curated", since the "Community Favorites" consists of a list of "all-time greatest posts".

Comment by pablo_stafforini on EA Forum Prize: Winners for September 2019 · 2020-01-09T13:10:03.966Z · score: 15 (6 votes) · EA · GW

[Meta] Any reason why this post is still pinned?

Comment by pablo_stafforini on Response to recent criticisms of EA "longtermist" thinking · 2020-01-06T13:22:42.444Z · score: 21 (9 votes) · EA · GW
Reason 1 [for disagreeing with longtermism]: You don't believe that very large numbers of people in the far future add up to being a very big moral priority. For instance, you may take a Rawlsian view, believing that we should always focus on helping the worst-off.

It's not clear that, of all the people that will ever exist, the worst-off among them are currently alive. True, the future will likely be on average better than the present. But since the future potentially contains vastly more people, it's also more likely to contain the worst-off people. Moreover, work on S-risks by Tomasik, Gloor, Baumann and others provides additional reason for expecting such people—using 'people' in a broad sense—to be located in the future.

Comment by pablo_stafforini on Effective Altruism Blogs · 2019-12-29T22:20:56.652Z · score: 4 (2 votes) · EA · GW

Thanks. As noted in the update, this list is no longer updated. Please see eablogs.net.

Comment by pablo_stafforini on Max_Daniel's Shortform · 2019-12-17T14:32:20.197Z · score: 2 (1 votes) · EA · GW

[deleted because the question I asked turned out to be answered in the comment, upon careful reading]

Comment by pablo_stafforini on EA Forum Prize: Winners for October 2019 · 2019-12-12T02:13:47.809Z · score: 2 (1 votes) · EA · GW

Thanks. Just to be clear: before your edit, there was no thread linked, or at least no link showed up on my browser. I mention this in case it reflects a bug with the site rather than an oversight.

Comment by pablo_stafforini on EA Forum Prize: Winners for October 2019 · 2019-12-11T12:42:13.668Z · score: 4 (2 votes) · EA · GW

Max Daniel is listed as one of the four recipients of a Comment Prize, but no comment is listed.

Comment by pablo_stafforini on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T13:00:27.385Z · score: 43 (18 votes) · EA · GW

You have been part of the effective altruism movement since its inception. What are some interesting or important ways in which you think EA has changed over the years?

Comment by pablo_stafforini on A list of EA-related podcasts · 2019-11-27T19:55:04.759Z · score: 17 (13 votes) · EA · GW

Thanks for compiling this.

I've created a ListenNotes list with all the "Strongly EA-related podcasts" and a few others here. It displays the most recent episode from each of those podcasts and lets you import them all easily to your favorite podcast app.

Comment by pablo_stafforini on Are comment "disclaimers" necessary? · 2019-11-26T20:34:37.673Z · score: 3 (2 votes) · EA · GW
If you're adding a disclosure already, surely having it be a disclaimer also isn't more distracting?

I agree with this. But my sense is that only a small fraction of the comments which include a disclaimer are also comments which include or should include a disclosure. So the fact that it's not more distracting to have both than only a disclaimer doesn't influence my general thinking about disclaimers much.

There's also the separate argument that adding disclaimers runs the risk of changing expectations about what can be inferred from posts that lack them. Other things equal, I would prefer to support the conversational norm that no one is speaking in a professional capacity unless they say so explicitly or it is otherwise obvious from context.

Comment by pablo_stafforini on Are comment "disclaimers" necessary? · 2019-11-24T18:03:16.559Z · score: 14 (6 votes) · EA · GW

Thanks. I agree it probably makes sense to add such statements when your posts or comments could be seen as promoting an organization you work for. The general argument for disclosing potential conflicts of interest applies here.

While I didn't make it clear in my question, the cases I had in mind are not cases of this sort. Rather, I was thinking of cases in which the purpose of the disclaimer is to indicate that the views one expresses should not be interpreted as representing those of one's organization.

Larks draws a useful distinction between disclosures and disclaimers, which corresponds to these two different cases. I sympathize with his arguments for concluding that, while disclosures are desirable, disclaimers are unnecessary.

Comment by pablo_stafforini on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T12:50:37.874Z · score: 9 (6 votes) · EA · GW

Hey Mike,

I'm a community moderator at Metaculus and am generally interested in creating more EA-relevant questions. Are your predictions explicitly listed somewhere? It would be great to add at least some of them to the site.

Comment by pablo_stafforini on How to find EA documents on a particular topic · 2019-11-19T11:50:36.792Z · score: 8 (5 votes) · EA · GW

The search box on eablogs.net will run a search restricted to all and only those domains tracked by that website. Google Custom Search, however, doesn't work well, and results will only include a tiny subset of all occurrences of a given search term (John reports a similarly frustrating experience with this service). If anyone has suggestions for alternatives, please let me know.

Comment by pablo_stafforini on What areas of maths are useful across disciplines? · 2019-11-18T13:24:48.775Z · score: 2 (1 votes) · EA · GW

+1

Note that answering those questions doesn't require any advanced knowledge of statistics. Completing AP Statistics or an equivalent introductory course should suffice.

Comment by pablo_stafforini on What areas of maths are useful across disciplines? · 2019-11-17T22:06:54.281Z · score: 12 (4 votes) · EA · GW

I never studied maths or any math-heavy discipline formally (my background is in philosophy), but recently I completed the entire Khan Academy math curriculum. Speaking purely from personal experience, the most valuable math I learned was just basic algebra I had studied in high school but never really mastered. Besides that, I'd say statistics, linear algebra, and parts of calculus (especially series) have been the most useful so far.

Brian Tomasik's great article on education matters for altruism has a section listing useful disciplines and areas. Within maths, it mentions "probability, real analysis, abstract algebra, and general 'mathematical sophistication'" (statistics is also listed, but as a separate discipline).

Comment by pablo_stafforini on What book(s) would you want a gifted teenager to come across? · 2019-11-14T22:55:09.864Z · score: 12 (4 votes) · EA · GW

The other day was my mother's birthday and, not knowing what to buy her, I suddenly remembered this thread and comment, and decided to get her a copy of Rosling's excellent book, which had conveniently just been translated into Spanish.

True, my mother is not a teenager (I'm not that young), but, as you point out, the book makes a great gift for anyone.

Comment by pablo_stafforini on Formalizing the cause prioritization framework · 2019-11-06T01:09:34.496Z · score: 4 (3 votes) · EA · GW

Clicking on 'Open Image in New Tab' indicates that the image is hosted by Google Photos, so I suspect the privacy settings are preventing us from seeing it. Maybe Google read Rob's angry post and has now taken things to the other extreme. :P

Comment by pablo_stafforini on Formalizing the cause prioritization framework · 2019-11-06T01:08:51.536Z · score: 2 (1 votes) · EA · GW

oops, wrong thread.

Comment by pablo_stafforini on EA Updates for October 2019 · 2019-11-01T13:57:11.354Z · score: 17 (7 votes) · EA · GW

Thanks, as usual, for these posts.

One potentially EA-relevant book not included in your list is Open Borders: The Science and Ethics of Immigration by Bryan Caplan & Zach Weinersmith, published just a few days ago.

Comment by pablo_stafforini on Against value drift · 2019-10-30T01:51:41.810Z · score: 36 (18 votes) · EA · GW

What kind of evidence will cause you to abandon the view that people always act selfishly?

Comment by pablo_stafforini on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T14:18:59.693Z · score: 39 (20 votes) · EA · GW

Is this really important? A discrepancy of £700 relative to the £5000 projection seems acceptable to me.

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-25T13:55:26.796Z · score: 4 (2 votes) · EA · GW
To someone who already rejects Mere Addition, the Sadistic Conclusion is only a small cost, since if it's bad to add some lives with (seemingly) positive welfare, then it's a small step to accept that it can sometimes be worse to add lives with negative welfare over lives with positive welfare.

The question is whether one should accept some variety of CU or NU antecedently of any theoretical commitments to either. Naturally, if one is already committed to some aspects of NU, committing to further aspects of it will incur a relatively smaller cost, but that's only because the remaining costs have already been incurred.

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-25T13:19:43.203Z · score: 5 (3 votes) · EA · GW
We don't actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there's a point in suffering when I could imagine myself saying something like "This is the worst thing ever; get me out of here no matter what."

Proponents or sympathizers of lexical NU (e.g. Tomasik) often make this claim, but I'm not at all persuaded. The hypothetical person you describe would beg for the suffering to stop even if continuing to experience it was necessary and sufficient to avoid an even more intense or longer episode of extreme suffering. So if this alleged datum of experience had the evidential force you attribute to it, it would actually undermine lexical NU.

It's also super hard to really understand what it's like to be in edge-case extreme suffering situations without actually being in one, and most people haven't.

It's even harder to understand what it's like to experience comparably extreme happiness, since evolutionary pressures selected for brains capable of experiencing wider intensity ranges of suffering than of happiness. The kind of consideration you invoke here actually provides the basis for a debunking argument of the core intuition behind NU, as has been noted by Shulman and others. (Though admittedly many NUs appear not to be persuaded by this argument.)

I'm a moral anti-realist. There's no strict reason why we can't have weird discontinuities in our utility functions if that's what we actually have.

Humans have all sorts of weird and inconsistent attitudes. Regardless of whether you are a realist or an anti-realist, you need to reconcile this particular belief of yours with all the other beliefs you have, including the belief that an experience that is almost imperceptibly more intense than another experience can't be infinitely (infinitely!) worse than it. Or, if you want a more vivid example, the belief that it would not be worth subjecting a quadrillion animals having perfectly happy lives to a lifetime of agony in factory farms solely to spare a single animal a mere second of slightly more intense agony just above the relevant critical threshold.

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-24T14:37:01.150Z · score: 2 (1 votes) · EA · GW
So the suffering focused ethic that I am proposing, does not imply that sadistic conclusion that you mentioned... My personal favorite suffering focused ethic is variable critical level utilitarianism: a flexible version of critical level utilitarianism where everyone can freely choose their own non-negative critical level

As long as the critical level is positive, critical-level utilitarianism does imply the sadistic conclusion. A population where everyone experiences extreme suffering would be ranked above a population where everyone is between neutrality and the critical level, provided the latter population is sufficiently large. The flexibility of the positive critical level can't help avoid this implication.
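
To spell out the arithmetic behind this, here is a minimal illustration; the welfare values and the critical level are assumptions chosen purely for the example:

$$V = \sum_i (w_i - c), \qquad c > 0.$$

Suppose population A contains $m$ people in extreme suffering, each at welfare $-100$, so $V(A) = m(-100 - c)$. Population B contains $N$ people each at welfare $c/2$ (between neutrality and the critical level), so $V(B) = -Nc/2$. For any fixed $m$ and any positive $c$, taking $N > 2m(100 + c)/c$ gives $V(B) < V(A)$: the theory ranks the extreme-suffering population above the sufficiently large positive-welfare one.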

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-24T11:40:14.261Z · score: 4 (2 votes) · EA · GW

Yes, I agree that lexical NU doesn't have that implication. My comment was addressed to the particular suffering-focused view I took Stijn to be defending, which he contrasted to CU. If his defence is of "suffering-focused views" as a whole, however, then it seems unfair to compare them to CU specifically, rather than to "classical views" generally. Classical views, taken as a family, can also avoid the repugnant and very repugnant conclusions, since some specific views in this family, such as critical-level utilitarianism, don't have these implications. [EDIT: Greg makes the same point in his comment; remarkably, we posted at exactly the same time.]

Concerning the merits of lexical NU, I just don't see how it's plausible to postulate a sharp value discontinuity along the suffering continuum. As discussed many times in the past, one can construct a series of pairwise comparisons involving painful experiences that differ only negligibly in their intensity. It is deeply counterintuitive that one of these experiences should be infinitely (!) worse than the other, but this is what the view implies. (I've only skimmed the essay, so please correct me if I'm misinterpreting it.)
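
One way to make the pairwise-comparison argument explicit (the lexical threshold $t$ and step size $\epsilon$ are purely illustrative assumptions): take pain intensities $i_0 < i_1 < \dots < i_n$ with $i_{k+1} - i_k < \epsilon$ for every $k$, where $i_0$ lies below the lexical threshold $t$ and $i_n$ lies above it. Then for some $k$ we have $i_k < t \le i_{k+1}$, and the view must rank any duration of experience at intensity $i_{k+1}$ as worse than any finite amount of experience at intensity $i_k$, even though the two intensities differ by less than $\epsilon$.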

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-24T02:46:49.882Z · score: 9 (4 votes) · EA · GW
Suffering focused ethics can also avoid the repugnant sadistic conclusion, which is the most counterintuitive implication of total utilitarianism that maximizes the sum of everyone’s welfare. Consider the choice between two situations. In situation A, a number of extremely happy people exist. In situation B, the same people exist and have extreme suffering (maximal misery), and a huge number of extra people exist, all with lives barely worth living (slight positive welfare). If the extra population in B is large enough, the total welfare in B becomes larger than the total welfare in A. Hence, total utilitarianism would prefer situation B, which is sadistic (there are people with extreme suffering) and repugnant (a huge number of people have lives barely worth living and no-one is very happy).

As pointed out recently, suffering-focused views imply that a population where everyone experiences extreme suffering is better than a population where everyone experiences extreme happiness plus a brief, mild instance of suffering, provided the latter population is sufficiently more numerous. This seems even more problematic than the implication you describe, since at least in that case you have a very large population enjoying "muzak and potatoes", whereas here there's no redeeming feature: extreme suffering is all that exists.
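
To make the comparison concrete under a strictly negative (suffering-only) view, with all numbers assumed purely for illustration: a population of $m$ people each enduring lifetime extreme suffering of magnitude $S$ scores $-mS$, while a population of $N$ people each enjoying extreme happiness plus a brief, mild pang of suffering of magnitude $\epsilon$ scores $-N\epsilon$, since the happiness contributes nothing. Whenever $N > mS/\epsilon$, the first population is ranked better, no matter how large $S$ or how small $\epsilon$.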

Comment by pablo_stafforini on Probability estimate for wild animal welfare prioritization · 2019-10-24T02:18:13.355Z · score: 11 (4 votes) · EA · GW
the repugnant sadistic conclusion of total utilitarianism

Note that total utilitarianism does not lead to what is known as the "sadistic conclusion". That conclusion was originally introduced by Arrhenius, and arises when a theory implies that adding a number of people each with net negative welfare to a population can be better than adding some (usually larger) number of people each with net positive welfare to that population.

Given what you say in the rest of the paragraph, I think by 'repugnant sadistic conclusion' you mean what Arrhenius calls the 'very repugnant conclusion', which is very different from the sadistic conclusion. (Personally, I think the sadistic conclusion is a much more serious problem than the repugnant conclusion or even the very repugnant conclusion, so it's important to be clear about which of these conclusions is implied by total utilitarianism.)

Comment by pablo_stafforini on Conditional interests, asymmetries and EA priorities · 2019-10-22T20:10:01.573Z · score: 6 (4 votes) · EA · GW

Interesting example. I have never taken such pills, but if they simply intensify the ordinary experience of sleepiness, I'd say that the reason I (as a CU) don't try to stay awake is that I can't dissociate the pleasantness of falling asleep from actually falling asleep: if I were to try to stay awake, I would also cease to have a pleasant experience. (If anyone knows of an effective dissociative technique, please send it over to Harri Besceli, who once famously remarked that "falling asleep is the highlight of my day.")

More generally, I think cases of this sort have rough counterparts for negative experience, e.g. the act of scratching an itch, or of playing with a loose tooth, despite the concomitant pain induced by those activities. I think such cases are sufficiently marginal, and susceptible to alternative explanations, that they do not pose a serious problem to either (1) or (2).

Comment by pablo_stafforini on Conditional interests, asymmetries and EA priorities · 2019-10-22T17:35:18.118Z · score: 4 (2 votes) · EA · GW
I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

The relevant comparison, I think, is between (1) someone who experiences suffering and wants this suffering to stop and (2) someone who experiences happiness and wants this happiness not to stop. It seems that you and Michael think that one can plausibly deny only (2), but I just don't see why that is so, especially if one focuses on comparisons where the positive and negative experiences are of the same intensity. Like Paul, I think the two scenarios are symmetrical.

[EDIT: I hadn't seen Paul's reply when I first posted my comment.]

Comment by pablo_stafforini on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-19T02:51:02.777Z · score: 8 (4 votes) · EA · GW

The latest Alignment Newsletter (published today) includes a review of Russell's book by Rohin Shah. Perhaps he can publish it on Amazon and/or GoodReads?

Comment by pablo_stafforini on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-18T12:03:42.676Z · score: 8 (5 votes) · EA · GW

Pinker lists ideology as one of his five "inner demons" in The Better Angels of our Nature, together with predatory violence, dominance, sadism and revenge.

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-18T11:50:21.766Z · score: 4 (2 votes) · EA · GW

Thank you for those references!

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-08T18:30:02.131Z · score: 7 (5 votes) · EA · GW
ALSO, UN was created post WW2. Maybe we only have appetite for major international cooperation after nasty wars?

This seems like a point worth highlighting, especially vis-à-vis Bostrom's own views about the importance of global governance in 'The Vulnerable World Hypothesis'. It's also worth noting that the League of Nations was created in the aftermath of WW1.

Comment by pablo_stafforini on JP's Shortform · 2019-10-08T11:24:17.255Z · score: 5 (3 votes) · EA · GW
Although I do think it's possible the Forum shouldn't let you change away from those defaults.

I am in favor of these defaults and also in favor of not letting people change them. I know of two people on LW who have admitted to strong-upvoting their own comments, and my sense is that this behavior isn't that uncommon (to give a concrete estimate: I'd guess about 10% of active users do this on a regular basis). Moreover, some of the people who may be initially disinclined to upvote themselves might start to do so if they suspect others are, both because the perception that a type of behavior is normal makes people more willing to engage in it, and because the norm to exercise restraint in using the upvote option may seem unfair when others are believed not to be abiding by it. This dynamic may eventually cause a much larger fraction of users to regularly self-upvote.

So I think these are pretty strong reasons for disallowing that option. And I don't see any strong reasons for the opposite view.

Comment by pablo_stafforini on Andreas Mogensen's "Maximal Cluelessness" · 2019-10-08T10:55:34.452Z · score: 5 (3 votes) · EA · GW

Very interesting comment!

To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car.

I don't think this defence works, because some of your current preferences are manifestly about future events. Insisting that all these preferences are ultimately about the most immediate causal antecedent (1) misdescribes our preferences and (2) lacks a sound theoretical justification. You may think that Parfit's arguments against S provide such a justification, but this isn't so. One can accept Parfit's criticism and reject the view that what is rational for an agent is to maximize their lifetime wellbeing, accepting instead a view on which it is rational for the agent to satisfy their present desires (which, incidentally, is not Parfit's view). This in no way rules out the possibility that some of these present desires are aimed at future events. So the possibility that you may be clueless about which course of action satisfies those future-oriented desires remains.

Comment by pablo_stafforini on JP's Shortform · 2019-10-08T09:57:07.125Z · score: 8 (2 votes) · EA · GW

On the whole, I really like the search engine. But one small bug you may want to fix is that occasionally the wrong results appear under 'Users'. For example, if you type 'Will MacAskill', the three results that show up are posts where the name 'Will MacAskill' appears in the title, rather than the user Will MacAskill.

EDIT: Mmh, this appears to happen because a trackback to Luke Muehlhauser's post, 'Will MacAskill on Normative Uncertainty', is being categorized as the name of a user. So, not a bug with the search engine as such, but still something that the EA Forum tech team may want to fix.

Comment by pablo_stafforini on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T09:29:26.285Z · score: 25 (9 votes) · EA · GW

Thank you. Your comment has caused me to change my mind somewhat. In particular, I am now inclined to believe that getting people to actually read the material is, for a significant fraction of these people, a more serious challenge than I previously assumed. And if CFAR's goal is to selectively target folks concerned with x-risk, the benefits of ensuring that this small, select group learns the material well may justify the workshop format, with its associated costs.

I would still like to see more empirical research conducted on this, so that decisions that involve the allocation of hundreds of thousands of EA dollars per year rest on firmer ground than speculative reasoning. At the current margin, I'd be surprised if a dollar given to CFAR to do object-level work achieves more than a dollar spent in uncovering "organizational crucial considerations"—that is, information with the potential to induce a major shift in the organization's direction or priority. (Note that I think this is true of some other EA orgs, too. For example, I believe that 80k should be using randomization to test the impact of their coaching sessions.)

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-07T23:37:28.699Z · score: 7 (4 votes) · EA · GW

Personally, I don't find that skeptical comments like Max's discourage me from ideating. And the suggestion to keep ideation and evaluation separate might discourage the latter, since it's actually not obvious how to operationalize 'keeping separate'.

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-07T23:30:11.320Z · score: 7 (5 votes) · EA · GW

In this talk on 'Crucial considerations and wise philanthropy', Nick Bostrom tentatively mentions some actions that appear to be robustly x-risk reducing, including promoting international peace and cooperation, growing the effective altruism movement, and working on solutions to the control problem.

Comment by pablo_stafforini on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T23:22:10.572Z · score: 18 (9 votes) · EA · GW

Ah, but should you familiarize yourself with the literature on familiarizing yourself with the literature before writing an EA Forum post?