Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z · score: 28 (13 votes)
Good Done Right conference 2020-02-04T13:21:02.903Z · score: 41 (22 votes)
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z · score: 25 (10 votes)
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z · score: 32 (13 votes)
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z · score: 6 (1 votes)
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z · score: 27 (13 votes)
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z · score: 57 (20 votes)
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z · score: 38 (14 votes)
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z · score: 20 (9 votes)
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z · score: 49 (20 votes)
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z · score: 16 (9 votes)
A bunch of new GPI papers 2019-09-25T13:32:37.768Z · score: 102 (39 votes)
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z · score: 46 (16 votes)
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)


Comment by pablo_stafforini on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T14:06:15.984Z · score: 6 (3 votes) · EA · GW

Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.

Comment by pablo_stafforini on Effects of anti-aging research on the long-term future · 2020-02-28T02:29:46.830Z · score: 3 (2 votes) · EA · GW

I'm also interested.

Anders Sandberg discusses the issue a bit in one of his conversations with Rob Wiblin for the 80k Podcast.

Comment by pablo_stafforini on Why SENS makes sense · 2020-02-22T21:01:46.434Z · score: 19 (10 votes) · EA · GW
I once wrote a comment on the effective altruism subreddit that tried to explain why aging didn't get much attention in EA despite being so important, and I think it's worth reproducing here.

For background, here's the comment I wrote:

Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.
Comment by pablo_stafforini on Cost-Effectiveness of Aging Research · 2020-02-21T13:23:47.445Z · score: 2 (1 votes) · EA · GW
Crossposted from Hourglass Magazine

The entire "magazine" seems to have gone offline. SAD!

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-20T18:46:10.324Z · score: 5 (3 votes) · EA · GW

Thanks for your comment. I endorse it as a more accurate and nuanced version of the position my previous comment expressed. Agreed 100%.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T13:03:35.630Z · score: 3 (2 votes) · EA · GW

Yeah, see my reply to Tobias.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T13:02:03.734Z · score: 11 (5 votes) · EA · GW
I suspect that these results are very sensitive to model assumptions, such as tactical voting behaviour. But it would be interesting to see more work on VSE.

I agree with this. An approach I find promising is that of Nicolaus Tideman & Florenz Plassmann. In one study, the authors consider several different statistical models, use them to simulate actual elections, and rank the models by how well they approximate actual results. Then, in a subsequent study, the authors use the top-ranking model from their previous study to evaluate a dozen or so alternative voting rules, finding that plurality, anti-plurality, and Bucklin perform worst. As far as I'm aware, this is the only attempt to assess voting rules by conducting simulations with a model that has been pre-fitted to actual election data. I believe that extending this approach may be among the most impactful research within this cause area.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T02:13:12.351Z · score: 29 (13 votes) · EA · GW

Thanks for writing this—I think electoral reform is an interesting and important cause area.

[Approval voting] fails the later-no-harm criterion

All voting systems violate intuitively desirable conditions, so noting that some system violates some condition is in itself no reason to favor other systems. One needs to look at the full picture, see what conditions are violated by what systems, and pick the system that minimizes weight-adjusted violations. (There is a clear parallel here between voting theory and population ethics: impossibility theorems have demonstrated in both fields that there exists no voting rule or population axiology that satisfies all intuitively plausible desiderata, so violation of a condition can't be adduced as a reason for rejecting the rule or axiology that violates it.)

But there is a much better approach, namely, to assess different systems by their "voter satisfaction efficiency" (VSE). Instead of relying on adequacy conditions, this approach considers the preferences that the electorate has for rival candidates and deals with them using the apparatus of expected utility theory. Each candidate is scored by the degree to which they satisfy the preferences of each voter, and rival voting systems are then scored by the expected voter satisfaction of the candidates they elect. Monte Carlo simulations independently performed by Warren Smith, Jameson Quinn and others generally find that approval voting has higher VSE than instant-runoff voting, and that both have much higher VSE than plurality voting.
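To make the method concrete, here is a minimal Monte Carlo sketch of a VSE calculation. This is not the code used by Smith or Quinn: it assumes a simplistic "impartial culture" model (i.i.d. uniform utilities) and a naive above-mean approval strategy, both of which the published VSE studies refine considerably.

```python
import random

def simulate_vse(rule, n_voters=99, n_candidates=5, n_elections=2000, seed=0):
    """Estimate voter satisfaction efficiency for a voting rule:
    VSE = (E[u(winner)] - E[u(random candidate)]) /
          (E[u(best candidate)] - E[u(random candidate)])."""
    rng = random.Random(seed)
    winner_u = best_u = random_u = 0.0
    for _ in range(n_elections):
        # Impartial culture: each voter draws an i.i.d. uniform
        # utility for each candidate.
        utils = [[rng.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        votes = [0] * n_candidates
        for u in utils:
            if rule == "plurality":
                votes[u.index(max(u))] += 1  # vote for favourite only
            elif rule == "approval":
                mean = sum(u) / n_candidates
                for c, uc in enumerate(u):   # approve above-mean candidates
                    if uc > mean:
                        votes[c] += 1
        winner = votes.index(max(votes))
        totals = [sum(u[c] for u in utils) for c in range(n_candidates)]
        winner_u += totals[winner]
        best_u += max(totals)
        random_u += sum(totals) / n_candidates
    return (winner_u - random_u) / (best_u - random_u)
```

Even under this crude model, approval voting comes out well ahead of plurality, which is the qualitative pattern the published simulations report.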

Given these results, I think the priority for EAs is to support whichever alternatives to plurality voting are most viable in a particular jurisdiction, rather than obsess over which of these alternatives to plurality is the absolute best. Of course, I also think it makes sense to continue to research the field, and especially to refine the models used to compute VSE. What EAs definitely shouldn't do, in my opinion, is spend considerable resources discrediting alternatives to plurality other than one's own preferred system, as FairVote has repeatedly done with respect to approval voting. Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).

(In case it isn't obvious, I'm definitely not saying that you have done this in your essay; I'm rather highlighting a serious failure mode I see in the "voting reform" community that I believe we should strive to avoid.)

Comment by pablo_stafforini on Empirical data on value drift · 2020-02-17T14:00:32.977Z · score: 2 (1 votes) · EA · GW
a quick look would suggest ~75% moved from 50% to 10%

So, to confirm: are you saying that maybe 5 out of the 7 people who moved out of the 50% category moved into the 10% category? I think it's important to get clarity on this, since until encountering this comment I was interpreting your post (perhaps unreasonably) as saying that those 7 people had left the EA community entirely. If in fact only a couple of people in that class left the community, out of a total of 16, that's a much lower rate of drift than I was assuming, and more in line with anonymous's analysis of value drift in the original CEA team.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-02-16T22:35:11.596Z · score: 31 (13 votes) · EA · GW
Its interesting to note that I got downvoted for giving excellent sources. While you got upvoted for reading the articles and commenting. Basically I am outgroup/outcaste in EA.

I'm not sure I'm the right person to comment on this, given that I'm one of the parties involved, but I'll provide my perspective here anyway in case it is of any help or interest.

I don't think you are characterizing this exchange or the reasons behind the pattern of votes accurately. Bruno asked you to provide a source in support of the following claim, which you made four comments above:

One child policy had no effect on China's population size. It was their widespread education pre-1979 than reduced fertility.

In response to that request, you provided two sources. I looked at them and found that both failed to support the assertion that "It was [China's] widespread education pre-1979 than reduced fertility", and that one directly contradicted it.

I didn't downvote your comment, but I don't think it's unreasonable to expect some people to downvote it in light of this revelation. In fact, on reflection I'm inclined to favor a norm of downvoting comments that incorrectly claim that a scholarly source supports some proposition, since such a norm would incentivize epistemic hygiene and reduce the incidence of information cascades. I do agree with you that ingroup/outgroup dynamics sometimes explain observed behavior in the EA community, but I don't think this is one of those cases. As one datapoint confirming this, consider that a month or two ago, when I pointed out that someone had mischaracterized the main theses of a paper, that person's comment was heavily downvoted, despite this user being a regular commenter and not someone (I think) generally perceived to be an "outsider".

Moving to the object level, in your recent comment you appear to have modified your original contention. Whereas before you stated that "widespread education" was the factor explaining China's reduced fertility, now you state that education was one factor among many. Although this difference may seem minor, in the present context it is crucial, because both in comments to this post and elsewhere in the Forum you have argued that EAs should prioritize education over growth. Yet if education is only one of several factors behind the fertility reduction in China, your position cannot derive any support from the Chinese experience.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-02-15T18:26:41.001Z · score: 27 (10 votes) · EA · GW

I actually took the time to look at those two sources, and as far as I can tell they provide no support whatsoever for your claim that "It was [China's] widespread education pre-1979 that reduced fertility." The word 'education' occurs exactly once in the first article, and in a sentence that doesn't make any claims about education reducing fertility. As for the second article, to the extent that it attributes the fertility decline to anything, it attributes it not to "education", but to economic development (pp. 158-159):

The third fatal problem with the “400 million births prevented” claim is that it totally ignores the most significant source of fertility decline worldwide: economic development... China’s rapid economic development since 1980 deserves the lion’s share of the credit for the [fertility decline].
Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T16:45:40.353Z · score: 2 (1 votes) · EA · GW

I just thought it would be valuable to recalculate the estimated rates of attrition with this new data, though I think it's totally fine for you to deprioritize this.

Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T14:41:36.087Z · score: 4 (2 votes) · EA · GW
This is more accurate than email tracking in that it captures more people (such as those who didn’t give an email or those who changed emails), but less accurate in that it is possible that people who state they joined EA earlier could still show up just on later surveys and offset people who dropped off, making the retention rate appear higher than it actually is.

Why should the possibility of early EAs failing to take early surveys inflate the retention rate more than the possibility of early EAs failing to take later surveys deflate it? Shouldn't we expect these two effects to roughly cancel each other out? If anything, I would expect EAs in a given cohort to be slightly less willing to participate in the EA survey with each successive year, since completing the survey becomes arguably more tedious the more you do it. If so, this methodology should slightly underestimate, rather than overestimate, the true retention rate. Apologies if I'm misunderstanding the reasoning here.
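The reasoning above can be illustrated with a toy model (purely illustrative, with made-up parameters; it is not the survey team's actual methodology). Suppose every still-active member of a cohort responds to each survey independently with some probability. If that probability is the same in both years, people who skip the first survey but take the second offset people who do the reverse, and the cohort-ratio estimate of retention is approximately unbiased for a large cohort; if response rates decline with each successive survey, the estimate understates retention.

```python
import random

def apparent_retention(true_retention, p_first, p_second, n=20000, seed=0):
    """Toy model: `n` people join before survey 1; a fraction
    `true_retention` is still in the community at survey 2. Each person
    still around responds to each survey independently (with probability
    p_first and p_second respectively). Returns the cohort-size ratio
    used as a retention estimate."""
    rng = random.Random(seed)
    in_first = in_second = 0
    for _ in range(n):
        if rng.random() < p_first:
            in_first += 1
        # Only retained members can appear in the second survey.
        if rng.random() < true_retention and rng.random() < p_second:
            in_second += 1
    return in_second / in_first
```

With equal response rates the estimate lands near the true retention; with a lower second-year response rate it understates retention, which is the direction of bias this comment suggests.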

Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T13:18:18.411Z · score: 2 (1 votes) · EA · GW

Are you planning to update the analysis with data from the 2019 survey?

Comment by pablo_stafforini on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-14T23:36:50.501Z · score: 3 (2 votes) · EA · GW

Note that there is now a Metaculus prize for questions and comments related to the coronavirus outbreak. Here you can see the existing questions in this series.

Comment by pablo_stafforini on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T00:49:16.217Z · score: 8 (4 votes) · EA · GW

I think he means that they do have to disclose if they're romantically involved. Perhaps replace 'or is' with 'nor is' to make it clearer.

Comment by pablo_stafforini on Fireside Chat with Philip Tetlock · 2020-02-05T13:40:15.342Z · score: 8 (5 votes) · EA · GW

Just wanted to say that I'm really glad all these talks are being transcribed!

Comment by pablo_stafforini on Announcing the Bentham Prize · 2020-02-04T13:28:38.153Z · score: 13 (5 votes) · EA · GW

First round of prizes announced. Congratulations to user haven and to our very own AABoyles and PeterHurford!

Comment by pablo_stafforini on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T02:50:51.959Z · score: 6 (3 votes) · EA · GW

Tyler Cowen has written about this in his post "A Bet is a Tax on Bullshit".

This doesn't affect your point, but I just wanted to note that the post—including the wonderful title—was written by Alex Tabarrok.

Comment by pablo_stafforini on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:09:39.829Z · score: 57 (24 votes) · EA · GW

Some concrete problems I see with your choice:

  • The name of an organization should ideally consist of two or three main words, perhaps four if there are strong enough reasons. Yours has six.
  • The acronym formed by the name should ideally be pronounceable and aesthetically pleasing. I'm not sure CEEALAR is pronounceable. I don't think it's pleasing.
  • The rules for generating the acronym should ideally be consistent. Either all articles and prepositions are included (e.g. CFAR) or none are (e.g. CEA). In 'Centre for Enabling EA Learning and Research', CEEALAR includes 'and' but excludes 'for'.
  • [Note: Greg tells me that the name needs to be intelligible to the Charity Commission, so I retract this bullet point] The name need not provide a full description of the nature of the organization, or even be intelligible to newcomers. Those are desiderata, but may be trumped by other considerations. Consider, e.g., 80,000 Hours: no one would ever guess what they do just from the name alone, but it is still adequate, and much better than, say, Career and Coaching Services for Young Effective Altruists (CACSYEA).

I'll try to think of some concrete suggestions later, but all of Jonas's proposals look superior to CEEALAR, in my opinion. If you don't like the word 'Hotel' because of its for-profit connotations, how about replacing it with 'House'?

You may also want to consider creating a poll on an EA Facebook group, just like other EA orgs which went through a process of rebranding did in the past (e.g. Stefan Torges created one such poll a couple of months ago asking for alternatives to 'Foundational Research Institute').

I hope this doesn't come across as overly critical. Congratulations on putting in all the time and effort required to get the (former) EA Hotel registered as a proper charity!

EDIT: See also Ryan's comment.

Comment by pablo_stafforini on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T15:02:28.208Z · score: 50 (21 votes) · EA · GW

I second the suggestion to put at least a bit of thinking into coming up with more memorable, pronounceable and authoritative alternatives, if it's not too late already. Really, this is an acronym that will last for years or decades, will be written and uttered thousands of times, and will often be the very first thing someone will see or hear when exposed to the organization.

Comment by pablo_stafforini on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T23:23:00.413Z · score: 11 (4 votes) · EA · GW

A study published today attempts to estimate the nCoV incubation period:

Using the travel history and symptom onset of 34 confirmed cases that were detected outside Wuhan, we estimate the mean incubation period to be 5.8 (4.6 – 7.9, 95% CI) days, ranging from 1.3 to 11.3 days (2.5th to 97.5th percentile).

As the authors note, this estimate indicates an incubation period remarkably similar to that of the Middle East respiratory syndrome.

Comment by pablo_stafforini on When to post here, vs to LessWrong, vs to both? · 2020-01-27T12:19:31.448Z · score: 8 (5 votes) · EA · GW

Not an answer to your question, but I think it would be nice to have a consolidated comments thread for posts that are cross-posted to both forums. At the very least, it would be an informative experiment. I'm not sure how technically challenging this would be, but since the EA Forum is based on the LW codebase, I'd imagine it shouldn't be that difficult.

Comment by pablo_stafforini on Love seems like a high priority · 2020-01-25T00:44:10.310Z · score: 3 (2 votes) · EA · GW

Your comment helped me understand this discussion better. It seems I was indeed assuming causality in the stronger sense, though I now see there wasn't much justification for this assumption. As you point out, the stronger sense would fail to vindicate many relationships we generally take to be causal.

I still feel reluctant to assert, on the basis of the data you provided, that marriage causes death. Maybe it's because I'm not sure what type of link exists between marriage and higher rates of childbirth. Though it seems clear that married people have more children, I'm not sure it's correct to say that marriage causes people to have more children. People often get married because they want to have children. Even when this is not the initial motivation, it seems odd to say that marriage explains why these people have children. By contrast, the link between smoking and cancer seems much tighter.

I haven't thought much about whether the causal attributions we make in social science tend to be more similar to "marriage causes higher rates of childbirth" or to "smoking causes higher rates of cancer".

Comment by pablo_stafforini on Love seems like a high priority · 2020-01-24T18:43:43.856Z · score: 2 (1 votes) · EA · GW

there's literally a strong causal relationship between marriage and having a shorter lifespan.

What causal relationship are you alluding to? As far as I can tell, the data you mention three comments above establishes a correlation between marriage and mortality, not causation. Moreover, that data also appears to show that the mechanism implicated in this correlation is complications during childbirth, which rules out marriage as the causal factor.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T22:58:18.136Z · score: 17 (8 votes) · EA · GW
The track record of attempts to overthrow any system of power are abysmal

I think you are seriously mistaken. Attempts to overthrow monarchy do not remotely have the track record of attempts to overthrow capitalism. Compare, say, the American and French revolutions of the 18th century with the Russian and Chinese revolutions of the 20th century.

[I have edited my comment to make it less confrontational.]

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T18:33:38.095Z · score: 18 (10 votes) · EA · GW

The author didn't say that all "left/socialist" policies are bad. The first sentence of his comment reads:

This post reminds me of a common left/socialist reaction to EA: “Charity is pointless, overthrowing capitalism is clearly the best way to increase human welfare."

When he later writes that "[t]he best reply to the left/socialists is probably that their empirical track record is much worse", he is referring specifically to the empirical track record of attempts to overthrow capitalism, which is indisputably abysmal.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-01-17T14:11:54.262Z · score: 28 (10 votes) · EA · GW
Randomista is clearly not a neutral term, and I think constitutes a kind of name calling

What's your basis for claiming that 'randomista' is a non-neutral term? That is not my impression. A popular book that presents a positive picture of the field is titled Randomistas: How Radical Researchers Are Changing Our World. A recent article in one of the world's most prestigious science journals uses the headline "‘Randomistas’ who used controlled trials to fight poverty win economics Nobel", and includes the following line: "Kremer, Banerjee and Duflo are at the vanguard of the ‘randomista’ movement, which applies the methods of rigorous medical trials — in which large numbers of participants are randomized to receive either a particular intervention or a standard treatment, and followed over time — to social interventions such as improving education." And Mark Ravallion, a leading authority on the economics of poverty, explicitly writes: "That term 'randomistas' is not pejorative." (p. 2)

Comment by pablo_stafforini on Response to recent criticisms of EA "longtermist" thinking · 2020-01-16T20:20:27.031Z · score: 22 (7 votes) · EA · GW
They take total utilitarian axiology and EV maximization for granted in their main arguments

I think this is a very misleading characterization of the paper. The passage you quoted is part of a paragraph which reads as follows (emphasis added):

Our discussion above was conducted on the assumption of (i) a total utilitarian axiology and (ii) an expected-value approach to ex ante evaluation under uncertainty. Both of these assumptions are at least somewhat controversial. The present section examines the extent to which our arguments would be undermined by various ways of deviating from those assumptions. Broadly, the upshot will be that the case for strong longtermism is quite robust to plausible deviations from these starting axiological and decision-theoretic assumptions.

Moreover, this is not a claim incidental to the paper; it is one of the paper's central claims. As the authors write in the introductory section:

Our aim in this paper is to expand on this prior work in four ways... Second, we show that the argument goes through on a wide range of axiologies and decision theories, not only on the combination of total utilitarianism and expected utility theory.

In other words, one of the four key arguments made in the paper is that the case for axiological strong longtermism does not require the acceptance of a total utilitarian axiology or expected utility theory.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-15T23:40:30.502Z · score: 4 (2 votes) · EA · GW
After more thought, we’ve decided that we will change the name to “Forum Favorites”

Great, thank you!

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-15T23:39:41.551Z · score: 2 (1 votes) · EA · GW

Thanks for the reply. I think it's totally fine for you to deprioritize this suggestion—not very important.

Comment by pablo_stafforini on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:10:53.748Z · score: 9 (7 votes) · EA · GW

That wasn't so boring.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-11T12:28:53.019Z · score: 6 (3 votes) · EA · GW

I think in this case the fault lies entirely with me, given the number of different ways one can see a list of all the most recent posts.

(My original bullet point also mentioned that sorting by recency seemed like a preferable way to display posts anyway, and for this reason I concluded that this should be the default display. But in his reply Oli mentioned some important drawbacks that I had overlooked, so I no longer believe this.)

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T23:47:54.763Z · score: 9 (3 votes) · EA · GW

Ah, I hadn't noticed the 'All-posts page'. That addresses my needs, thanks. And point taken about the drawbacks of recency sorting. I retract that part of my comment.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T18:54:19.572Z · score: 13 (4 votes) · EA · GW

I now realize I had already seen that post. Perhaps my memory is faulty, or perhaps the distinction between Frontpage and Community is not one that sticks. A couple of comments:

In general, I think it's not a good sign if a central feature of a website isn't self-explanatory, but instead requires the reading of a detailed explanation. Moreover, in this case the explanation is buried in a post that new users are unlikely to encounter (and at least some old users are apt to forget). But, more fundamentally, I just don't see a compelling reason for categorizing posts in this complicated manner to begin with. Why not just have a "curated" category to promote posts that stand out in the relevant dimensions, like LessWrong does? Or dispense with the idea of "promoted" posts altogether, and let the karma system do the work. Keep it simple, stupid.

Comment by pablo_stafforini on Pablo_Stafforini's Shortform · 2020-01-09T15:10:48.235Z · score: 24 (10 votes) · EA · GW

I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage has a number of problems:

  • The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts in the home page that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage.
  • [Note: in light of Oli's comment below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the home page regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well. With the current sorting algorithm, there's no way for me to insure that my browsing session has exhausted all the posts seen since the previous session.
  • I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast to "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated vs. non-curated posts, and that between posts with a community focus vs. posts with a focus on other aspects of EA; and it is unclear in terms of which of these the 'Community' category is defined. To make things even more confusing, the 'Community Favorites' section doesn't appear to employ the term 'Community' in either of those senses; indeed, the term seems to be used with the opposite meaning of "non-curated", since the "Community Favorites" consists of a list of "all-time greatest posts".
Comment by pablo_stafforini on EA Forum Prize: Winners for September 2019 · 2020-01-09T13:10:03.966Z · score: 15 (6 votes) · EA · GW

[Meta] Any reason why this post is still pinned?

Comment by pablo_stafforini on Response to recent criticisms of EA "longtermist" thinking · 2020-01-06T13:22:42.444Z · score: 21 (9 votes) · EA · GW
Reason 1 [for disagreeing with longtermism]: You don't believe that very large numbers of people in the far future add up to being a very big moral priority. For instance, you may take a Rawlsian view, believing that we should always focus on helping the worst-off.

It's not clear that, of all the people that will ever exist, the worst-off among them are currently alive. True, the future will likely be on average better than the present. But since the future potentially contains vastly more people, it's also more likely to contain the worst-off people. Moreover, work on S-risks by Tomasik, Gloor, Baumann and others provides additional reason for expecting such people—using 'people' in a broad sense—to be located in the future.

Comment by pablo_stafforini on Effective Altruism Blogs · 2019-12-29T22:20:56.652Z · score: 4 (2 votes) · EA · GW

Thanks. As noted in the update, this list is no longer updated. Please see

Comment by pablo_stafforini on Max_Daniel's Shortform · 2019-12-17T14:32:20.197Z · score: 2 (1 votes) · EA · GW

[deleted because the question I asked turned out to be answered in the comment, upon careful reading]

Comment by pablo_stafforini on EA Forum Prize: Winners for October 2019 · 2019-12-12T02:13:47.809Z · score: 2 (1 votes) · EA · GW

Thanks. Just to be clear: before your edit, there was no thread linked, or at least no link showed up on my browser. I mention this in case it reflects a bug with the site rather than an oversight.

Comment by pablo_stafforini on EA Forum Prize: Winners for October 2019 · 2019-12-11T12:42:13.668Z · score: 4 (2 votes) · EA · GW

Max Daniel is listed as one of the four recipients of a Comment Prize, but no comment is listed.

Comment by pablo_stafforini on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T13:00:27.385Z · score: 43 (18 votes) · EA · GW

You have been part of the effective altruism movement since its inception. What are some interesting or important ways in which you think EA has changed over the years?

Comment by pablo_stafforini on A list of EA-related podcasts · 2019-11-27T19:55:04.759Z · score: 17 (13 votes) · EA · GW

Thanks for compiling this.

I've created a ListenNotes list with all the "Strongly EA-related podcasts" and a few others here. It displays the most recent episode from each of those podcasts and lets you import them all easily to your favorite podcast app.

Comment by pablo_stafforini on Are comment "disclaimers" necessary? · 2019-11-26T20:34:37.673Z · score: 3 (2 votes) · EA · GW
If you're adding a disclosure already, surely having it be a disclaimer also isn't more distracting?

I agree with this. But my sense is that only a small fraction of the comments which include a disclaimer are also comments which include or should include a disclosure. So the fact that it's not more distracting to have both than only a disclaimer doesn't influence my general thinking about disclaimers much.

There's also the separate argument that adding disclaimers runs the risk of changing expectations about what can be inferred from posts that lack them. Other things equal, I would prefer to support the conversational norm that no one is speaking in a professional capacity unless they say so explicitly or it is otherwise obvious from context.

Comment by pablo_stafforini on Are comment "disclaimers" necessary? · 2019-11-24T18:03:16.559Z · score: 14 (6 votes) · EA · GW

Thanks. I agree it probably makes sense to add such statements when your posts or comments could be seen as promoting an organization you work for. The general argument for disclosing potential conflicts of interest applies here.

While I didn't make it clear in my question, the cases I had in mind are not cases of this sort. Rather, I was thinking of cases in which the purpose of the disclaimer is to indicate that the views one expresses should not be interpreted as representing those of one's organization.

Larks draws a useful distinction between disclosures and disclaimers, which corresponds to these two different cases. I sympathize with his arguments for concluding that, while disclosures are desirable, disclaimers are unnecessary.

Comment by pablo_stafforini on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T12:50:37.874Z · score: 9 (6 votes) · EA · GW

Hey Mike,

I'm a community moderator at Metaculus and am generally interested in creating more EA-relevant questions. Are your predictions explicitly listed somewhere? It would be great to add at least some of them to the site.

Comment by pablo_stafforini on How to find EA documents on a particular topic · 2019-11-19T11:50:36.792Z · score: 8 (5 votes) · EA · GW

The search box on will run a search restricted to all and only those domains tracked by that website. Google Custom Search, however, doesn't work well, and results will only include a tiny subset of all occurrences of a given search term (John reports a similarly frustrating experience with this service). If anyone has suggestions for alternatives, please let me know.

Comment by pablo_stafforini on What areas of maths are useful across disciplines? · 2019-11-18T13:24:48.775Z · score: 2 (1 votes) · EA · GW


Note that answering those questions doesn't require any advanced knowledge of statistics. Completing AP Statistics or an equivalent introductory course should suffice.

Comment by pablo_stafforini on What areas of maths are useful across disciplines? · 2019-11-17T22:06:54.281Z · score: 12 (4 votes) · EA · GW

I never studied maths or any math-heavy discipline formally (my background is in philosophy), but recently I completed the entire Khan Academy math curriculum. Speaking purely from personal experience, the most valuable math I learned was just basic algebra I had studied in high school but never really mastered. Besides that, I'd say statistics, linear algebra, and parts of calculus (especially series) have been the most useful so far.

Brian Tomasik's great article on education matters for altruism has a section listing useful disciplines and areas. Within maths, it mentions "probability, real analysis, abstract algebra, and general 'mathematical sophistication'" (statistics is also listed, but as a separate discipline).