Posts

Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z · score: 22 (12 votes)
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z · score: 29 (14 votes)
Good Done Right conference 2020-02-04T13:21:02.903Z · score: 42 (23 votes)
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z · score: 25 (10 votes)
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z · score: 32 (13 votes)
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z · score: 6 (1 votes)
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z · score: 27 (13 votes)
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z · score: 57 (20 votes)
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z · score: 38 (14 votes)
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z · score: 20 (9 votes)
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z · score: 49 (20 votes)
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z · score: 16 (9 votes)
A bunch of new GPI papers 2019-09-25T13:32:37.768Z · score: 102 (39 votes)
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z · score: 46 (16 votes)
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)

Comments

Comment by pablo_stafforini on Information hazards: a very simple typology · 2020-07-16T13:33:01.694Z · score: 2 (1 votes) · EA · GW

You credit Anders Sandberg here and elsewhere, but you don't provide a reference. Where did Sandberg propose the typology that inspired yours? A search for 'direct information hazard' (the expression you attribute to Sandberg) only results in this post and your LW comment.

Comment by pablo_stafforini on Mike Huemer on The Case for Tyranny · 2020-07-16T12:22:16.937Z · score: 2 (1 votes) · EA · GW
This is how our species is going to die. Not necessarily from nuclear war specifically, but from ignoring existential risks that don’t appear imminent‌ at this moment. If we keep doing that, eventually, something is going to kill us – something that looked improbable in advance, but that, by the time it looks imminent, is too late to stop.
Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T14:22:11.311Z · score: 8 (3 votes) · EA · GW
Generally, I'd like to hear more about how different people introduce the ideas of EA, longtermism, and specific cause areas. There's no clear-cut canon, and effectively personalizing an intro can be difficult, so I'd love to hear how others navigate it.

This seems like a promising topic for an EA Forum question. I would consider creating one and reposting your comment as an answer to it. A separate question is probably also a better place to collect answers than this thread, which is best reserved for questions addressed to Ben and Ben's answers to those questions.

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:26:49.810Z · score: 19 (9 votes) · EA · GW

Which of the EA-related views you hold are the least popular within the EA community?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:22:53.876Z · score: 53 (22 votes) · EA · GW

Have you considered doing a joint standup comedy show with Nick Bostrom?

Comment by pablo_stafforini on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:13:41.551Z · score: 5 (3 votes) · EA · GW

What writings have influenced your thinking the most?

Comment by pablo_stafforini on The 80,000 Hours podcast should host debates · 2020-07-12T00:29:54.048Z · score: 6 (4 votes) · EA · GW

I'm happy to hear that you are keen on the anti-debates idea! I suggested it to the EA Global organizers a few years ago, but it seems they weren't very interested. (Incidentally, the idea isn't Will's, or mine; it dates back at least to this debate between David Chalmers and Giulio Tononi from 2016.)

A possible variant is to randomize whether the debate will or will not be reversed, and challenge the audience to guess whether the debaters are arguing for their own positions or their opponents', disclosing the answer only at the end of the episode. (In some cases, or for some members of the audience, the answer will be obvious from background information about the debaters, but it's unclear how often this will be the case.)

EDIT: I now see that I misunderstood what was meant by an 'anti-debate': not a debate where each person defends the opposite side, but rather a debate that is collaborative rather than competitive. I personally would be interested in anti-debates in either of those senses.

Comment by pablo_stafforini on Forecasting Newsletter: June 2020 · 2020-07-01T22:38:48.743Z · score: 9 (6 votes) · EA · GW

A new, Ethereum-based prediction market has launched: Polymarket. It even features a question about Slate Star Codex and the New York Times. I tried it and it was pretty easy to set up. (I have no affiliation with the owner.)

Comment by pablo_stafforini on The Case for Impact Purchase | Part 1 · 2020-06-26T14:00:29.738Z · score: 5 (3 votes) · EA · GW

Are there plans to publish Part II?

Comment by pablo_stafforini on List of EA-related email newsletters · 2020-06-26T12:22:31.670Z · score: 8 (4 votes) · EA · GW

Nick Bostrom now has a newsletter for "rare" updates.

Comment by pablo_stafforini on Modeling the Human Trajectory (Open Philanthropy) · 2020-06-24T18:57:20.066Z · score: 3 (2 votes) · EA · GW

The latest edition of the Alignment Newsletter includes a good summary of Roodman's post, as well as brief comments by Nicholas Joseph and Rohin Shah:

Modeling the Human Trajectory (David Roodman) (summarized by Nicholas): This post analyzes the human trajectory from 10,000 BCE to the present and considers its implications for the future. The metric used for this is Gross World Product (GWP), the sum total of goods and services produced in the world over the course of a year.
Looking at GWP over this long stretch leads to a few interesting conclusions. First, until 1800, most people lived near subsistence levels. This means that growth in GWP was primarily driven by growth in population. Since then population growth has slowed and GWP per capita has increased, leading to our vastly improved quality of life today. Second, an exponential function does not fit the data well at all. In an exponential function, the time for GWP to double would be constant. Instead, GWP seems to be doubling faster, which is better fit by a power law. However, the conclusion of extrapolating this relationship forward is extremely rapid economic growth, approaching infinite GWP as we near the year 2047.
Next, Roodman creates a stochastic model in order to analyze not just the modal prediction, but also get the full distribution over how likely particular outcomes are. By fitting this to only past data, he analyzes how surprising each period of GWP was. This finds that the industrial revolution and the period after it were above the 90th percentile of the model’s distribution, corresponding to surprisingly fast economic growth. Analogously, the past 30 years have seen anomalously low growth, around the 25th percentile. This suggests that the model's stochasticity does not appropriately capture the real world -- while a good model can certainly be "surprised" by high or low growth during one period, it should probably not be consistently surprised in the same direction, as happens here.
In addition to looking at the data empirically, he provides a theoretical model for how this accelerating growth can occur by generalizing a standard economic model. Typically, the economic model assumes technology is a fixed input or has a fixed rate of growth and does not allow for production to be reinvested in technological improvements. Once reinvestment is incorporated into the model, then the economic growth rate accelerates similarly to the historical data.
Nicholas's opinion: I found this paper very interesting and was quite surprised by its results. That said, I remain confused about what conclusions I should draw from it. The power law trend does seem to fit historical data very well, but the past 70 years are fit quite well by an exponential trend. Which one is relevant for predicting the future, if either, is quite unclear to me.
The theoretical model proposed makes more sense to me. If technology is responsible for the growth rate, then reinvesting production in technology will cause the growth rate to be faster. I'd be curious to see data on what fraction of GWP gets reinvested in improved technology and how that lines up with the other trends.
Rohin’s opinion: I enjoyed this post; it gave me a visceral sense for what hyperbolic models with noise look like (see the blog post for this, the summary doesn’t capture it). Overall, I think my takeaway is that the picture used in AI risk of explosive growth is in fact plausible, despite how crazy it initially sounds. Of course, it won’t literally diverge to infinity -- we will eventually hit some sort of limit on growth, even with “just” exponential growth -- but this limit could be quite far beyond what we have achieved so far. See also this related post.
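To make the exponential-versus-power-law contrast in the summary concrete, here is a minimal sketch (not Roodman's own code; all parameter values are made up for illustration, with only the 2047 singularity year taken from the summary) of how the two growth models behave:

```python
import numpy as np

# Two candidate models for gross world product (GWP); parameters are illustrative.
def exponential(t, y0=1.0, r=0.03):
    """Constant-rate growth: the doubling time is fixed at ln(2)/r."""
    return y0 * np.exp(r * t)

def hyperbolic(t, a=1.0, b=1.0, t_star=2047.0):
    """Power-law growth y = a / (t* - t)^b: the doubling time shrinks to zero as t -> t*."""
    return a / (t_star - t) ** b

for year in [1900, 1950, 2000, 2020, 2040]:
    # Instantaneous doubling time is ln(2) / (d ln y / dt) for each model.
    exp_doubling = np.log(2) / 0.03                   # constant, about 23 years
    hyp_doubling = np.log(2) * (2047.0 - year) / 1.0  # shrinks as the singularity year nears
    print(f"{year}: exp GWP={exponential(year - 1900):.2f} (doubling ~{exp_doubling:.0f} yr), "
          f"hyp GWP={hyperbolic(year):.4f} (doubling ~{hyp_doubling:.0f} yr)")
```

The exponential model's doubling time never changes, while the hyperbolic model's shrinks toward zero as the singularity year approaches, which is why a naive extrapolation of the power-law fit yields infinite GWP in finite time.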
Comment by pablo_stafforini on How should we run the EA Forum Prize? · 2020-06-23T14:32:25.533Z · score: 16 (7 votes) · EA · GW

Why don't you conduct an experiment? E.g. you could award prizes only for posts/comments written by users whose usernames start with letters A-L (and whose accounts were created prior to the announcement) and see if you notice any significant difference in the quality of those users' submissions.

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T11:46:10.737Z · score: 23 (9 votes) · EA · GW

Just to say that I appreciate all the "mini literature reviews" you have been posting!

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T11:41:18.430Z · score: 2 (1 votes) · EA · GW

Hi Arden,

Your worries seem sensible, and discussing it under 'building effective altruism' might be the way to go.

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T01:23:25.658Z · score: 55 (26 votes) · EA · GW

Great post, thank you for compiling this list, and especially for the pointers for further reading.

In addition to Tobias's proposed additions, which I endorse, I'd like to suggest protecting effective altruism as a very high priority problem area. Especially in the current political climate, but also in light of base rates from related movements as well as other considerations, I think there's a serious risk (perhaps 15%) that EA will either cease to exist or lose most of its value within the next decade. Reducing such risks is not only obviously important, but also surprisingly neglected. To my knowledge, this issue has only been the primary focus of an EA Forum post by Rebecca Baron, a Leaders' Forum talk by Roxanne Heston, an unpublished document by Kerry Vaughan, and an essay by Leverage Research (no longer online). (Risks to EA are also sometimes discussed tangentially in writings about movement building, but not as a primary focus.)

Comment by pablo_stafforini on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T17:08:25.412Z · score: 5 (3 votes) · EA · GW

The last link under the 'Aging' heading is dead. I think you meant this.

Comment by pablo_stafforini on Assumptions about the far future and cause priority · 2020-06-22T01:33:28.838Z · score: 2 (1 votes) · EA · GW

.

Comment by pablo_stafforini on EA considerations regarding increasing political polarization · 2020-06-20T11:08:23.860Z · score: 28 (12 votes) · EA · GW

In case it is of interest, Gwern provides a good summary of the Cultural Revolution in his review of Frank Dikötter's book, The Cultural Revolution: A People's History, 1962-1976.

Comment by pablo_stafforini on EA considerations regarding increasing political polarization · 2020-06-20T03:11:43.276Z · score: 11 (4 votes) · EA · GW

[meta] Why does the comment count not match the actual number of visible comments? Is this a bug or are some comments being deliberately hidden? As of this writing (and not counting this comment), I can see only one of the supposedly four comments posted.

Comment by pablo_stafforini on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T15:01:47.379Z · score: 22 (11 votes) · EA · GW

Many books are still not available on Library Genesis. Fortunately, a sizeable fraction of those can be "borrowed" for 14 days from the Internet Archive.

Comment by pablo_stafforini on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-12T23:55:38.152Z · score: 11 (7 votes) · EA · GW

A Metaculus question on whether Trump will concede if he loses the election has just been posted:

https://www.metaculus.com/questions/4609/if-president-trump-loses-the-2020-election-will-he-concede/

Comment by pablo_stafforini on Why might one value animals far less than humans? · 2020-06-08T14:44:46.720Z · score: 13 (6 votes) · EA · GW
I think that physical pain is bad, but when considered in isolation, it's not the worst thing that can happen. Suffering includes the experience of anticipation of bad, the memory of it occurring, the appreciation of time and lack of hope, etc.
People would far prefer to have 1 hour of pain and the knowledge that it would be over at that point than have 1 hour of pain but not be sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These seem to significantly change the moral importance of pain, even by orders of magnitude.

It seems this consideration would provide a (pro tanto) reason for valuing nonhumans more than humans. If pain metacognition can reduce the disvalue of suffering, nonhuman animals, who lack such capacities, should be expected to have worse experiences, other things equal.

Comment by pablo_stafforini on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-06-06T13:46:04.724Z · score: 3 (2 votes) · EA · GW

I meant visually pleasing. I agree it sounds good. (Though I feel that when you know the spelling, it becomes harder to appreciate the euphony, given the incongruity between the two.)

Comment by pablo_stafforini on Will protests lead to thousands of coronavirus deaths? · 2020-06-03T21:49:02.147Z · score: 3 (3 votes) · EA · GW
It's quite possible more spread will be caused by the latter

What do you mean by 'quite possible'? And what's your estimate of the minimum ratio of arrests to protesters needed for spread due to arrests to exceed spread due to protests?

Comment by pablo_stafforini on What are the leading critiques of "longtermism" and related concepts · 2020-06-02T13:47:52.738Z · score: 11 (3 votes) · EA · GW

Judging from the comment, I expect the post to be a very valuable summary of existing arguments against longtermism, and am looking forward to reading it. One request: as Jesse Clifton notes, some of the arguments you list apply only to x-risk (a narrower focus than longtermism), and some apply only to AI risk (a narrower focus than x-risk). It would be great if your post could highlight the scope of each argument.

Comment by pablo_stafforini on A cause can be too neglected · 2020-05-09T00:33:49.791Z · score: 2 (1 votes) · EA · GW

I list a couple of possible sources.

Comment by pablo_stafforini on A cause can be too neglected · 2020-05-09T00:32:37.231Z · score: 18 (6 votes) · EA · GW

Caspar Oesterheld makes this point in Complications in evaluating neglectedness:

I think many interventions initially face increasing returns from learning/research, creating economies of scale, specialization within the cause area, etc. For example, in most cause areas, the first $10,000 are probably invested into prioritization, organizing, or (potentially symbolic) interventions that later turn out to be suboptimal.

(I strongly recommend this neglected (!) article.)

Ben Todd makes a related point about charities (rather than causes) in Stop assuming ‘declining returns’ in small charities:

Economies of scale are a force for increasing returns, and they win out while still at a small scale, so the impact of the 5th staff member can easily be greater than the 4th.
Economies of scale are caused by:
1. Gains from specialisation. In a one person organisation, that person has to do everything – marketing, making the product, operations and so on. In a larger organisation, however, you can hire a specialist to do each function, which is more efficient.
2. Fixed costs. Often you have to pay the same amount of money for a service no matter what scale you have e.g. legally registering as an organisation costs about the same amount of time no matter how large you are; an aircraft with 100 passengers requires the same number of pilots as one with 200 passengers. As you become larger, fixed costs become a smaller and smaller fraction of the total.
3. Physical effects. Running an office that’s 2x as large doesn’t cost 2x as much to heat, because the volume increases by the cube of the length, while the surface area only increases by the square of the length. A rule of thumb is that capital costs only increase 50% in order to double capacity.
Comment by pablo_stafforini on The Important/Neglected/Tractable framework needs to be applied with care · 2020-05-09T00:31:37.861Z · score: 0 (0 votes) · EA · GW

[posted in wrong thread]

Comment by pablo_stafforini on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2020-05-08T15:29:01.732Z · score: 4 (2 votes) · EA · GW

See also bmg's LW post, Realism and rationality. Relevant excerpt:

A third point of tension is the community's engagement with normative decision theory research. Different normative decision theories pick out different necessary conditions for an action to be the one that a given person should take, with a focus on how one should respond to uncertainty (rather than on what ends one should pursue).
A typical version of CDT says that the action you should take at a particular point in time is the one that would cause the largest expected increase in value (under some particular framework for evaluating causation). A typical version of EDT says that the action you should take at a particular point in time is the one that would, once you take it, allow you to rationally expect the most value. There are also alternative versions of these theories -- for instance, versions using risk-weighted expected value maximization or the criterion of stochastic dominance -- that break from the use of pure expected value.
I've pretty frequently seen it argued within the community (e.g. in the papers “Cheating Death in Damascus” and “Functional Decision Theory”) that CDT and EDT are not “correct" and that some other new theory such as functional decision theory is. But if anti-realism is true, then no decision theory is correct.
Eliezer Yudkowsky's influential early writing on decision theory seems to me to take an anti-realist stance. It suggests that we can only ask meaningful questions about the effects and correlates of decisions. For example, in the context of the Newcomb thought experiment, we can ask whether one-boxing is correlated with winning more money. But, it suggests, we cannot take a step further and ask what these effects and correlations imply about what it is "reasonable" for an agent to do (i.e. what they should do). This question -- the one that normative decision theory research, as I understand it, is generally about -- is seemingly dismissed as vacuous.
If this apparently anti-realist stance is widely held, then I don't understand why the community engages so heavily with normative decision theory research or why it takes part in discussions about which decision theory is "correct." It strikes me a bit like an atheist enthusiastically following theological debates about which god is the true god. But I'm mostly just confused here.
Comment by pablo_stafforini on List of EA-related email newsletters · 2020-05-01T13:07:07.722Z · score: 7 (4 votes) · EA · GW

Two more newsletters:

Comment by pablo_stafforini on "Music we lack the ears to hear" · 2020-04-20T02:35:28.468Z · score: 4 (3 votes) · EA · GW

It's nice to know the origin of that phrase.

I haven't read Lem's novel, but I very much enjoyed Andrei Tarkovsky's film adaptation. (I agree with Tyler Cowen that "all Tarkovsky movies are visually and conceptually brilliant".)

Comment by pablo_stafforini on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T14:06:15.984Z · score: 6 (3 votes) · EA · GW

Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.

Comment by pablo_stafforini on Effects of anti-aging research on the long-term future · 2020-02-28T02:29:46.830Z · score: 3 (2 votes) · EA · GW

I'm also interested.

Anders Sandberg discusses the issue a bit in one of his conversations with Rob Wiblin for the 80k Podcast.

Comment by pablo_stafforini on Why SENS makes sense · 2020-02-22T21:01:46.434Z · score: 19 (10 votes) · EA · GW
I once read a comment on the effective altruism subreddit that tried to explain why aging didn't get much attention in EA despite being so important, and I thought it was quite enlightening.

For background, here's the comment I wrote:

Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.
Comment by pablo_stafforini on Cost-Effectiveness of Aging Research · 2020-02-21T13:23:47.445Z · score: 2 (1 votes) · EA · GW
Crossposted from Hourglass Magazine

The entire "magazine" seems to have gone offline. SAD!

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-20T18:46:10.324Z · score: 5 (3 votes) · EA · GW

Thanks to your comment, I can now endorse what you said as a more accurate and nuanced version of the position my previous comment tried to articulate. Agreed 100%.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T13:03:35.630Z · score: 3 (2 votes) · EA · GW

Yeah, see my reply to Tobias.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T13:02:03.734Z · score: 11 (5 votes) · EA · GW
I suspect that these results are very sensitive to model assumptions, such as tactical voting behaviour. But it would be interesting to see more work on VSE.

I agree with this. An approach I find promising is that of Nicolaus Tideman & Florenz Plassmann. In one study, the authors consider several different statistical models, use them to simulate actual elections, and rank the models by how well they approximate actual results. Then, in a subsequent study, the authors use the top-ranking model from their previous study to evaluate a dozen or so alternative voting rules, finding that plurality, anti-plurality, and Bucklin perform worst. As far as I'm aware, this is the only example of an attempt to assess voting rules by conducting simulations with a model that has been pre-fitted to actual election data. I believe that extending this approach may be among the most impactful research within this cause area.

Comment by pablo_stafforini on Thoughts on electoral reform · 2020-02-19T02:13:12.351Z · score: 31 (14 votes) · EA · GW

Thanks for writing this—I think electoral reform is an interesting and important cause area.

[Approval voting] fails the later-no-harm criterion

All voting systems violate intuitively desirable conditions, so noting that some system violates some condition is in itself no reason to favor other systems. One needs to look at the full picture, see what conditions are violated by what systems, and pick the system that minimizes weight-adjusted violations. (There is a clear parallel here between voting theory and population ethics: impossibility theorems have demonstrated in both fields that there exists no voting rule or population axiology that satisfies all intuitively plausible desiderata, so violation of a condition can't be adduced as a reason for rejecting the rule or axiology that violates it.)

But there is a much better approach, namely, to assess different systems by their "voter satisfaction efficiency" (VSE). Instead of relying on adequacy conditions, this approach considers the preferences that the electorate has for rival candidates and deals with them using the apparatus of expected utility theory. Each candidate is scored by the degree to which they satisfy the preferences of each voter, and then rival voting systems are scored by their probability of electing different candidates. Monte Carlo simulations independently performed by Warren Smith, Jameson Quinn and others generally find that approval voting has higher VSE than instant-runoff voting, and that both approval voting and instant-runoff voting have much higher VSE than plurality voting.
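For readers who want a feel for how such VSE numbers are produced, here is a deliberately simplified sketch (impartial-culture utilities, fully sincere voters, no strategic behaviour), so it illustrates the method rather than reproducing Smith's or Quinn's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vse(n_elections=2000, n_voters=99, n_candidates=5):
    """Toy voter-satisfaction-efficiency (VSE) estimate under an 'impartial culture'
    model: every voter's utility for every candidate is an independent normal draw.
    Voters are sincere (plurality: vote for favourite; approval: approve candidates
    above one's own mean utility). Real VSE studies use much richer voter models."""
    scores = {"plurality": [], "approval": []}
    for _ in range(n_elections):
        u = rng.normal(size=(n_voters, n_candidates))   # utility of each candidate to each voter
        totals = u.sum(axis=0)                          # social utility of each candidate
        best, random_avg = totals.max(), totals.mean()

        plurality_winner = np.bincount(u.argmax(axis=1), minlength=n_candidates).argmax()
        approval_winner = (u > u.mean(axis=1, keepdims=True)).sum(axis=0).argmax()

        for name, winner in (("plurality", plurality_winner), ("approval", approval_winner)):
            # VSE for this election: 1 = best candidate wins, 0 = no better than a random pick.
            scores[name].append((totals[winner] - random_avg) / (best - random_avg))
    return {name: float(np.mean(vals)) for name, vals in scores.items()}

print(simulate_vse())
```

In runs of this sketch approval tends to come out ahead of plurality, consistent with the published results, though the absolute numbers mean little given the crude voter model; the real studies vary the voter distribution, number of candidates, and degree of strategic voting to test how robust the ranking is.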

Given these results, I think the priority for EAs is to support whichever alternatives to plurality voting are most viable in a particular jurisdiction, rather than obsess over which of these alternatives to plurality is the absolute best. Of course, I also think it makes sense to continue to research the field, and especially refine the models used to compute VSE. What EAs definitely shouldn't do, in my opinion, is to spend considerable resources discrediting those alternatives to one's own preferred system, as FairVote has repeatedly done with respect to approval voting. Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).

(In case it isn't obvious, I'm definitely not saying that you have done this in your essay; I'm rather highlighting a serious failure mode I see in the "voting reform" community that I believe we should strive to avoid.)

Comment by pablo_stafforini on Empirical data on value drift · 2020-02-17T14:00:32.977Z · score: 2 (1 votes) · EA · GW
a quick look would suggest ~75% moved from 50% to 10%

So, to confirm, are you saying that maybe 5 out of the 7 people who moved out of the 50% category moved into the 10% category? I think it's important to get clarity on this, since until encountering this comment I was interpreting your post (perhaps unreasonably) as saying that those 7 people had left the EA community entirely. If in fact only a couple of people in that class left the community, out of a total of 16, that's a much lower rate of drift than I was assuming, and more in line with anonymous's analysis of value drift in the original CEA team.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-02-16T22:35:11.596Z · score: 31 (13 votes) · EA · GW
It's interesting to note that I got downvoted for giving excellent sources, while you got upvoted for reading the articles and commenting. Basically I am outgroup/outcaste in EA.

I'm not sure I'm the right person to comment on this, given that I'm one of the parties involved, but I'll provide my perspective here anyway in case it is of any help or interest.

I don't think you are characterizing this exchange or the reasons behind the pattern of votes accurately. Bruno asked you to provide a source in support of the following claim, which you made four comments above:

One child policy had no effect on China's population size. It was their widespread education pre-1979 that reduced fertility.

In response to that request, you provided two sources. I looked at them and found that both failed to support the assertion that "It was [China's] widespread education pre-1979 that reduced fertility", and that one directly contradicted it.

I didn't downvote your comment, but I don't think it's unreasonable to expect some people to downvote it in light of this revelation. In fact, on reflection I'm inclined to favor a norm of downvoting comments that incorrectly claim that a scholarly source supports some proposition, since such a norm would incentivize epistemic hygiene and reduce the incidence of information cascades. I do agree with you that ingroup/outgroup dynamics sometimes explain observed behavior in the EA community, but I don't think this is one of those cases. As one datapoint confirming this, consider that a month or two ago, when I pointed out that someone had mischaracterized the main theses of a paper, that person's comment was heavily downvoted, despite this user being a regular commenter and not someone (I think) generally perceived to be an "outsider".

Moving to the object-level, in your recent comment you appear to have modified your original contention. Whereas before you stated that "widespread education" was the factor explaining China's reduced fertility, now you state that education was one factor among many. Although this difference may seem minor, in the present context it is crucial, because both in comments to this post and elsewhere in the Forum you have argued that EAs should prioritize education over growth. Yet if education was only one of several factors behind the fertility reduction in China, your position cannot derive any support from the Chinese experience.

Comment by pablo_stafforini on Growth and the case against randomista development · 2020-02-15T18:26:41.001Z · score: 27 (10 votes) · EA · GW

I actually took the time to look at those two sources, and as far as I can tell they provide no support whatsoever for your claim that "It was [China's] widespread education pre-1979 that reduced fertility." The word 'education' occurs exactly once in the first article, and in a sentence that doesn't make any claims about education reducing fertility. As for the second article, to the extent that it attributes the fertility decline to anything, it attributes it not to "education", but to economic development (pp. 158-159):

The third fatal problem with the “400 million births prevented” claim is that it totally ignores the most significant source of fertility decline worldwide: economic development... China’s rapid economic development since 1980 deserves the lion’s share of the credit for the [fertility decline].
Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T16:45:40.353Z · score: 2 (1 votes) · EA · GW

I just thought it would be valuable to recalculate the estimated rates of attrition with this new data, though I think it's totally fine for you to deprioritize this.

Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T14:41:36.087Z · score: 4 (2 votes) · EA · GW
This is more accurate than email tracking in that it captures more people (such as those who didn’t give an email or those who changed emails), but less accurate in that it is possible that people who state they joined EA earlier could still show up just on later surveys and offset people who dropped off, making the retention rate appear higher than it actually is.

Why should the possibility of early EAs failing to take early surveys inflate the retention rate more than the possibility of early EAs failing to take later surveys deflates it? Shouldn't we expect these two effects to roughly cancel each other out? If anything, I would expect EAs in a given cohort to be slightly less willing to participate in the EA survey with each successive year, since completing the survey becomes arguably more tedious the more you do it. If so, this methodology should slightly underestimate, rather than overestimate, the true retention rate. Apologies if I'm misunderstanding the reasoning here.

Comment by pablo_stafforini on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2020-02-15T13:18:18.411Z · score: 2 (1 votes) · EA · GW

Are you planning to update the analysis with data from the 2019 survey?

Comment by pablo_stafforini on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-14T23:36:50.501Z · score: 3 (2 votes) · EA · GW

Note that there is now a Metaculus prize for questions and comments related to the coronavirus outbreak. Here you can see the existing questions in this series.

Comment by pablo_stafforini on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T00:49:16.217Z · score: 8 (4 votes) · EA · GW

I think he means that they do have to disclose if they're romantically involved. Perhaps replace 'or is' with 'nor is' to make it clearer.

Comment by pablo_stafforini on Fireside Chat with Philip Tetlock · 2020-02-05T13:40:15.342Z · score: 8 (5 votes) · EA · GW

Just wanted to say that I'm really glad all these talks are being transcribed!

Comment by pablo_stafforini on Announcing the Bentham Prize · 2020-02-04T13:28:38.153Z · score: 13 (5 votes) · EA · GW

First round of prizes announced. Congratulations to user haven and to our very own AABoyles and PeterHurford!

Comment by pablo_stafforini on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T02:50:51.959Z · score: 6 (3 votes) · EA · GW

Tyler Cowen has written about this in his post "A Bet is a Tax on Bullshit".

This doesn't affect your point, but I just wanted to note that the post—including the wonderful title—was written by Alex Tabarrok.