Posts

[link] Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z · score: 42 (14 votes)
[link] How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z · score: 15 (8 votes)
A bunch of new GPI papers 2019-09-25T13:32:37.768Z · score: 98 (36 votes)
[link] Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z · score: 45 (15 votes)
[link] 'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
[link] The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)

Comments

Comment by pablo_stafforini on Conditional interests, asymmetries and EA priorities · 2019-10-22T20:10:01.573Z · score: 2 (1 votes) · EA · GW

Interesting example. I have never taken such pills, but if they simply intensify the ordinary experience of sleepiness, I'd say that the reason I (as a CU) don't try to stay awake is that I can't dissociate the pleasantness of falling asleep from actually falling asleep: if I were to try to stay awake, I would also cease to have a pleasant experience. (If anyone knows of an effective dissociative technique, please send it over to Harri Besceli, who once famously remarked that "falling asleep is the highlight of my day.")

More generally, I think cases of this sort have rough counterparts for negative experience, e.g. the act of scratching an itch, or of playing with a loose tooth, despite the concomitant pain induced by those activities. I think such cases are sufficiently marginal, and susceptible to alternative explanations, that they do not pose a serious problem to either (1) or (2).

Comment by pablo_stafforini on Conditional interests, asymmetries and EA priorities · 2019-10-22T17:35:18.118Z · score: 2 (1 votes) · EA · GW
I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

The relevant comparison, I think, is between (1) someone who experiences suffering and wants this suffering to stop and (2) someone who experiences happiness and wants this happiness not to stop. It seems that you and Michael think that one can plausibly deny only (2), but I just don't see why that is so, especially if one focuses on comparisons where the positive and negative experiences are of the same intensity. Like Paul, I think the two scenarios are symmetrical.

[EDIT: I hadn't seen Paul's reply when I first posted my comment.]

Comment by pablo_stafforini on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-19T02:51:02.777Z · score: 8 (4 votes) · EA · GW

The latest Alignment Newsletter (published today) includes a review of Russell's book by Rohin Shah. Perhaps he can publish it on Amazon and/or Goodreads?

Comment by pablo_stafforini on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-18T12:03:42.676Z · score: 8 (5 votes) · EA · GW

Pinker lists ideology as one of his five "inner demons" in The Better Angels of our Nature, together with predatory violence, dominance, sadism and revenge.

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-18T11:50:21.766Z · score: 4 (2 votes) · EA · GW

Thank you for those references!

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-08T18:30:02.131Z · score: 7 (5 votes) · EA · GW
ALSO, UN was created post WW2. Maybe we only have appetite for major international cooperation after nasty wars?

This seems like a point worth highlighting, especially vis-à-vis Bostrom's own views about the importance of global governance in 'The Vulnerable World Hypothesis'. It's also worth noting that the League of Nations was created in the aftermath of WW1.

Comment by pablo_stafforini on JP's Shortform · 2019-10-08T11:24:17.255Z · score: 5 (3 votes) · EA · GW
Although I do think it's possible the Forum shouldn't let you change away from those defaults.

I am in favor of these defaults and also in favor of disallowing people from changing them. I know of two people on LW who have admitted to strong-upvoting their own comments, and my sense is that this behavior isn't that uncommon (to give a concrete estimate: I'd guess about 10% of active users do this on a regular basis). Moreover, some of the people who are initially disinclined to upvote themselves might start to do so if they suspect others are, both because the perception that a type of behavior is normal makes people more willing to engage in it, and because the norm to exercise restraint in using the upvote option may seem unfair when others are believed not to be abiding by it. This dynamic may eventually cause a much larger fraction of users to regularly self-upvote.

So I think these are pretty strong reasons for disallowing that option. And I don't see any strong reasons for the opposite view.

Comment by pablo_stafforini on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-10-08T10:55:34.452Z · score: 5 (3 votes) · EA · GW

Very interesting comment!

To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car.

I don't think this defence works, because some of your current preferences are manifestly about future events. Insisting that all these preferences are ultimately about the most immediate causal antecedent (1) misdescribes our preferences and (2) lacks a sound theoretical justification. You may think that Parfit's arguments against S provide such a justification, but this isn't so. One can accept Parfit's criticism and reject the view that what is rational for an agent is to maximize their lifetime wellbeing, accepting instead a view on which it is rational for the agent to satisfy their present desires (which, incidentally, is not Parfit's view). This in no way rules out the possibility that some of these present desires are aimed at future events. So the possibility that you may be clueless about which course of action satisfies those future-oriented desires remains.

Comment by pablo_stafforini on JP's Shortform · 2019-10-08T09:57:07.125Z · score: 8 (2 votes) · EA · GW

On the whole, I really like the search engine. But one small bug you may want to fix is that occasionally the wrong results appear under 'Users'. For example, if you type 'Will MacAskill', the three results that show up are posts where the name 'Will MacAskill' appears in the title, rather than the user Will MacAskill.

EDIT: Mmh, this appears to happen because a trackback to Luke Muehlhauser's post, 'Will MacAskill on Normative Uncertainty', is being categorized as the name of a user. So, not a bug with the search engine as such, but still something that the EA Forum tech team may want to fix.

Comment by pablo_stafforini on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T09:29:26.285Z · score: 25 (9 votes) · EA · GW

Thank you. Your comment has caused me to change my mind somewhat. In particular, I am now inclined to believe that getting people to actually read the material is, for a significant fraction of these people, a more serious challenge than I previously assumed. And if CFAR's goal is to selectively target folks concerned with x-risk, the benefits of ensuring that this small, select group learns the material well may justify the workshop format, with its associated costs.

I would still like to see more empirical research conducted on this, so that decisions that involve the allocation of hundreds of thousands of EA dollars per year rest on firmer ground than speculative reasoning. At the current margin, I'd be surprised if a dollar given to CFAR to do object-level work achieves more than a dollar spent in uncovering "organizational crucial considerations"—that is, information with the potential to induce a major shift in the organization's direction or priority. (Note that I think this is true of some other EA orgs, too. For example, I believe that 80k should be using randomization to test the impact of their coaching sessions.)

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-07T23:37:28.699Z · score: 7 (4 votes) · EA · GW

Personally, I don't find that skeptical comments like Max's discourage me from ideating. And the suggestion to keep ideation and evaluation separate might discourage the latter, since it's actually not obvious how to operationalize 'keeping separate'.

Comment by pablo_stafforini on What actions would obviously decrease x-risk? · 2019-10-07T23:30:11.320Z · score: 7 (5 votes) · EA · GW

In this talk on 'Crucial considerations and wise philanthropy', Nick Bostrom tentatively mentions some actions that appear to be robustly x-risk reducing, including promoting international peace and cooperation, growing the effective altruism movement, and working on solutions to the control problem.

Comment by pablo_stafforini on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T23:22:10.572Z · score: 18 (9 votes) · EA · GW

Ah, but should you familiarize yourself with the literature on familiarizing yourself with the literature before writing an EA Forum post?

Comment by pablo_stafforini on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-07T12:57:41.884Z · score: 28 (11 votes) · EA · GW

I agree that these are pretty valuable concepts to learn. At the same time, I also believe that these concepts can be learned easily by studying the corresponding written materials. At least, that's how I learned them, and I don't think I'm different from the average EA in this respect.

But I also think we shouldn't be speculating about this issue, given its centrality to CFAR's approach. Why not give CFAR a few tens of thousands of dollars to (1) create engaging online content that explains the concepts taught at their workshops and (2) run a subsequent RCT to test whether people learn these concepts better by attending a workshop than by exposing themselves to that content?

Comment by Pablo_Stafforini on [deleted post] 2019-10-05T23:49:23.833Z
I'm not planning on engaging further with the cluelessness literature because what I've seen makes me think GPI is off track.

I think your dismissal is premature. For one thing, the "debugging" approach you favor has been discussed by Will MacAskill, a Senior Research Fellow at GPI:

If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.

For another, the cluelessness literature isn't exhausted by GPI's contributions to it, and it includes other, more extensive discussions of your favorite approach, notably by Brian Tomasik:

Focusing on the very robust projects often amounts to punting the hard questions to future generations who will be better equipped to solve them.
Comment by pablo_stafforini on Candy for Nets · 2019-09-29T13:25:10.967Z · score: 28 (14 votes) · EA · GW
Initially she didn't want to donate the whole amount, but wanted to set aside half to buy more candy so she could do this again.

Excellent reasoning!

Comment by pablo_stafforini on Is there a good place to find the "what we know so far" of the EA movement? · 2019-09-29T10:22:05.978Z · score: 65 (22 votes) · EA · GW

Hi, and welcome!

What I know about it can be summarized as "EA is a group of people that use science to figure out which charities are the most cost-effective".

This summary would describe the "effective giving" movement. EA is not restricted to cost-effective charitable donations, but extends to all ways of doing good. In other words, EA isn't only cause neutral, but also means neutral; it prejudges neither which causes are best nor which means should be pursued to promote those causes.

Where can I go to get caught up on what the EA movement has "figured out so far"? Is there something like an EA equivalent of the LessWrong sequences?

There is no equivalent of the "sequences". A good introduction is Will MacAskill's Doing Good Better (disclaimer: I helped Will with some of the research). Then you may want to take a look at 80,000 Hours' List of the most urgent global issues and follow the links to the relevant problems. In addition, at the end of this comment I list a bunch of posts that I believe exemplify some of the best writing of the EA blogosphere. Of course, this is just my own opinion, and others may question some of the inclusions or omissions. [Edit: You may also want to check out the EA Handbook. I didn't mention it initially because I'm only familiar with the 1st edition, and the current version has been substantially revised.]

Introductions to concepts that are important to the EA movement.

See this list of concepts put together by the Centre for Effective Altruism and this other list by Peter McIntyre. [Edit: See also 80,000 Hours' key ideas, which I hadn't noticed until both Kevin and Soren mentioned it.]

Insights concerning how we should measure and think about how "effective" a charity is.

There's a lot written on this. Perhaps see 80,000 Hours' How to compare different global problems in terms of impact. Note, again, that this is not restricted to charities, but is about problems/causes.

An overview of the world's biggest problems (according to the EA movement) or maybe the problems with the best ratio of marginal improvement to marginal effort.

A while ago I compiled a master list of all existing lists of important problems; you can find it here.

----

Some recommended blog posts

Scott Alexander Ethics offsets

Scott Alexander Nobody is perfect, everything is commensurable

Scott Alexander No time like the present for AI safety work

Nick Beckstead A proposed adjustment to the astronomical waste argument

Nick Bostrom 3 ways to advance science

Paul Christiano An estimate of the expected influence of becoming a politician

Paul Christiano Astronomical waste

Paul Christiano Hyperbolic growth

Paul Christiano Influencing the far future

Paul Christiano Neglectedness and impact

Paul Christiano On redistribution

Paul Christiano Replaceability

Paul Christiano The best reason to give later

Paul Christiano The efficiency of modern philanthropy

Paul Christiano Three impacts of machine intelligence

Owen Cotton-Barratt How valuable is movement growth?

Holly Elmore The remembering self needs to get real about the experiencing self

Holly Elmore Humility

Ben Garfinkel How sure are we about this AI stuff?

Katja Grace Cause Prioritization Research

Katja Grace Estimation Is the Best We Have

Robin Hanson Marginal charity

Robin Hanson Parable of the multiplier hole

Holden Karnofsky Hits-Based Giving

Holden Karnofsky Passive vs. rational vs. quantified

Holden Karnofsky Sequence thinking vs. cluster thinking

Holden Karnofsky Your Dollar Goes Further Overseas

Holden Karnofsky Worldview diversification

Jeff Kaufman Altruism isn’t about sacrifice

Jeff Kaufman The Unintuitive Power Laws of Giving

Greg Lewis Beware Surprising and Suspicious Convergence

Will MacAskill Are we living at the most influential time in history?

Richard Ngo Disentangling arguments for the importance of AI safety

Toby Ord The Moral Imperative Towards Cost-Effectiveness

Carl Shulman Are pain and pleasure equally energy efficient?

Carl Shulman How hard is it to become Prime Minister of the United Kingdom

Carl Shulman Flow-through effects of saving a life through the ages on life-years lived

Carl Shulman & Nick Beckstead A Long-run Perspective on Strategic Cause Selection and Philanthropy

Jonah Sinick Many Weak Arguments vs. One Relatively Strong Argument

Scott Siskind Dead children currency

Scott Siskind Efficient charity

Brian Tomasik Charity Cost Effectiveness in an Uncertain World

Julia Wise Cheerfully

Comment by pablo_stafforini on UK policy and politics careers · 2019-09-29T07:50:33.735Z · score: 3 (2 votes) · EA · GW

The image under 'Mapping of jobs in the UK civil service that relate to Animal Welfare' is not being displayed.

Comment by pablo_stafforini on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-28T08:01:47.260Z · score: 8 (4 votes) · EA · GW

For her calculations to be correct, present world GDP, absent growth, would have to be USD . Back in 1996, when the paper was published, world GDP was over USD . Unclear what's going on here.

Comment by pablo_stafforini on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-27T13:16:02.851Z · score: 12 (4 votes) · EA · GW

As a precedent, the micronation of Liberland was established in 2015 in Gornja Siga, an area that was terra nullius until then.

That a territory is unclaimed by existing sovereign states, however, seems like a poor reason for establishing a charter city there. In a recent conversation with Tyler Cowen, urban planner Alain Bertaud noted that

Cities need a good location. This is a debate I had with Paul Romer when he was interested in charter cities. He had decided that he could create 50 charter cities around the world. And my reaction — maybe I’m wrong — but my reaction is that there are not 50 very good locations for cities around the world. There are not many left... cities like Singapore, Malacca, Mumbai are there for a good reason. And I don’t think there’s that many very good locations.
Comment by pablo_stafforini on Are we living at the most influential time in history? · 2019-09-27T07:50:29.031Z · score: 9 (6 votes) · EA · GW

Kelsey Piper has just published a Vox article, 'Is this the most important century in human history?', discussing this post.

Comment by pablo_stafforini on Are we living at the most influential time in history? · 2019-09-26T19:10:59.873Z · score: 3 (3 votes) · EA · GW

I just realized that there are actually two separate reasons for thinking that the hingiest times in history were periods of population bottlenecks. First, because tiny populations are much more vulnerable to extinction than much larger populations are. Second, because in smaller populations an individual person has a larger share of influence than they do in larger populations, holding total influence constant.
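In rough symbols (my own illustrative formalization, not anything from Will's post): if $H_t$ is the total influence up for grabs at time $t$ and $N_t$ is the population, then per-capita influence is

\[ h_t = \frac{H_t}{N_t}, \]

and population bottlenecks score high on $h_t$ twice over: an elevated numerator (survival itself hangs in the balance) and an unusually small denominator.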

Compare population bottlenecks to one of Will's examples:

It could be the case [...] that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century.

Unlike the 17th century, which is hingier only because comparatively fewer people exist, periods of population bottlenecks are hingier both because of their unusually low population and because they are "a bigger deal" than other periods.

Comment by pablo_stafforini on A bunch of new GPI papers · 2019-09-26T10:55:19.513Z · score: 4 (3 votes) · EA · GW

The publication status seems to differ from paper to paper. And there appears to be no quick way of determining the status of all papers at once; you need to run a Google search for each individual paper.

Comment by pablo_stafforini on A bunch of new GPI papers · 2019-09-26T10:48:01.183Z · score: 5 (4 votes) · EA · GW

New paper published:

Mogensen, Staking our future: deontic long-termism and the non-identity problem

Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument may be used to derive deontic longtermism from axiological longtermism, where deontic longtermism holds that in a wide class of decision contexts, the option one ought to choose is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. This argument appeals to the Stakes Principle: when the axiological stakes are high, non-consequentialist constraints and prerogatives tend to be insignificant in comparison, so that what one ought to do is simply whichever option is best. I argue that there are strong grounds on which to reject the Stakes Principle. Furthermore, by reflecting on the Non-Identity Problem, I argue that there are plausible grounds for denying the existence of a sound argument from axiological longtermism to deontic longtermism insofar as we are concerned with ways of improving the value of the future of the kind that are focal in Greaves and MacAskill's presentation.
Comment by pablo_stafforini on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-09-26T08:50:06.169Z · score: 8 (6 votes) · EA · GW

Robin Hanson, Paul Christiano, and others have made similar points in the past.

Hanson (2014):

This post describes attempts to help the future as speculative and non-robust in contrast to helping people today. But it doesn’t at all address the very robust strategy of simply saving resources for use in the future. That may not be the best strategy, but surely one can’t complain about its robustness.

Christiano (2014):

There is some debate about this question today, of whether there are currently good opportunities to reduce existential risk. The general consensus appears to be that serious extinction risks are much more likely to exist in the future, and it is ambiguous whether we can do anything productive about them today.

However, there does appear to be a reasonable chance that such opportunities will exist in the future, with significant rather than tiny impacts. Even if we don’t do any work to identify them, the technological and social situation will change in unpredictable ways. Even foreseeable technological developments over the coming centuries present plausible extinction risks. If nothing else, there seems to be a good chance that the existence of machine intelligence will provide compelling opportunities to have a long-term impact unrelated to the usual conception of existential risk (this will be the topic of a future post).

If we believe this argument, then we can simply save money (and build other forms of capacity) until such an opportunity arises.

Comment by pablo_stafforini on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-09-25T12:40:03.071Z · score: 21 (11 votes) · EA · GW

Mogensen writes (p. 20):

We might be especially interested in assessing acts that are directly aimed at improving the long-run future of Earth-originating civilization...These might include efforts to reduce the risk of near-term extinction for our species: for example, by spreading awareness about dangers posed by synthetic biology or artificial intelligence.
The problem is that we do not have good evidence of the efficacy of such interventions in achieving their ultimate aims. Nor is such evidence in the offing. The idea that the future state of human civilization could be deliberately shaped for the better arguably did not take hold before the work of Enlightenment thinkers like Condorcet (1822) and Godwin (1793). Unfolding over timescales that defy our ability to make observations, efforts to alter the long-run trajectory of Earth-originating civilization therefore resist evidence-based assessment, forcing us to fall back on intuitive conjectures whose track record in domains that are amenable to evidence-based assessment is demonstrably poor (Hurford 2013). This is not a case where it can be reasonably claimed that there is good evidence, readily available, to constrain our decision making.

These concerns are forceful, but don't seem to generalize to all intervention types aimed at improving the long-term future. If one believes that the readily available evidence is insufficient to constrain our decision making, one still can accumulate resources to be disbursed at a later time when good enough evidence emerges. Although we may at present be radically uncertain about the sign and the magnitude of most far-future interventions, the intervention of accumulating resources for future disbursal does not itself appear to be subject to such radical uncertainty.

Comment by pablo_stafforini on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-23T13:49:56.735Z · score: 8 (6 votes) · EA · GW

If the OP (arikr) can't fix the problem, here's a possible workaround:

1. Open the form used to collect the responses from Google Drive.

2. Create a spreadsheet to store the collected responses, by clicking on the green icon to the left of the three vertical dots.

3. Generate a public link to this spreadsheet, by clicking on the green button on the top right, then on 'Get shareable link' on the top right of the pop-up window.

4. Share this link with us.

Comment by pablo_stafforini on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-23T13:33:41.545Z · score: 10 (7 votes) · EA · GW

There are now 120 responses, but only the first 100 can be accessed from the URL above. The remaining 20 should be accessible by following the "Other (20)" link at the bottom of the form, but the link appears to be broken (a friend of mine also reports being unable to open it, so I conclude it's not a problem specific to my setup).

Comment by pablo_stafforini on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-15T20:22:00.631Z · score: 31 (16 votes) · EA · GW

I think there are more than "one or two" interesting things there.

Comment by pablo_stafforini on Are we living at the most influential time in history? · 2019-09-03T13:05:57.370Z · score: 24 (14 votes) · EA · GW
The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.

In his excellent Charity Cost Effectiveness in an Uncertain World, first published in 2013, Brian Tomasik calls this approach 'Punting to the Future'. Unless there are strong reasons for introducing a new label, I suggest sticking to Brian's original name, both to avoid unnecessary terminological profusion and to credit those who pioneered discussion of this idea.

Comment by pablo_stafforini on Are we living at the most influential time in history? · 2019-09-03T10:06:56.469Z · score: 34 (15 votes) · EA · GW

As a side note, Derek Parfit was an early advocate of what you call the 'Hinge of History Hypothesis'. He even uses the expression 'hinge of history' in the following quote (perhaps that's the inspiration for your label):

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy. (On What Matters, vol. 2, Oxford, 2011, p. 616)

Interestingly, he had expressed similar views already in 1984, though back then he didn't articulate why he believed that the present time is uniquely important:

the part of our moral theory... that covers how we affect future generations... is the most important part of our moral theory, since the next few centuries will be the most important in human history. (Reasons and Persons, Oxford, 1984, p. 351)
Comment by pablo_stafforini on Are we living at the most influential time in history? · 2019-09-03T09:34:49.456Z · score: 17 (10 votes) · EA · GW

I liked this post. One comment:

Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.

I don't think this necessarily follows. If the present generation can reduce the risk of extinction for all future generations, the present value of extinction reduction may still be high enough to vindicate strong longtermism. For example, suppose that each century will be exposed to a 2% constant risk of extinction, and that we can bring that risk down to 1% by devoting sufficient resources to extinction risk reduction. Assuming a stable population of 10 billion, then thanks to our efforts an additional 500 billion lives will exist in expectation, and most of these lives will exist more than 10,000 years from now. Relaxing the stable population assumption strengthens this conclusion.
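To spell out the arithmetic behind these figures (a sketch on my reading of the example, counting lives per century survived, with a stable population $N$ of 10 billion and constant per-century extinction risk $r$):

\[ \mathbb{E}[\text{future lives}] = N \sum_{t=1}^{\infty} (1-r)^{t} = N \cdot \frac{1-r}{r} \]

With $r = 0.02$ this gives $49N \approx 490$ billion expected lives; with $r = 0.01$, $99N \approx 990$ billion. The difference is $50N = 500$ billion additional lives in expectation, and because the survival curve with $r = 0.01$ decays slowly, most of that difference accrues after the first hundred centuries, i.e. more than 10,000 years from now.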

Comment by pablo_stafforini on Ask Me Anything! · 2019-08-21T15:04:40.813Z · score: 20 (9 votes) · EA · GW

I am reminded of the story where Victor Hugo, who was away from Paris when Les Misérables was first published, wrote his editor a letter inquiring about the sales of his much-anticipated novel. The letter contained only one character: ?

A few days later, the reply arrived. It was equally brief: !

Les Misérables was an immediate best-seller.

(Unfortunately, the story is likely apocryphal.)

Comment by pablo_stafforini on Ask Me Anything! · 2019-08-19T11:16:06.393Z · score: 15 (9 votes) · EA · GW

.

Comment by pablo_stafforini on Apology · 2019-03-24T22:05:18.230Z · score: 27 (11 votes) · EA · GW
There are no Real Apologies, it is naive to think otherwise and toxic to demand otherwise. Of course he is acknowledging wrongdoing, and he is acknowledging wrongdoing because he is being pressured to acknowledge wrongdoing.

What are you talking about? There's a clear difference between apologizing because one sincerely believes one acted wrongly, and apologizing only because one thinks the consequences will be graver if one fails to apologize. I am puzzled by your apparent failure to recognize this difference.

Comment by pablo_stafforini on Apology · 2019-03-24T20:11:55.687Z · score: 18 (10 votes) · EA · GW

Thanks for agreeing to state your credences explicitly (and strongly upvoted for that reason).

I thought it was important to get more precision given the evidence showing that qualifiers such as 'possible', 'likely', etc. are compatible with a wide range of values. Before your subsequent clarification, I interpreted your 'quite plausible' as expressing a probability of ~60%.

Comment by pablo_stafforini on Apology · 2019-03-24T19:16:07.822Z · score: 18 (8 votes) · EA · GW

"Quite plausible"? What's your actual credence?

Comment by pablo_stafforini on Apology · 2019-03-24T13:14:08.587Z · score: 24 (18 votes) · EA · GW
it is not at all clear to me that the accusations that are being discussed here are separate from the accusations that appear to have caused his apology. I agree that if they were from separate disconnected communities, then that would be significant evidence

In his apology, Jacy says that he "know[s] very little of the details of these allegations." But he clearly knows the Brown allegations very well. So even ignoring the other evidence cited by Halstead, the allegations for which he is apologizing clearly can't include the Brown allegations.

EDIT: I now see it's also possible that Jacy was presented with so little information that he wouldn't be able to determine if the allegations CEA was concerned with included the Brown allegations, however well he knew the latter. My reasoning above ignores this possibility. Personally, I think the evidence Halstead offered is pretty conclusive, so I don't think this makes a practical difference, but it still seemed something worth mentioning.

Comment by pablo_stafforini on Candidate Scoring System, First Release · 2019-03-13T15:59:59.399Z · score: 4 (4 votes) · EA · GW

Thanks for doing this. Maybe add Andrew Yang? From a recent Vox article by Dylan Matthews:

Yang, a startup veteran and founder of the nonprofit Venture for America who has never run for elected office before, has made a $12,000-per-year basic income for all American adults the centerpiece of his campaign. He averages 0 to 1 percent in public opinion polls, but as of this writing, he’s surged on prediction markets, with bettors giving him slightly worse odds than Warren, Booker, and Klobuchar, and better odds than Tulsi Gabbard, Kirsten Gillibrand, or Julián Castro.
...successful or not, Yang is a fascinating cultural phenomenon. He blends a traditionally left-wing platform (a mass expansion of the safety net and a big new value-added tax, or VAT, to pay for it) with massive appeal to the young, predominantly male, and, in their unique way, socially conservative audiences of people like Joe Rogan and Sam Harris.
Comment by pablo_stafforini on What skills would you like 1-5 EAs to develop? · 2019-03-07T16:06:41.778Z · score: 3 (2 votes) · EA · GW

+1

To those interested in becoming better forecasters: I strongly recommend the list of prediction resources that Metaculus has put together.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-05T01:03:04.040Z · score: 27 (9 votes) · EA · GW

If the topics to avoid are irrelevant to EA, it seems preferable to argue that they shouldn't be discussed because they are irrelevant rather than because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-04T20:33:21.219Z · score: 8 (3 votes) · EA · GW

That makes sense.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-04T20:22:04.211Z · score: 6 (5 votes) · EA · GW
People who are down-voting, can you please explain why? To just down-vote seems unproductive.

Are you implying that every time someone downvotes a post they should provide an accompanying explanation of their decision? If not, what makes this post different from others?

Comment by pablo_stafforini on Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? · 2019-02-24T12:48:12.189Z · score: 7 (4 votes) · EA · GW

In connection to (1), a while ago I compiled a list of EA-relevant fields and movements, with associated EA and academic references. As I point out in the document, the list is incomplete, but it's already at a stage where others may perhaps find it useful. You can access the Google Doc here.

Comment by pablo_stafforini on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-17T11:09:42.362Z · score: 17 (10 votes) · EA · GW

In case it helps others decide whether or not to take the Superforecasting Fundamentals course, I'm reposting a brief message I sent to the CEA Slack workspace back in August 2017:

I took it a year or so ago. The course is very good, but also very basic: I clearly wasn’t the target audience, since I was already quite familiar with most of the content. I wouldn't recommend it unless you don’t know anything about forecasting.
Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T22:12:35.762Z · score: 1 (1 votes) · EA · GW

I see. Thanks.

Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T19:20:24.681Z · score: 2 (2 votes) · EA · GW
Another object-level point, due to AGB

Would you mind linking to the comment left by that user, rather than to the user who left the comment? Thanks.

Comment by pablo_stafforini on What are some lists of open questions in effective altruism? · 2019-02-05T11:27:04.765Z · score: 13 (6 votes) · EA · GW

This post compiles lists of important questions and problems.

Comment by pablo_stafforini on Cost-Effectiveness of Aging Research · 2019-01-31T11:46:05.924Z · score: 2 (5 votes) · EA · GW

Owen's last name is 'Cotton-Barratt'.

Comment by pablo_stafforini on High-priority policy: towards a co-ordinated platform? · 2019-01-15T13:29:42.731Z · score: 2 (2 votes) · EA · GW
What would an EA policy platform look like?

You may want to expand your list to include some of the proposals here: