Posts

Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z · score: 12 (2 votes)
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z · score: 22 (17 votes)
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z · score: 12 (12 votes)
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z · score: 16 (24 votes)
Understanding cause-neutrality 2017-03-10T17:43:51.345Z · score: 14 (13 votes)
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z · score: 4 (4 votes)
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z · score: 13 (10 votes)
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z · score: 4 (8 votes)
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z · score: 22 (17 votes)
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z · score: 7 (7 votes)
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z · score: 11 (11 votes)

Comments

Comment by stefan_schubert on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-13T21:59:54.738Z · score: 2 (1 votes) · EA · GW

Here is a passage from Hilary Greaves's Population axiology.

In many decision situations, at least in expectation, an agent’s decision has no effect on the numbers and identities of persons born. For those situations, fixed-population ethics is adequate. But in many other decision situations, this condition does not hold. Should one have an additional child? How should life-saving resources be prioritised between the young (who might go on to have children) and the old (who are past reproductive age)? How much should one do to prevent climate change from reducing the number of persons the Earth is able to sustain in the future? Should one fund condom distribution in the developing world? In all these cases, one’s actions can affect both who is born and how many people are (ever) born. To deal with cases of this nature, we need variable-population ethics: ‘population ethics’ for short.

Comment by stefan_schubert on The Importance of Unknown Existential Risks · 2020-07-23T20:46:04.644Z · score: 7 (4 votes) · EA · GW

One possibility is that there aren't many risks that are truly unknown, in the sense that they fall outside of the categories Toby enumerates, for the simple reason that some of those categories are relatively broad, and so cover much of the space of possible risks.

Even if that were true, however, there might still be (fine-grained) risks within those categories that we haven't thought about - e.g. new ways in which AI could cause an existential catastrophe.

Comment by stefan_schubert on Nathan Young's Shortform · 2020-07-23T14:26:46.542Z · score: 8 (6 votes) · EA · GW

Are the two bullet points two alternative suggestions? If so, I prefer the first one.

Comment by stefan_schubert on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T20:45:44.115Z · score: 9 (4 votes) · EA · GW

Right, so instead of (or maybe in addition to) giving flexible power to supposedly benevolent and intelligent actors (implication 3 above), you create structures, norms, and practices which enable anyone to do good effectively (~give anyone the power to do what's benevolent and intelligent).

Comment by stefan_schubert on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T20:44:10.590Z · score: 31 (15 votes) · EA · GW

What are the key issues or causes that longtermists should invest in, in your view? And how much should we invest in them, relatively speaking? What issues are we currently under-investing in?

Comment by stefan_schubert on BenMillwood's Shortform · 2020-07-12T11:56:58.367Z · score: 13 (5 votes) · EA · GW

Even if it's legal, some people may think it's unethical to lobby against an industry that you've shorted.

It could provide that industry with a way of undermining the arguments against them: they might claim that their critics have ulterior motives.

Comment by stefan_schubert on edoarad's Shortform · 2020-07-06T12:48:31.025Z · score: 6 (3 votes) · EA · GW

Some parts of the world aren't closing the gap with the US much.

Regarding the global power structure, what matters is probably not overall global levels of convergence, but rather whether some large countries (e.g. China) converge with the US.

Regarding that question, it probably doesn't matter that much if a country is very poor or somewhat poor - since only relatively rich countries can compete militarily and politically anyway.

But from the perspective of global poverty and welfare, it obviously matters a lot whether a very poor country manages to reduce its level of poverty.

Comment by stefan_schubert on The Moral Value of Information - edited transcript · 2020-07-03T17:45:27.604Z · score: 5 (3 votes) · EA · GW

Thanks for doing this, I think it's a great talk.

The images ended up a bit too small, I think. Is it possible to make them larger somehow? That would be great. Thanks.

Comment by stefan_schubert on Study results: The most convincing argument for effective donations · 2020-07-01T12:59:43.512Z · score: 4 (2 votes) · EA · GW

Eric Schwitzgebel responded as follows to a similar comment on his wall:

According to the contest rules, the "winner" is just the argument with the highest mean donation, if it statistically beats the control. It didn't have to statistically beat the other arguments, and as you note it did not do so in this case.

But many won't interpret it that way, and further clarification would have been good, yes.

Edit: Schwitzgebel's post actually had a different title: "Contest Winner! A Philosophical Argument That Effectively Convinces Research Participants to Donate to Charity"

Comment by stefan_schubert on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T21:30:33.014Z · score: 16 (6 votes) · EA · GW

Relatedly, on the nature of expertise. What's the relative importance of domain-specific knowledge and domain-general forecasting abilities (and which facets of those are most important)?

Comment by stefan_schubert on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T23:09:01.974Z · score: 7 (4 votes) · EA · GW

Yes, though it's possible that some or all of the ideas and values of effective altruism could live on under other names or in other forms even if the name "effective altruism" ceased to be used much.

Comment by stefan_schubert on MichaelA's Shortform · 2020-06-26T14:32:42.269Z · score: 20 (5 votes) · EA · GW

I've written some posts on related themes.

https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0

https://forum.effectivealtruism.org/posts/pYaYtCT3Fc5H4rfWS/opinion-piece-on-the-swedish-network-for-evidence-based

https://forum.effectivealtruism.org/posts/CYyaQ3N4ipLFR4fzX/effective-altruism-s-fact-value-separation-as-a-weapon

https://forum.effectivealtruism.org/posts/yPkiBNW49NZvGvJ3q/political-debiasing-and-the-political-bias-test

Comment by stefan_schubert on EA considerations regarding increasing political polarization · 2020-06-26T14:15:36.661Z · score: 42 (14 votes) · EA · GW

I agree with those who say that the analogy with the Cultural Revolution isn't ideal.

Yes, there are some relevant similarities with the Cultural Revolution. But the facts that many millions were killed in the Cultural Revolution, and that the regime was a dictatorship, are extremely salient features of it. It doesn't usually work to say "I mean that it's like the Cultural Revolution in other respects - just not those respects". Those features are so central and so salient that it's difficult to dissociate them in that way.

Relatedly, I think that comparisons to the Cultural Revolution tend to function as motte and baileys (specifically, hyperboles). They have a rhetorical punch precisely because the Cultural Revolution was so brutal. People find the analogy powerful precisely because of the associations to that brutality.

But then when you get criticised, you can retreat and say "well, I didn't mean those features of the Cultural Revolution - I just meant that there was ideological conformity, etc" - and it's more defensible to say that parts of the US have those features today.

Comment by stefan_schubert on EA considerations regarding increasing political polarization · 2020-06-21T13:17:45.999Z · score: 10 (7 votes) · EA · GW

Good point. Maybe it would be possible to convince some pundits and thought leaders to participate in such tournaments, and maybe that could make them less polarised and have other beneficial effects.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-06-11T15:09:47.978Z · score: 6 (4 votes) · EA · GW

I wrote a blog post on utilitarianism and truth-seeking. Brief summary:

The Oxford Utilitarianism Scale defines the tendency to accept utilitarianism in terms of two factors: acceptance of instrumental harm for the greater good, and impartial beneficence.

But there is another question, which is subtly different, namely: what psychological features do we need to apply utilitarianism, and to do it well?

Once we turn to application, truth-seeking becomes hugely important. The utilitarian must find the best ways of doing good. You can only do that if you're a devoted truth-seeker.

Comment by stefan_schubert on Cause Prioritization in Light of Inspirational Disasters · 2020-06-09T08:56:24.879Z · score: 3 (3 votes) · EA · GW

I think the word "inspirational" isn't ideal either, and is in fact not very different from "inspiring". And I think the title matters massively for the interpretation of an article. So I think you haven't appropriately addressed David's legitimate point. I wouldn't use "inspiring", "inspirational", or similar words.

Comment by stefan_schubert on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-28T13:23:20.468Z · score: 2 (1 votes) · EA · GW

Thanks Tobias, that's helpful.

Comment by stefan_schubert on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-27T11:27:43.794Z · score: 12 (7 votes) · EA · GW

Looks interesting, though it's pretty long, whereas the abstract is very brief and not too informative. You might get more input if you write a summary roughly the length of a standard EA Forum post.

Comment by stefan_schubert on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T22:02:12.499Z · score: 6 (3 votes) · EA · GW

Minor: some recent papers argue the death toll from the Plague of Justinian has been exaggerated.

Existing mortality estimates assert that the Justinianic Plague (circa 541 to 750 CE) caused tens of millions of deaths throughout the Mediterranean world and Europe, helping to end antiquity and start the Middle Ages. In this article, we argue that this paradigm does not fit the evidence.

https://www.pnas.org/content/116/51/25546

It concludes that the Justinianic Plague had an overall limited effect on late antique society. Although on some occasions the plague might have caused high mortality in specific places, leaving strong impressions on contemporaries, it neither caused widespread demographic decline nor kept Mediterranean populations low.

https://academic.oup.com/past/article/244/1/3/5532056

(Two authors appear on both papers.)

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-02T19:12:01.298Z · score: 3 (2 votes) · EA · GW

Thanks, makes sense.

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-31T00:14:44.442Z · score: 16 (8 votes) · EA · GW

Thanks, interesting.

1) One distinction one might want to make is between better versions of existing institutions and truly novel epistemic institutions. E.g. the Global Priorities Institute and the Future of Humanity Institute are examples of the former - the university research institute isn't a novel kind of institution. Other examples could be better expert surveys (which already exist), better data presentation, etc. My sense is that some people who think about better institutions are too focused on entirely new institutions, while neglecting better versions of existing ones. Building something entirely novel is often very hard, whereas it's easier to build a new version of an existing institution.

2) One mistake people who design new institutions often make is to overestimate the amount of work others are willing to put into their schemes. E.g. suggested new institutions like post-publication peer review and some forms of prediction institutions suffer from the fact that people don't want to invest the time in them that they require. I think that's a key consideration that's often forgotten. This may be a particular problem for certain complex decentralised institutions, which depend on freely operating individuals (i.e. people you don't employ full-time) investing time in your institution, either voluntarily or for profit. Such decentralised institutions can be theoretically attractive, but I think there is a risk that people get nerd-sniped into putting more time into theorising about them than they're worth. By contrast, I'm generally more positive about professional institutions which employ people full-time (e.g. university departments). But obviously each suggestion should be evaluated on its own merits.

3) With regards to "norms and folkways", there is a discussion in economics and the other social sciences about the relative importance of "culture" and (formal) institutions for economic growth and other desirable developments. My view is that culture and norms are often underrated relative to formal institutions. The EA community has developed a set of epistemic norms and an epistemic culture which is by and large pretty good. In fact, we seem to have developed few formal institutions that are as valuable as those norms and that culture. That seems to me a reason to think more about how to foster better norms and a better culture, both within the EA community and outside it.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-09T15:15:13.429Z · score: 3 (2 votes) · EA · GW

Foreign Affairs discussing similar ideas:

One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.

Comment by stefan_schubert on What are the key ongoing debates in EA? · 2020-03-09T10:25:48.477Z · score: 21 (14 votes) · EA · GW

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-06T14:45:38.381Z · score: 15 (8 votes) · EA · GW

International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder - poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for, via a tax. The money would be used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?
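
As a very rough illustration of how one might think about the size of such a tax, here is a minimal back-of-envelope sketch in Python. Every number below is a hypothetical placeholder, not an estimate:

expected_pandemic_cost = 1e13  # assumed total cost of a severe pandemic, in USD (hypothetical)
annual_pandemic_prob = 0.01    # assumed baseline annual probability of such a pandemic (hypothetical)
aviation_risk_share = 0.05     # assumed share of that risk attributable to air travel (hypothetical)
flights_per_year = 4e9         # roughly the order of annual passenger journeys

# Pigouvian logic: price each flight at its expected marginal external damage.
expected_annual_damage = expected_pandemic_cost * annual_pandemic_prob * aviation_risk_share
tax_per_flight = expected_annual_damage / flights_per_year
print(f"Illustrative tax per flight: ${tax_per_flight:.2f}")  # $1.25 under these assumptions

The point of the sketch is just that the optimal tax falls straight out of the assumed risk parameters; the hard work is estimating them.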

One might also consider whether there are other behaviours that increase the risk of pandemics that should be taxed for the same reason. Seb Farquhar, Owen Cotton-Barratt, and Andrew Snyder-Beattie already suggested that risk externalities should be priced into research with public health risks.

Comment by stefan_schubert on Activism for COVID-19 Local Preparedness · 2020-03-03T10:43:05.584Z · score: 4 (2 votes) · EA · GW

Thanks, important info.

The second link is incorrect; should be: https://threadreaderapp.com/thread/1228373884027592704.html

Comment by stefan_schubert on Illegible impact is still impact · 2020-02-18T16:03:05.531Z · score: 8 (5 votes) · EA · GW

Cf. Katja Grace's Estimation is the best we have (which was re-published in the first version of the EA Handbook, edited by Ryan Carey).

Comment by stefan_schubert on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T12:38:28.403Z · score: 7 (4 votes) · EA · GW

That some of donors will be persuaded not to donate by the information is a feature, not a bug.

That isn't true as a matter of definition, as you seem to imply. Some donors being persuaded not to donate by the information can be a feature, but it can also be a bug. It has to be decided on a case-by-case basis, by looking at what the disclosure statement actually says.

Comment by stefan_schubert on [Notes] Steven Pinker and Yuval Noah Harari in conversation · 2020-02-09T19:10:55.201Z · score: 3 (2 votes) · EA · GW

Thanks for this. Minor: should be Steven Pinker, not Stephen.

Comment by stefan_schubert on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-21T20:48:50.350Z · score: 13 (8 votes) · EA · GW

Sometimes the term "the Gricean maxims" (or "Grice's maxims") is used instead of "the Cooperative Principle" as the principal term. I personally find it more memorable, since "the Cooperative Principle" could mean so many things.

Comment by stefan_schubert on Khorton's Shortform · 2020-01-21T00:27:07.119Z · score: 4 (3 votes) · EA · GW

Point 3 was discussed here. My impression of that discussion is that many of the forum readers thought that it's important that one familiarises oneself with the literature before commenting. Like I say in my comment, that's certainly my view.

I agree that too many EA Forum posts fail to appropriately engage with relevant literature.

Comment by stefan_schubert on Growth and the case against randomista development · 2020-01-16T10:17:58.360Z · score: 4 (2 votes) · EA · GW

Thanks for this! You might want to make clearer who the authors are; I take it that John Halstead is a co-author, but his last name doesn't appear as far as I can see.

Comment by stefan_schubert on Khorton's Shortform · 2020-01-15T01:03:51.116Z · score: 4 (2 votes) · EA · GW

In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. I think that reasoning can go too far, however: if that argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even while in some respects imperfect.

Comment by stefan_schubert on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T15:36:07.190Z · score: 4 (3 votes) · EA · GW

Which actors do you think one should try to influence to make sure that a potential transition to a world with AGI goes well (e.g. so that it leads to widely shared benefits)? For instance, do you think one should primarily focus on influencing private companies or governments? I'd be interested in learning more about the arguments for whatever conclusions you have. Thanks!

Comment by stefan_schubert on In praise of unhistoric heroism · 2020-01-08T11:38:31.574Z · score: 9 (6 votes) · EA · GW

A recent book discusses the evolutionary causes of "bad feelings", and to what extent they have instrumental benefits: Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry.

Comment by stefan_schubert on The Center for Election Science Year End EA Appeal · 2020-01-06T21:25:30.252Z · score: 6 (3 votes) · EA · GW

Maybe this discussion is a bit tangential to The Center for Election Science's fund-raising.

Comment by stefan_schubert on List of EA-related email newsletters · 2020-01-06T12:15:03.538Z · score: 4 (2 votes) · EA · GW

A new newsletter of potential interest: Reasonable People, by cognitive scientist Tom Stafford.

The plan is to collect in one place things I write on human rationality, reason and persuasion, sharing links and evidence on these topics as I try and understand advertising, bias, misinformation, influence and decision making.

Comment by Stefan_Schubert on [deleted post] 2020-01-03T23:54:49.387Z

Was posted here.

Comment by stefan_schubert on Thoughts on doing good through non-standard EA career pathways · 2019-12-30T10:48:08.474Z · score: 50 (25 votes) · EA · GW

Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people's judgements. Sometimes I think I've observed the following dynamic.

1. It's argued, more or less explicitly, that EAs should switch career into one of a small number of causes.

2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.

3. The preceding point means that there is a painful tension between the desire to do the most good, and one's personal career prospects. There is a strong desire to resolve that tension.

4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn't need to engage in tough trade-offs to do the most good. One can keep doing roughly what one currently does.

5. The EAs who believe in point 1 - that EAs should switch career to other causes - are often unwilling to criticise the reasoning described in 4. That's because these issues are rather emotional and personal, and because some may think it's insensitive to criticise people's personal career choices.

I think similar dynamics play out with regards to cause prioritisation more generally, decisions whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people often are quite emotional about their choice, and therefore reluctant to give up on it even in the face of better evidence, and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.

One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (obviously without snark), and to do so in some detail. That's precisely what this post does.

Comment by stefan_schubert on 8 things I believe about climate change · 2019-12-29T10:33:50.484Z · score: 3 (2 votes) · EA · GW

I agree that consensus is unlikely regarding AI safety, but what I rather meant was that it's useful when individuals make clear claims about difficult questions - and that's possible whether others agree with them or not. In AI Impacts' interview series, such claims are made (e.g. here: https://aiimpacts.org/conversation-with-adam-gleave/).

Comment by stefan_schubert on Aligning Recommender Systems as Cause Area · 2019-12-29T09:48:20.485Z · score: 6 (4 votes) · EA · GW

A new article (referring to this new paper) argues that the New York Times's claims about algorithmic radicalization are flawed (the OP links to an NYT article on such issues):

By looking at recommendation flows between various political orientations and subcultures, we show how YouTube’s late 2019 algorithm is not a radicalization pipeline, but in fact:
- Removes almost all recommendations for conspiracy theorists, provocateurs and white Identitarians
- Benefits mainstream partisan channels such as Fox News and Last Week Tonight
- Disadvantages almost everyone else

Comment by stefan_schubert on 8 things I believe about climate change · 2019-12-28T11:00:13.982Z · score: 15 (8 votes) · EA · GW

Thanks for this. I think it's valuable when well-informed EAs make easily interpretable claims about difficult questions (another such question is AI risk). This post (including the "appendices" in the comments) strikes a good balance; it is epistemically responsible, yet has clear conclusions.

Comment by stefan_schubert on More info on EA Global admissions · 2019-12-27T15:23:13.557Z · score: 11 (4 votes) · EA · GW

You don't have to provide a complete ranking of candidates. You only have to decide which candidates to accept and which not to in the bucket that you would prefer to randomise. And it seems to me that such decisions could in principle be made extremely quickly, particularly since you must already have assimilated some information about the candidates in order to put them in the right bucket (though speed probably affects quality adversely, I still think some signal would remain).

Comment by stefan_schubert on More info on EA Global admissions · 2019-12-27T13:40:36.219Z · score: 14 (5 votes) · EA · GW

If time is an issue, organisers can make quick snap judgements. It's not clear to me that randomisation would be much faster, particularly since on your approach you have to make a first rough scoring anyway. And it seems reasonable, in my view, to think that organisers are better than chance at picking the better applicants, even when using snap judgements, and even among applicants in the same bucket.
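
To illustrate why some signal should survive even very noisy snap judgements, here is a minimal simulation sketch (the population size, acceptance number, and noise level are all arbitrary assumptions):

import random

random.seed(0)
n_applicants, n_accept, trials = 100, 30, 2000
noise = 2.0  # snap judgements assumed to be twice as noisy as the underlying quality signal

advantage = 0.0
for _ in range(trials):
    quality = [random.gauss(0, 1) for _ in range(n_applicants)]
    snap = [q + random.gauss(0, noise) for q in quality]
    # Accept the top applicants by (noisy) snap score...
    by_snap = sorted(range(n_applicants), key=lambda i: -snap[i])[:n_accept]
    # ...versus accepting a uniformly random subset.
    by_rand = random.sample(range(n_applicants), n_accept)
    advantage += (sum(quality[i] for i in by_snap) - sum(quality[i] for i in by_rand)) / n_accept

print(f"Mean quality advantage of snap judgements over randomisation: {advantage / trials:.2f} SD")

Even under these pessimistic assumptions about noise, selecting on snap scores reliably beats randomising within the bucket.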

Comment by stefan_schubert on aarongertler's Shortform · 2019-12-27T10:32:20.587Z · score: 2 (1 votes) · EA · GW

Could the option to strongly upvote one's own comments (and posts, in case you remove the automatic strong upvotes on posts) be disabled, as discussed here? Thanks.

Comment by stefan_schubert on Max_Daniel's Shortform · 2019-12-17T13:30:04.599Z · score: 30 (13 votes) · EA · GW

Awesome post, Max, many thanks for this. I think it would be good if these difficult questions were discussed more on the forum by leading researchers like yourself.

I think you should post this as a normal post; it's far too good and important to be hidden away on the shortform.

Comment by stefan_schubert on EA Updates for November 2019 · 2019-11-29T14:17:14.932Z · score: 5 (3 votes) · EA · GW

Great stuff. In one place it says "Natalia Cargill"; should be "Natalie Cargill".

Comment by stefan_schubert on A list of EA-related podcasts · 2019-11-28T00:32:56.539Z · score: 4 (4 votes) · EA · GW

Yes, agree that it would have been natural to include hyperlinks in this otherwise very helpful post.

Pablo's list does include links.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-20T11:33:00.382Z · score: 3 (2 votes) · EA · GW

Marginal Revolution:

Due to a special grant, there has been a devoted tranche of Emergent Ventures to individuals, typically scholars and public intellectuals, studying the nature and causes of progress.

Nine grantees, including one working on X-risk:

Leopold Aschenbrenner, 17 year old economics prodigy, to spend the next summer in the Bay Area and for general career development. Here is his paper on existential risk.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-18T14:09:10.855Z · score: 2 (1 votes) · EA · GW

Eric Schwitzgebel:

We Might Soon Build AI Who Deserve Rights
Talk for Notre Dame, November 19:
Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

Comment by stefan_schubert on AGI safety and losing electricity/industry resilience cost-effectiveness · 2019-11-17T17:48:00.389Z · score: 2 (1 votes) · EA · GW

Yes, it seems there are recurrent problems with uploading images (cf.).