Posts

Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z · score: 12 (2 votes)
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z · score: 22 (17 votes)
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z · score: 12 (12 votes)
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z · score: 16 (24 votes)
Understanding cause-neutrality 2017-03-10T17:43:51.345Z · score: 14 (13 votes)
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z · score: 4 (4 votes)
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z · score: 13 (10 votes)
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z · score: 4 (8 votes)
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z · score: 22 (17 votes)
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z · score: 7 (7 votes)
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z · score: 11 (11 votes)

Comments

Comment by stefan_schubert on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:05:49.758Z · score: 5 (3 votes) · EA · GW

Some argue, however, that partisan TV and radio were helped by the abolition of the FCC Fairness Doctrine in 1987. That amounts to saying that polarisation was driven at least partly by legal changes rather than by technological innovations.

Obviously media influences public opinion. But the question is whether specific media technologies (e.g. social media vs TV vs radio vs newspapers) cause more or less polarisation, fake news, partisanship, filter bubbles, and so on. That's a difficult empirical question, since all those things can no doubt be mediated to some degree through each of these media technologies.

Comment by stefan_schubert on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T09:40:27.852Z · score: 21 (10 votes) · EA · GW

This study looked at nine countries and found that polarisation had decreased in five. The US was an outlier, having seen the largest increase in polarisation. That may suggest that American polarisation is due to US-specific factors, rather than universal technological trends.

Here are some studies suggesting the prevalence of technology-driven echo chambers and filter bubbles may be exaggerated.

Comment by stefan_schubert on Open and Welcome Thread: October 2020 · 2020-10-06T23:11:47.262Z · score: 10 (3 votes) · EA · GW

Yeah, this has been discussed before. I think that it should not be possible to strongly upvote one's own comments.

Comment by stefan_schubert on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T13:18:57.650Z · score: 6 (4 votes) · EA · GW

Interesting. It may be worth noting how support for consequentialism is measured in this paper.

In our first study, we use a self-report measure of consequentialist (vs. deontological) thinking to examine participant responses to a range of morally questionable actions (beyond sacrifice), many of which people are likely to encounter in real life (e.g., lying, breaking a promise, engaging in malicious gossip, or breaking the law).
[Study 2] ... a series of moral dilemmas—analogous to trolley/footbridge problems—that were either congruent or incongruent in terms of their representation of deontological and consequentialist principles.
[W]e caution that our inferences are warranted for consequentialism, but perhaps not for utilitarianism. We have shown that intellect predicts moral judgments based upon a consideration of consequences (Study 1) and the acceptability of instrumental harm in increasing aggregate welfare (Study 2). Neither of these capture additional aspects of utilitarianism concerned with impartial maximization of the greater good (see Kahane et al., 2018). Future research might thus extend our present focus to explore the role of personality in predicting multiple dimensions of utilitarianism (e.g., impartiality versus instrumental harm; Kahane et al., 2018) and, indeed, different forms of consequentialism (e.g., those grounded in hedonistic versus non-hedonistic conceptions of the good) and deontology (e.g., agent-centered versus patient-centered).
Comment by stefan_schubert on RyanCarey's Shortform · 2020-09-30T14:05:42.001Z · score: 9 (2 votes) · EA · GW

A quite obvious point that may still be worth making is that the balance of the considerations will look very different for different people. E.g. if you're able to have a connection with a top university while being a professor elsewhere, that could change the calculus. There could be numerous idiosyncratic considerations worth taking into account.

Comment by stefan_schubert on Suggestions for Online EA Discussion Norms · 2020-09-24T19:52:49.847Z · score: 4 (2 votes) · EA · GW

The extraordinary value of ordinary norms by Emily Tench is a bit related. Several of the norms she covers concern good discussions and adjacent issues.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-09-19T12:40:29.278Z · score: 2 (1 votes) · EA · GW

Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-09-19T12:13:14.944Z · score: 10 (3 votes) · EA · GW

On encountering global priorities research (from my blog).


People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.

This can happen for many reasons, and there’s some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field.

But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field.

First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don’t like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with.

Second, most intellectually curious people have put some thought into the questions that global priorities research studies, even if they've never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That's good in some ways, but it can also make them overconfident in their ability to judge existing global priorities research. Identifying the most effective ways of improving the world requires much more systematic thinking than most people will have done prior to encountering the field.

Third, people may underestimate how much thinking global priorities researchers have done over the past 10-20 years, and how sophisticated that thinking is. This is to some extent understandable, given how young the field is. But if you start to truly engage with the best global priorities research, you realise that the researchers have an answer to most of your objections. And you'll discover that they've come up with many important considerations that you've likely never thought of. This was definitely my personal experience.

For these reasons, people who are new to global priorities research may come to dismiss existing research prematurely. Of course, that’s not the only mistake you can make. You can also go too far in the other direction, and be overly deferential. It’s a tricky balance to strike. But in my experience, premature dismissal is relatively common - and maybe especially so among smart and experienced people. So it’s something to watch out for.

Thanks to Ryan Carey for comments.

Comment by stefan_schubert on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-18T18:44:42.037Z · score: 21 (10 votes) · EA · GW

I'd say most PhD students don't publish in the Journal of Philosophy or other journals of similar or better quality (it's the fourth-best general philosophy journal according to a poll by Brian Leiter).

This blog post seems to suggest it has an acceptance rate of about 5%.

Comment by stefan_schubert on Long-Term Future Fund: September 2020 grants · 2020-09-18T13:48:48.257Z · score: 18 (9 votes) · EA · GW

Yes. Also, regarding this issue:

you could find someone with a similar talent level ... who could produce many more videos

It seems that the Long-Term Future Fund isn't actively searching for people to do specific tasks, if I understand the post correctly. Instead, it's reviewing applications that come to them. (It's more labour-intensive to do an active search.) That means that it can be warranted to fund an applicant even if it's possible that there could be better candidates for the same task somewhere out there. (Minor edits.)

Comment by stefan_schubert on How do political scientists do good? · 2020-09-15T23:19:02.195Z · score: 10 (3 votes) · EA · GW

Great suggestions.

Tyler John and Will MacAskill also have this paper, "Longtermist Institutional Reform" (in the forthcoming book The Long View, edited by Natalie Cargill).

Comment by stefan_schubert on Are social media algorithms an existential risk? · 2020-09-15T21:09:22.688Z · score: 24 (7 votes) · EA · GW

There are some studies suggesting fake news isn't quite the problem some think.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3316768

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3107731

There are also a number of papers which are sceptical of there being pervasive social media "echo chambers" or "filter bubbles".

http://eprints.lse.ac.uk/87402/

https://www.sciencedirect.com/science/article/abs/pii/S0747563216309086

Cf. also this recent book by Hugo Mercier, which argues that people are less gullible than many think.

I don't know this literature well and am not quite sure what conclusions to draw. My impression is, however, that some claims of the dangers of fake news on social media are exaggerated.

Cf. also my comment on the post on recommender systems, relating to other effects of social media.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-09-15T16:06:11.470Z · score: 18 (5 votes) · EA · GW

I've written a blog post on naive effective altruism and conflict.


A very useful concept is naive effective altruism. The naive effective altruist fails to take some important social or psychological considerations into account. Therefore, they may end up doing harm, rather than good.

The standard examples of naive effective altruism are perhaps lying and stealing for the greater good. But there are other, less salient examples. Here I want to discuss one of them: the potential tendency to be overly conflict-oriented. There are several ways this may occur.

First, people may neglect the costs of conflict - that it's psychologically draining for them and for others, that it reduces the potential for future collaboration, that it may harm community culture, and so on. Typically, you enter into a conflict because you think that some individual or organisation is making a poor decision - e.g. one that reduces impact. My hunch is that people often decide to enter into the conflict because they exclusively focus on this (supposed) direct impact cost, and don't consider the costs of the conflict itself.

Second, people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.

Third, overconfidence may lead you to think that a decision is obviously flawed, where there’s actually reasonable disagreement. That can make you push more than you should.

*

These considerations don't mean that you should never enter into a conflict. Of course you sometimes should. Exactly when to do so is a tricky problem. All I want to say is that we should be aware that there's a risk of entering into too many conflicts if we apply effective altruism naively.

Comment by stefan_schubert on How have you become more (or less) engaged with EA in the last year? · 2020-09-11T17:36:35.989Z · score: 38 (14 votes) · EA · GW

In contrast to some of the responses here, I think that EA has become more intellectually sophisticated in recent years. It's true that there were many new ideas at the beginning. But it feels a bit unfair to just look at the number of new ideas, given that generating them is easier at the start - when there's more low-hanging fruit.

Relatedly, it seems to me that EA organisations are also getting more mature and skilled. There are several impressive new organisations, and others have expanded considerably.

Comment by stefan_schubert on Asking for advice · 2020-09-09T18:19:17.458Z · score: 6 (4 votes) · EA · GW

Maybe one option would be to both send the Calendly link and write a more standard email? E.g.:

"When would suit you? How about Tuesday 3pm or Wednesday 4pm? Alternatively, you could check my Calendly, if you prefer."

Maybe some find that overly roundabout.

Comment by stefan_schubert on Asking for advice · 2020-09-09T14:27:58.329Z · score: 5 (3 votes) · EA · GW

I think that for many, it's primarily the act of sending a Calendly link that is off-putting (for social, potentially status-related, reasons), rather than the experience of interacting with the software. My hunch is that people don't have the same aversion to, e.g., Doodle, which is more symmetric (it's not that one person sends their preferences to the other; rather, everyone lists their preferences). (But you may be different.)

Comment by stefan_schubert on Misha_Yagudin's Shortform · 2020-09-07T23:59:45.009Z · score: 5 (3 votes) · EA · GW

Fwiw, I started reading this book but found it long-winded and not carefully argued, so I put it aside.

Comment by stefan_schubert on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-03T11:06:05.992Z · score: 17 (7 votes) · EA · GW
I don't actually believe the naively extrapolated future is the most plausible outcome—more on that later—but I do think if you asked most people what they expect the world to look like a thousand years from now, they'd predict something like it.

In a recent paper, we asked participants:

Suppose that humanity does not go extinct, but survives for a very long time. How good do you think that the world will become in that future, compared with the present world? 

Results:

Participants believed that provided that humanity will not go extinct, the future is going to be roughly as good as the present (1 = much worse than the present world, 4 = about as good as the present world, 7 = much better than the present world; M = 4.48, SD = 1.57)...
Comment by stefan_schubert on Some history topics it might be very valuable to investigate · 2020-08-28T14:02:17.639Z · score: 2 (1 votes) · EA · GW

Thanks, yes I'd be interested.

Comment by stefan_schubert on Some history topics it might be very valuable to investigate · 2020-08-28T11:12:09.498Z · score: 6 (3 votes) · EA · GW
"Will humanity achieve its full potential, as long as existential catastrophe is prevented?"
I think an argument in favour of "Yes" is that it might be highly likely that, if we don’t suffer an existential catastrophe, there will be positive trends across the long-term future in all key domains.

That there will be positive trends doesn't necessarily entail that humanity (or some other entities) will achieve its full potential, however. It's possible that the future will be better than the present, without humanity achieving its full potential. And the value difference between such a future and a future where humanity achieves its full potential may be vast.

I agree that there is an historical argument for positive future trends, but it seems that one needs additional steps to conclude that humanity will achieve its full potential.

Comment by stefan_schubert on A curriculum for Effective Altruists · 2020-08-28T09:57:58.233Z · score: 22 (9 votes) · EA · GW

Julia Wise provided a list of EA syllabi and teaching materials here.

(RSP = Future of Humanity Institute's Research Scholars Programme.)

Comment by stefan_schubert on How can good generalist judgment be differentiated from skill at forecasting? · 2020-08-22T10:44:01.134Z · score: 6 (3 votes) · EA · GW

Cambridge Dictionary defines judgement as:

the ability to form valuable opinions and make good decisions

Forecasting isn't (at least not directly) about decision-making (cf. instrumental rationality) but just about knowledge and understanding (epistemic rationality).

A bit tangential, but may still be of interest: a recent paper argued that there are two competing standards of good judgement: rationality and reasonableness.

Normative theories of judgment either focus on rationality (decontextualized preference maximization) or reasonableness (pragmatic balance of preferences and socially conscious norms). ... Lay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.
Comment by stefan_schubert on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-21T19:44:41.471Z · score: 9 (5 votes) · EA · GW

It could be good if someone wrote an overview of the growing number of fellowships and scholarships in EA (and maybe also other forms of professional EA work). It could include the kind of info given above, and maybe draw inspiration from Larks' overviews of the AI Alignment landscape. I don't think I have seen anything quite like that, but please correct me if I'm wrong.

Comment by stefan_schubert on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-13T21:59:54.738Z · score: 10 (5 votes) · EA · GW

Here is a passage from Hilary Greaves's Population axiology.

In many decision situations, at least in expectation, an agent’s decision has no effect on the numbers and identities of persons born. For those situations, fixed-population ethics is adequate. But in many other decision situations, this condition does not hold. Should one have an additional child? How should life-saving resources be prioritised between the young (who might go on to have children) and the old (who are past reproductive age)? How much should one do to prevent climate change from reducing the number of persons the Earth is able to sustain in the future? Should one fund condom distribution in the developing world? In all these cases, one’s actions can affect both who is born and how many people are (ever) born. To deal with cases of this nature, we need variable-population ethics: ‘population ethics’ for short.
Comment by stefan_schubert on The Importance of Unknown Existential Risks · 2020-07-23T20:46:04.644Z · score: 7 (4 votes) · EA · GW

One possibility is that there aren't many risks that are truly unknown, in the sense that they fall outside the categories Toby enumerates, for the simple reason that some of those categories are relatively broad, and so cover much of the space of possible risks.

Even if that were true, there might still be (fine-grained) risks we haven't thought about within those categories, however - e.g. new ways in which AI could cause an existential catastrophe.

Comment by stefan_schubert on Nathan Young's Shortform · 2020-07-23T14:26:46.542Z · score: 9 (7 votes) · EA · GW

Are the two bullet points two alternative suggestions? If so, I prefer the first one.

Comment by stefan_schubert on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T20:45:44.115Z · score: 9 (4 votes) · EA · GW

Right, so instead of (or maybe in addition to) giving flexible power to supposedly benevolent and intelligent actors (implication 3 above), you create structures, norms, and practices which specifically enable anyone to do good effectively (~give anyone the power to do what's benevolent and intelligent).

Comment by stefan_schubert on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T20:44:10.590Z · score: 31 (15 votes) · EA · GW

What are the key issues or causes that longtermists should invest in, in your view? And how much should we invest in them, relatively speaking? What issues are we currently under-investing in?

Comment by stefan_schubert on BenMillwood's Shortform · 2020-07-12T11:56:58.367Z · score: 13 (5 votes) · EA · GW

Even if it's legal, some people may think it's unethical to lobby against an industry that you've shorted.

It could provide that industry with a way of undermining the arguments against it: they might claim that their critics have ulterior motives.

Comment by stefan_schubert on edoarad's Shortform · 2020-07-06T12:48:31.025Z · score: 6 (3 votes) · EA · GW

Some parts of the world aren't catching up much with the US.

Regarding the global power structure, what matters is probably not overall global levels of convergence, but rather whether some large countries (e.g. China) converge with the US.

Regarding that question, it probably doesn't matter that much if a country is very poor or somewhat poor - since only relatively rich countries can compete militarily and politically anyway.

But from the perspective of global poverty and welfare, it obviously matters a lot whether a very poor country manages to reduce its level of poverty.

Comment by stefan_schubert on The Moral Value of Information - edited transcript · 2020-07-03T17:45:27.604Z · score: 6 (4 votes) · EA · GW

Thanks for doing this, I think it's a great talk.

The images ended up a bit too small, I think. Is it possible to make them larger somehow? I think that would be great. Thanks.

Comment by stefan_schubert on Study results: The most convincing argument for effective donations · 2020-07-01T12:59:43.512Z · score: 6 (3 votes) · EA · GW

Eric Schwitzgebel responded as follows to a similar comment on his wall:

According to the contest rules, the "winner" is just the argument with the highest mean donation, if it statistically beats the control. It didn't have to statistically beat the other arguments, and as you note it did not do so in this case.

But many won't interpret it that way and further clarification would have been good, yes.

Edit: Schwitzgebel's post actually had another title: "Contest Winner! A Philosophical Argument That Effectively Convinces Research Participants to Donate to Charity"

Comment by stefan_schubert on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T21:30:33.014Z · score: 16 (6 votes) · EA · GW

Relatedly, on the nature of expertise. What's the relative importance of domain-specific knowledge and domain-general forecasting abilities (and which facets of those are most important)?

Comment by stefan_schubert on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T23:09:01.974Z · score: 7 (4 votes) · EA · GW

Yes, though it's possible that some or all of the ideas and values of effective altruism could live on under other names or in other forms even if the name "effective altruism" ceased to be used much.

Comment by stefan_schubert on MichaelA's Shortform · 2020-06-26T14:32:42.269Z · score: 20 (5 votes) · EA · GW

I've written some posts on related themes.

https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0

https://forum.effectivealtruism.org/posts/pYaYtCT3Fc5H4rfWS/opinion-piece-on-the-swedish-network-for-evidence-based

https://forum.effectivealtruism.org/posts/CYyaQ3N4ipLFR4fzX/effective-altruism-s-fact-value-separation-as-a-weapon

https://forum.effectivealtruism.org/posts/yPkiBNW49NZvGvJ3q/political-debiasing-and-the-political-bias-test

Comment by stefan_schubert on EA considerations regarding increasing political polarization · 2020-06-26T14:15:36.661Z · score: 42 (14 votes) · EA · GW

I agree with those who say that the analogy with the Cultural Revolution isn't ideal.

Yes, there are some relevant similarities with the Cultural Revolution. But the fact that many millions were killed in the Cultural Revolution, and that the regime was a dictatorship, are extremely salient features. It doesn't usually work to say that "I mean that it's like the Cultural Revolution in other respects - just not those respects". Those features are so central and so salient that it's difficult to dissociate them in that way.

Relatedly, I think that comparisons to the Cultural Revolution tend to function as motte and baileys (specifically, hyperboles). They have a rhetorical punch precisely because the Cultural Revolution was so brutal. People find the analogy powerful precisely because of the associations to that brutality.

But then when you get criticised, you can retreat and say "well, I didn't mean those features of the Cultural Revolution - I just meant that there was ideological conformity, etc" - and it's more defensible to say that parts of the US have those features today.

Comment by stefan_schubert on EA considerations regarding increasing political polarization · 2020-06-21T13:17:45.999Z · score: 10 (7 votes) · EA · GW

Good point. Maybe it could be possible to convince some pundits and thought leaders to participate in such tournaments, and maybe that could make them less polarised, and have other beneficial effects.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-06-11T15:09:47.978Z · score: 7 (5 votes) · EA · GW

I wrote a blog post on utilitarianism and truth-seeking. Brief summary:

The Oxford Utilitarianism Scale defines the tendency to accept utilitarianism in terms of two factors: acceptance of instrumental harm for the greater good, and impartial beneficence.

But there is another question, which is subtly different, namely: what psychological features do we need to apply utilitarianism, and to do it well?

Once we turn to application, truth-seeking becomes hugely important. The utilitarian must find the best ways of doing good. You can only do that if you're a devoted truth-seeker.

Comment by stefan_schubert on Cause Prioritization in Light of Inspirational Disasters · 2020-06-09T08:56:24.879Z · score: 3 (3 votes) · EA · GW

I think the word "inspirational" isn't ideal either, and in fact not very different from "inspiring". And I think the title matters massively for the interpretation of an article. So I think you haven't appropriately addressed David's legitimate point. I wouldn't use "inspiring", "inspirational", or similar words.

Comment by stefan_schubert on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-28T13:23:20.468Z · score: 2 (1 votes) · EA · GW

Thanks Tobias, that's helpful.

Comment by stefan_schubert on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-27T11:27:43.794Z · score: 12 (7 votes) · EA · GW

Looks interesting, though it's pretty long, whereas the abstract is very brief and not too informative. You might get more input if you write a summary roughly the length of a standard EA Forum post.

Comment by stefan_schubert on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T22:02:12.499Z · score: 6 (3 votes) · EA · GW

Minor: some recent papers argue the death toll from the Plague of Justinian has been exaggerated.

Existing mortality estimates assert that the Justinianic Plague (circa 541 to 750 CE) caused tens of millions of deaths throughout the Mediterranean world and Europe, helping to end antiquity and start the Middle Ages. In this article, we argue that this paradigm does not fit the evidence.

https://www.pnas.org/content/116/51/25546

It concludes that the Justinianic Plague had an overall limited effect on late antique society. Although on some occasions the plague might have caused high mortality in specific places, leaving strong impressions on contemporaries, it neither caused widespread demographic decline nor kept Mediterranean populations low.

https://academic.oup.com/past/article/244/1/3/5532056

(Two authors appear on both papers.)

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-02T19:12:01.298Z · score: 3 (2 votes) · EA · GW

Thanks, makes sense.

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-31T00:14:44.442Z · score: 16 (8 votes) · EA · GW

Thanks, interesting.

1) One distinction one might want to make is between better versions of existing institutions and truly novel epistemic institutions. E.g. the Global Priorities Institute and the Future of Humanity Institute are examples of the former - a university research institute isn't a novel institution. Other examples could be better expert surveys (which already exist), better data presentation, etc. My sense is that some people who think about better institutions are too focused on entirely new institutions, while neglecting better versions of existing institutions. Building something entirely novel is often very hard, whereas it's easier to build a new version of an existing institution.

2) One mistake people who design new institutions often make is to overestimate the amount of work people want to put into their schemes. E.g. suggested new institutions like post-publication peer review and some forms of prediction institutions suffer from the fact that people don't want to invest the time in them that they need. I think that's a key consideration that's often forgotten. This may be a particular problem for certain complex decentralised institutions, which depend on freely operating individuals (i.e. people you don't employ full-time) investing time in your institution, either voluntarily or for profit. Such decentralised institutions can be theoretically attractive, but I think there is a risk that people get nerd-sniped into putting more time into theorising about them than they're worth. By contrast, I'm generally more positive about professional institutions that employ people full-time (e.g. university departments). But obviously each suggestion should be evaluated on its own merits.

3) With regard to "norms and folkways", there is a discussion in economics and the other social sciences about the relative importance of "culture" and (formal) institutions for economic growth and other desirable developments. My view is that culture and norms are often underrated relative to formal institutions. The EA community has developed a set of epistemic norms and an epistemic culture which are by and large pretty good. In fact, we have developed few formal institutions that are as valuable as those norms and that culture. That seems to me a reason to think more about how to foster better norms and a better culture, both within the EA community and outside it.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-09T15:15:13.429Z · score: 3 (2 votes) · EA · GW

Foreign Affairs discussing similar ideas:

One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.
Comment by stefan_schubert on What are the key ongoing debates in EA? · 2020-03-09T10:25:48.477Z · score: 21 (14 votes) · EA · GW

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-06T14:45:38.381Z · score: 15 (8 votes) · EA · GW

International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder: poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for, via a tax. The money would be used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?
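A rough first-pass sketch, assuming the standard Pigouvian treatment of externalities carries over: set the tax per flight equal to the expected marginal external cost. If p is the probability that a given flight contributes to seeding an outbreak, and D is the expected social cost of such an outbreak, then

t* ≈ p × D

so the hard empirical work lies in estimating p and D - and the optimal tax could be non-trivial even for a tiny p, if D is large enough.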

One might also consider whether there are other behaviours that increase the risk of pandemics that should be taxed for the same reason. Seb Farquhar, Owen Cotton-Barratt, and Andrew Snyder-Beattie already suggested that risk externalities should be priced into research with public health risks.

Comment by stefan_schubert on Activism for COVID-19 Local Preparedness · 2020-03-03T10:43:05.584Z · score: 4 (2 votes) · EA · GW

Thanks, important info.

The second link is incorrect; should be: https://threadreaderapp.com/thread/1228373884027592704.html

Comment by stefan_schubert on Illegible impact is still impact · 2020-02-18T16:03:05.531Z · score: 8 (5 votes) · EA · GW

Cf. Katja Grace's Estimation is the best we have (which was re-published in the first version of the EA Handbook, edited by Ryan Carey).

Comment by stefan_schubert on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T12:38:28.403Z · score: 7 (4 votes) · EA · GW
That some donors will be persuaded not to donate by the information is a feature, not a bug.

That isn't true as a matter of definition, as you seem to imply. Some donors being persuaded not to donate by the information can be a feature, but it can also be a bug. It has to be decided on a case-by-case basis, by looking at what the disclosure statement actually says.