Posts

Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z
Understanding cause-neutrality 2017-03-10T17:43:51.345Z
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z

Comments

Comment by Stefan_Schubert on Propose and vote on potential EA Wiki entries · 2021-07-26T18:42:08.351Z · EA · GW

Yes, I think your sense is correct.

Comment by Stefan_Schubert on Research into people's willingness to change cause *areas*? · 2021-07-25T12:58:45.023Z · EA · GW

They also view charitable donation as highly personal and subjective (e.g. a matter of personal choice)

Yeah - I think this paper also supports that.

Comment by Stefan_Schubert on Research into people's willingness to change cause *areas*? · 2021-07-24T14:31:45.922Z · EA · GW

There are some studies suggesting that people sometimes donate to less effective charities even when informed that other charities are more effective. E.g. this paper found that people prefer to donate to cancer research even when told that arthritis research is more effective. We made similar findings in this paper.

These papers just ask one-off questions, though - they don't concern whether sustained persuasion would cause people to change cause area. But they do indicate that preferences for particular cause areas often override effectiveness information.

Comment by Stefan_Schubert on Buck's Shortform · 2021-07-23T19:37:53.637Z · EA · GW

The GWWC pledge is akin to a flat tax, as opposed to a progressive tax - which applies a higher tax rate the more you earn.

I agree that there are some arguments in favour of "progressive donations".

One consideration is that extremely high "donation rates" - e.g. donating 100% of your income above a certain amount - may adversely affect incentives to earn more, depending on your motivations. But in a progressive donation system with a more moderate maximum donation rate, that would probably be less of a problem.
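
As a purely illustrative sketch - the brackets, rates, and function name here are hypothetical assumptions, not a proposal - a progressive donation schedule could work like marginal tax brackets:

```python
# Purely hypothetical brackets: (income threshold, marginal donation rate).
# As with a progressive tax, each marginal rate applies only to income
# above the corresponding threshold.
BRACKETS = [(0, 0.01), (50_000, 0.10), (150_000, 0.30)]

def progressive_donation(income: float) -> float:
    """Total donation implied by the bracket schedule above."""
    donation = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else income
        if income > threshold:
            donation += (min(income, upper) - threshold) * rate
    return donation

# Someone earning 200k donates 1% of the first 50k, 10% of the next 100k,
# and 30% of everything above 150k: 500 + 10,000 + 15,000 = 25,500.
print(progressive_donation(200_000))  # ~25500.0
```

Because only the marginal rate rises, a schedule like this never claims all of an extra unit of income, which is what keeps the incentive problem above in check.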

Comment by Stefan_Schubert on EA cause areas are just areas where great interventions should be easier to find · 2021-07-17T12:42:22.457Z · EA · GW

It would help if you provided examples.

Comment by Stefan_Schubert on What are things everyone here should (maybe) read? · 2021-07-16T14:19:03.043Z · EA · GW

Interesting. Can you say a bit more about what aspects of EA Ramsey had thought of, in your view? His views on discounting and probability?

Comment by Stefan_Schubert on Effects of anti-aging research on the long-term future · 2021-07-12T17:47:06.638Z · EA · GW

Thanks for this. Regarding moral and cultural progress, I think there is some research that suggests that this largely occurs through generational replacement.

[O]n six of the eight questions we examined—all save gay marriage and marijuana legalisation—demographic shifts accounted for a bigger share of overall movement in public opinion than changes in beliefs within cohorts. On average, their impact was about twice as large.

Regarding the selfish incentives:

Politically, dramatically increased lifespans should give people much stronger personal incentives to care about the long-term future

Potentially - but initially, lifespan extension would be much more muted than that, and would not give people particularly strong selfish incentives to care about the long-term future. My sense is that this factor would initially be swamped by the negative effects on moral progress of slower generational replacement.

Comment by Stefan_Schubert on Multilateral Lock-In · 2021-07-08T23:05:20.783Z · EA · GW

Thanks for this original post.

1. Lock-in is supposed to be highly stable. As far as I understand, your argument therefore is, or rests on, the notion that competitive dynamics between multiple agents can become highly stable. But I wonder whether that's usually the case. 

For instance, you mention the wars/competition between European countries. However, these wars eventually stopped - and currently, most European countries rather cooperate as members of the European Union. I think that we have some reason to believe that that's the default - particular competitive dynamics won't be stable, but will eventually evolve into something else. So one would like more details on what specific mechanisms would give rise to a locked-in competitive dynamic. (By contrast, it seems to me that we do have a hunch of how a powerful global autocracy could cause a lock-in - e.g. they could use advanced surveillance, meticulously control transfers of power, etc.) 

2. The post is nominally about multilateral lock-in, but it seems to me that some parts of it (e.g. section V) are concerned with demonstrating that multilateral systems have downsides in general, rather than with lock-in specifically. Though maybe I'm missing some aspect of the dialectic.

3.

But lock-in, as it is understood by EAs, contains an additional component: that the future of humanity must be locked into a highly negative end-state. 

As far as I can tell, effective altruists haven't generally seen a negative end-state as part of the definition of "lock-in". It seems possible to be locked into a positive end-state.

4. 

> [U]nless we have good reason to assume a selective process is heavily biased towards desirable states, we ought to assume that it will produce undesirable states.

I guess that sometimes we do have such reasons. E.g. the selection process may be biased towards wealth (since wealth is useful in competition) or towards making your country attractive to migrants from competitors (thereby typically making it attractive to natives as well).

Comment by Stefan_Schubert on [deleted post] 2021-07-08T19:26:51.309Z

That seems too broad - this is a more specific topic.

Comment by Stefan_Schubert on [deleted post] 2021-07-08T19:26:34.245Z

Similar phrases (e.g. "income and happiness", "income inequality and happiness") do generate a fair number of hits.

"The relationship between giving and happiness" is another possibility.

Comment by Stefan_Schubert on [deleted post] 2021-07-08T15:00:10.754Z

Thanks, makes sense.

Comment by Stefan_Schubert on What should we call the other problem of cluelessness? · 2021-07-03T21:55:05.313Z · EA · GW

Yeah, I agree that one would need to add some adjective (e.g. "total" or "radical") to several of these.

"Unknowability" sounds good at first glance; I'd need to think about use cases.

I see now that you made the agent/decision-situation distinction that I also made above. I do think it counts in favour of "unknowable" that it puts the emphasis on the decision situation.

Comment by Stefan_Schubert on What should we call the other problem of cluelessness? · 2021-07-03T21:51:48.123Z · EA · GW

Yeah, I'm unsure. I think that the term "clueless" is usually used to refer to people who are incompetent (cf. the synonyms). (That's why they have no knowledge.) But in this case we don't lack knowledge because we're incompetent, but because the task at hand is hard. And one might consider using a term or phrase that implies that. But there are pros and cons of all candidates.

Comment by Stefan_Schubert on What should we call the other problem of cluelessness? · 2021-07-03T21:13:23.689Z · EA · GW

I agree that that shouldn't be the main strategy. But my sense is that this issue isn't a disadvantage of using a term like "predictability" or a synonym.

I think one advantage of such a term is that it relates to major areas of research that many people know about.

Another term is "uncertainty"; cf. "radical uncertainty".

Comment by Stefan_Schubert on What should we call the other problem of cluelessness? · 2021-07-03T20:20:08.766Z · EA · GW

I agree that this distinction is important and that it would be good to have two terms for these different concepts.

I see the motivation for terms like "weak cluelessness" or "the practical problem of cluelessness". To me it sounds slightly odd to use the word "clueless" for (2), however, given the associations that word has (cf. Cambridge dictionary).

(1) is not a gradable concept - if we're clueless, then in Hilary Greaves' words, we "can never have even the faintest idea" which of two actions is better.

(2), on the other hand, is a gradable concept - it can be more or less difficult to find the best strategies. Potentially it would be good to have a term that is gradable, for that reason.

One possibility is something relating to (un)predictability or (un)foreseeability. That has the advantage that it relates to forecasting. 

(Note that absolute cluelessness can also be expressed in terms of (un)predictability - you can say that it's totally unpredictable which strategies have the highest impact.)

Comment by Stefan_Schubert on [deleted post] 2021-06-16T08:26:58.933Z

No, I haven't removed any references, but I agree that it's better to remove references that aren't directly related to EA.

I think it would be good if this article was integrated with the psychology of effective altruism article.

Here is a potential alternative article.

"Moral psychology is the study of how people think and feel about moral issues. It is a field of study in both philosophy and psychology, and covers many topics, including childhood moral development, how people reason about moral issues, and the evolutionary roots of morality.

Effective altruists have taken a special interest in some applied topics in moral psychology. They include the psychology of effective giving (Caviola et al. 2014; Caviola, Schubert & Nemirow 2020; Burum, Nowak & Hoffman 2020; Caviola, Schubert & Greene 2021); the psychology of existential risk (Schubert, Caviola & Faber 2019), and the psychology of speciesism (Caviola 2019; Caviola, Everett & Faber 2019; Caviola & Capraro 2020). See psychology of effective altruism for more details."

Note that I've just copy-pasted the penultimate sentence from the psychology of effective altruism article (I also cut some of it in order not to make that sentence overly long).

I included some Wikipedia links in the first paragraph; if you don't think that's a good idea, then please remove them.

Comment by Stefan_Schubert on [deleted post] 2021-06-15T14:56:46.012Z

I think one naturally takes "Constraints on effective altruism" to concern principled or otherwise permanent constraints on effective altruism (cf. moral side-constraints), whereas this article actually seems to concern temporary bottlenecks, such as funding, talent, or vetting.

Alternatives could be "Constraints within the effective altruism community" or "Constraints within effective altruism" ("Constraints in effective altruism" is another possibility - I see now that Pablo mentioned that). Or one could try to find an alternative term to "constraints" - maybe there is a term, e.g. in economics.

Comment by Stefan_Schubert on [deleted post] 2021-06-15T14:44:23.430Z

Some of the topics and papers referred to here don't seem to have a very direct relationship with effective altruism. Should such topics be included, or should these entries focus on topics more directly related to effective altruism?

Comment by Stefan_Schubert on Propose and vote on potential EA Wiki entries · 2021-06-08T16:03:30.936Z · EA · GW

Fwiw, I think that "scalably using labour" doesn't sound quite like a wiki entry. I find virtually no article titles including the term "using" on Wikipedia.

If one wants to retain the concept, I think that "Large-scale use of labour" or something similar would be better. There are many Wikipedia article titles including the term "use of [noun]". (Potentially nouns are generally better than verbs in Wikipedia article titles? Not sure.)

Comment by Stefan_Schubert on [deleted post] 2021-06-07T20:47:11.155Z

Potentially this entry could include a discussion of Future Perfect and how it was launched to cover effective altruist ideas and causes.

Comment by Stefan_Schubert on Propose and vote on potential EA Wiki entries · 2021-06-07T16:48:28.790Z · EA · GW

Effective Altruism on Facebook and Effective Altruism on Twitter (and more - maybe Goodreads, Instagram, LinkedIn, etc). Alternatively Effective Altruism on Social Media, though I probably prefer tags/entries on particular platforms.

A few relevant articles:

https://forum.effectivealtruism.org/posts/8knJCrJwC7TbhkQbi/ea-twitter-job-bots-and-more

https://forum.effectivealtruism.org/posts/6aQtRkkq5CgYAYrsd/ea-twitterbot

https://forum.effectivealtruism.org/posts/mvLgZiPWo4JJrBAvW/longtermism-twitter

https://forum.effectivealtruism.org/posts/BtptBcXWmjZBfdo9n/ea-facebook-group-greatest-hits-top-50-posts-by-total

Multiple articles about Giving Tuesday.


Also, quite a lot of EA discussion takes place, and has taken place, on Twitter and Facebook; there are many EA Facebook groups, etc. It therefore seems natural to have entries on EA Twitter and EA Facebook.

Comment by Stefan_Schubert on Propose and vote on potential EA Wiki entries · 2021-06-05T10:13:18.682Z · EA · GW

Vetting constraints dovetails nicely with talent vs. funding constraints. I'm not totally convinced by the scalably using labour entry, though. One possibility would be to just replace it by a vetting constraints entry. Alternatively, it could be retained but renamed/reconceptualised.

Comment by Stefan_Schubert on Propose and vote on potential EA Wiki entries · 2021-06-05T09:58:15.085Z · EA · GW

I agree that humanities disciplines tend to be less EA-relevant than the social sciences. But I think that the humanities are quite heterogeneous, so it feels more natural to me to have entries for particular humanities disciplines, than humanities as a whole.

But I'm not sure any such entries are warranted; it depends on how much has been written.

Comment by Stefan_Schubert on [deleted post] 2021-06-05T09:50:32.196Z

Actually, Googling "ethics of existential risk" does yield a fair number of hits at FHI, 80,000 Hours, etc. So I think calling it that isn't at risk of being original research.

Regarding your last paragraph, I think that it's in general a good idea if people flag on the Discussion page when they want to make big and non-obvious edits or additions (the threshold can be discussed). But that's a more general issue (doesn't just pertain to edits that could be seen as original research). I don't have a clear sense of exactly how it should be done, though.

Comment by Stefan_Schubert on Constructive Criticism of Moral Uncertainty (book) · 2021-06-04T23:40:09.852Z · EA · GW

That conclusion doesn't necessarily have to be as pessimistic as you seem to imply ("we do what is most convenient to us"). An alternative hypothesis is that people to some extent do want to do the right thing, and are willing to make sacrifices for it - but not large sacrifices. So when the bar is lowered, we tend to act more on those altruistic preferences. Cf. this recent paper:

[Subjective well-being] mediates the relationship between two objective measures of well-being (wealth and health) and altruism...results indicate that altruism increases when resources and cultural values provide objective and subjective means for pursuing personally meaningful goals.

Comment by Stefan_Schubert on MichaelA's Shortform · 2021-06-04T15:58:16.643Z · EA · GW

Aron Vallinder has put together a comprehensive bibliography on the psychology of the future.

Comment by Stefan_Schubert on [deleted post] 2021-06-04T11:55:01.764Z

Yes, I think I prefer that (see my subsequent comment).

Comment by Stefan_Schubert on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T10:41:09.865Z · EA · GW

Fwiw, judging by the work that's been done so far, I think the EA Wiki is very promising.

Comment by Stefan_Schubert on [deleted post] 2021-06-04T10:37:57.222Z

Thanks! Yeah, I get that it may look slightly clunky, but I also agree that that's outweighed by the advantages of sounding more formal.

Comment by Stefan_Schubert on [deleted post] 2021-06-04T10:08:44.969Z

Great! Or just "ethics of existential risk".

Also, my hunch is that "existential risk" is better than "x-risk" in Wiki articles, since I think the Wiki should have a somewhat formal tone.

Comment by Stefan_Schubert on [deleted post] 2021-06-04T08:53:59.089Z

How about "ethics of existential risk reduction"?

"Ethics of X" is a standard phrase.

Comment by Stefan_Schubert on Exporting EA discussion norms · 2021-06-01T14:41:39.552Z · EA · GW

I guess you could see, e.g., Julia Galef's The Scout Mindset as doing that, in part.

Comment by Stefan_Schubert on [deleted post] 2021-06-01T13:51:48.783Z

More orgs that could be added (I don't have a strong view; please decide as you see fit):

Alignment Research Center

The Jewish Effective Giving Initiative

Anthropic

The Quantified Uncertainty Research Institute

Metaculus

Comment by Stefan_Schubert on [deleted post] 2021-06-01T12:37:40.999Z

I agree that "infodemics" is too jargony. I think the same is true of "epistemic hygiene".

Comment by Stefan_Schubert on [deleted post] 2021-05-31T20:56:14.337Z

HIPE, WANBAM, CEEALAR, and Legal Priorities Project may fit better under Infrastructure. And Sentience Politics under Animal Advocacy.

Comment by Stefan_Schubert on [deleted post] 2021-05-31T20:50:41.327Z

Agree - I've now moved it to "others" (it seems to go well with Our World in Data, CFAR, etc.).

I also suggest that the "Far future" heading be called "The long-term future" (the relevant EA fund has already undergone that name change, and more generally "the long-term future" seems to have replaced "the far future").

Comment by Stefan_Schubert on [deleted post] 2021-05-25T12:59:44.330Z

I would probably also prefer 1, at least initially. Maybe you could have separate sections on the different concepts. That should make it easy to split the article into several, if that seems warranted.

Comment by Stefan_Schubert on [deleted post] 2021-05-23T13:01:50.922Z

My hunch is that an entry on the broader phenomenon may be better, unless there is more on disinformation specifically than I suspect.

"Epistemic norms" could be one option, though maybe it would not cover everything that you have in mind.

Comment by Stefan_Schubert on [deleted post] 2021-05-21T13:47:49.453Z

Right. Here's one way to think about it. There's a simple model according to which you divide your resources into two buckets:

a) X resources that you use for yourself. 

You can use them however you like (though presumably only as long as you don't harm others, follow the law, etc.). You don't consider the interests of others when you're using these resources.

b) 1-X resources that you use for others.

You're not considering your own interests when you're using these resources.

But I take it that Hanson is saying that sometimes when you're using resources for yourself, there are opportunities to help others greatly at relatively small cost for yourself. If you take those opportunities, then your actions effectively have mixed motives - they are partly selfishly motivated, and partly altruistically motivated.

(Note that the converse also holds - sometimes when you're helping others, you have opportunities to substantially benefit yourself at a small altruistic cost.)

You could create a more advanced version of the "budgeting for yourself and for others" model, where each action is classified on a continuum from 0% selfish/100% altruistic to 100% selfish/0% altruistic. So if an action that costs Y resources is 70% selfish and 30% altruistic, you've used up 0.7Y of the selfish resources and 0.3Y of the altruistic resources. The total amount that you budget for yourself could remain at X - the only thing that has changed is that you can use specific resources in a hybrid way.
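
To make that accounting concrete, here is a minimal sketch of the hybrid model - all numbers and names are illustrative assumptions:

```python
# A minimal sketch of the hybrid "budgeting for yourself and others" model.
# All numbers and names are illustrative assumptions.

def spend(cost, selfish_share, budgets):
    """Deduct a hybrid action's cost from the two budgets.

    selfish_share is the fraction of the action that is selfishly motivated,
    e.g. 0.7 for a 70% selfish / 30% altruistic action.
    """
    budgets["selfish"] -= cost * selfish_share
    budgets["altruistic"] -= cost * (1 - selfish_share)
    return budgets

# As in the simple model: X resources for yourself, 1 - X for others.
X = 0.8
budgets = {"selfish": X, "altruistic": 1 - X}

# An action costing Y = 0.1 that is 70% selfish and 30% altruistic uses up
# 0.7Y = 0.07 of the selfish budget and 0.3Y = 0.03 of the altruistic budget.
spend(0.1, 0.7, budgets)
print(budgets)  # ~{'selfish': 0.73, 'altruistic': 0.17}
```

The simple model then falls out as the special case where every action is either 100% selfish or 100% altruistic.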

It seems tricky to put percentages on these hybrid actions, however. The simple model is much more straightforward, which is indeed an advantage.

Comment by Stefan_Schubert on [deleted post] 2021-05-21T00:21:32.889Z

The Knobe effect may give some support to Simler and Hanson's speculation. It says that while bad side-effects tend to be judged to have been brought about intentionally, good side-effects tend to be judged to have been brought about unintentionally. Marginal charity may be perceived as a (good) side-effect, and as such as unintentional.

Comment by Stefan_Schubert on [deleted post] 2021-05-20T19:04:07.013Z

There is a longer bibliography here. Potentially that bibliography, and/or some of the entries, could be included.

Comment by Stefan_Schubert on [deleted post] 2021-05-20T13:57:24.500Z

As discussed in the "ethics of personal consumption" entry, some have suggested that we should divide our resources into a "budget for ourselves" and a "budget for others". At least on one interpretation, that is in some tension with the notion of marginal charity - which says that you can sometimes have an outsized impact by shifting your selfishly motivated actions (part of the "budget for yourself") in a prosocial direction. Considerations of marginal charity suggest that we should be alert to altruistic opportunities even when using the resources that we've budgeted for ourselves. Potentially this should be briefly pointed out.

Comment by Stefan_Schubert on [deleted post] 2021-05-20T13:46:33.967Z

Most members of the community budget reasonable portions of their income for themselves, to stay motivated, prevent burnout, and increase productivity (Kaufman, 2013).

The link here is supposed to lead to an EA concepts page called "budgeting for yourself and others". However, when one clicks on that link, one is redirected to this page ("ethics of personal consumption"). So the link seems superfluous.

Comment by Stefan_Schubert on What are things everyone here should (maybe) read? · 2021-05-20T13:32:26.823Z · EA · GW

I would focus on reading key EA content (including cause-specific content, depending on what cause they choose).

E.g. if they're longtermists, I'd say they should read Superintelligence and much of Bostrom's other output, The Precipice, various articles and blog posts on AI timelines, and so on. 

For more general EA concepts and ideas I'd refer them to Doing Good Better, the new Wiki, and/or a few online talks like Owen's "Prospecting for Gold".

Some of the other recommendations in this thread are not as directly related to EA. While I generally like them, they don't seem as key to me as those that I've listed.

Comment by Stefan_Schubert on Ben_Snodin's Shortform · 2021-05-19T11:56:21.977Z · EA · GW

When everyone knows that there’s a basically solid argument for only donating to effective charities if you want to benefit others, when people donate to ineffective charities it’ll transparently be due to selfish motives.

I'm not sure that's necessarily true. People may have motives for donating to ineffective charities that are better characterised as moral but not welfare-maximising (special obligations, expressing a virtue, etc).

Also, if everyone knows that there's a solid argument for only donating to effective charities, then it seems that one would suffer reputationally for donating to ineffective charities. That may, in a sense, rather provide people with a selfish motive to donate to effective charities, meaning that we might expect donations to ineffective charities to be due to other motives.

Comment by Stefan_Schubert on [deleted post] 2021-05-18T15:28:22.474Z

Looking at the reference list, it's noteworthy that there aren't more articles that introduce the Long reflection more systematically and in greater detail. I think that such an article would be good.

That's another use case for this wiki: identifying gaps in the EA literature.

Comment by Stefan_Schubert on SeanEngelhart's Shortform · 2021-05-14T20:51:03.809Z · EA · GW

CEA has people working on this. See, e.g., this article.

Comment by Stefan_Schubert on [deleted post] 2021-05-14T00:51:30.181Z

Thanks, that sounds good.

Comment by Stefan_Schubert on Launching the EAF Fund · 2021-05-13T17:43:51.350Z · EA · GW

I'm wondering a bit about this definition. One interpretation of it is that you're saying something like this:

"The expected future suffering is X. The risk that event E occurs is an S-risk if and only if E occurring raises the expected future suffering significantly above X."

But I think that definition doesn't work. Suppose that it is almost certain (99.9999999%) that a particular event E will occur, and that it would cause a tremendous amount of suffering. Then the expected future suffering is already very large (if I understand that concept correctly). And, because E is virtually certain to occur, its occurrence will not actually bring about suffering in cosmically significant amounts relative to expected future suffering. And yet intuitively this is an S-risk, I'd say.

Another interpretation of the definition is:

"The expected future suffering is X. The risk that event E occurs is an S-risk if and only if the difference in suffering between E occurring and E not occurring is significant relative to X."

That does take care of that issue, since, by hypothesis, the difference between E occurring and E not occurring is a tremendous amount of suffering.
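
In rough notation (mine, not the original post's), with S denoting total future suffering, the two readings could be put as follows:

```latex
% Reading 1: E is an S-risk iff it raises expected suffering significantly
% above the unconditional expectation. This fails when P(E) is close to 1,
% since then E[S | E] is already close to E[S].
E \text{ is an S-risk} \iff \mathbb{E}[S \mid E] \gg \mathbb{E}[S]

% Reading 2: E is an S-risk iff the difference E makes is significant
% relative to the unconditional expectation.
E \text{ is an S-risk} \iff \mathbb{E}[S \mid E] - \mathbb{E}[S \mid \neg E]
  \text{ is significant relative to } \mathbb{E}[S]
```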

Alternatively, you may want to say that the risk that E occurs is an S-risk if and only if E occurring brings about a significant amount of suffering relative to what we expect to occur from other causes. That may be a more intuitive way of thinking about this.

A feature of this definition is that the risk of an event E1 occurring can be an S-risk even if its occurrence would cause much less suffering than another event E2 would, provided that E1 is much more likely to occur than E2. But if we increase our credence that E2 will occur, then the risk of E1 occurring will cease to be an S-risk, since it will no longer cause a significant amount of suffering relative to expected future suffering.

I guess that some would find that unintuitive, and would say that whether something is an S-risk shouldn't depend on how we adjust our credences in independent events in this way. But it depends a bit on what perspective you have.

Comment by Stefan_Schubert on RyanCarey's Shortform · 2021-05-12T11:42:29.030Z · EA · GW

I don't have a view on the level of moderation in general, but I think that warning Halstead was incorrect. I suggest that the warning be retracted.

It also seems out of step with what the forum users think - at the time of writing, the comment in question has 143 Karma (56 votes).