Posts

Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z · score: 12 (2 votes)
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z · score: 15 (15 votes)
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z · score: 12 (12 votes)
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z · score: 15 (23 votes)
Understanding cause-neutrality 2017-03-10T17:43:51.345Z · score: 14 (13 votes)
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z · score: 4 (4 votes)
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z · score: 13 (10 votes)
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z · score: 5 (7 votes)
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z · score: 16 (16 votes)
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z · score: 7 (7 votes)
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z · score: 11 (11 votes)

Comments

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-02T19:12:01.298Z · score: 3 (2 votes) · EA · GW

Thanks, makes sense.

Comment by stefan_schubert on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-31T00:14:44.442Z · score: 12 (5 votes) · EA · GW

Thanks, interesting.

1) One distinction one might want to make is between better versions of existing institutions and truly novel epistemic institutions. E.g. the Global Priorities Institute and the Future of Humanity Institute are examples of the former - a university research institute isn't a novel institution. Other examples could be better expert surveys (which already exist), better data presentation, etc. My sense is that some people who think about better institutions are too focused on entirely new institutions, while neglecting better versions of existing institutions. Building something entirely novel is often very hard, whereas it's easier to build a new version of an existing institution.

2) One mistake people who design new institutions often make is to overestimate the amount of work people want to put into their schemes. E.g. suggested new institutions like post-publication peer review and some forms of prediction institutions suffer from the fact that people don't want to invest the time in them that they require. I think that's a key consideration that's often forgotten. This may be a particular problem for certain complex decentralised institutions, which depend on freely operating individuals (i.e. whom you don't employ full-time) investing time in your institution, either voluntarily or for profit. Such decentralised institutions can be theoretically attractive, and I think there is a risk that people get nerd-sniped into putting more time into theorising about such institutions than they're worth. By contrast, I'm generally more positive about professional institutions that employ people full-time (e.g. university departments). But obviously each suggestion should be evaluated on its own merits.

3) With regards to "norms and folkways", there is a discussion in economics and the other social sciences about the relative importance of "culture" and (formal) institutions for economic growth and other desirable developments. My view is that culture and norms are often under-rated relative to formal institutions. The EA community has developed a set of epistemic norms and an epistemic culture which is by and large pretty good. In fact, we have arguably not developed many formal institutions that are as valuable as those norms and that culture. That seems to me a reason to think more about how to foster better norms and a better culture, both within the EA community and outside it.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-09T15:15:13.429Z · score: 3 (2 votes) · EA · GW

Foreign Affairs discussing similar ideas:

One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.
Comment by stefan_schubert on What are the key ongoing debates in EA? · 2020-03-09T10:25:48.477Z · score: 19 (12 votes) · EA · GW

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2020-03-06T14:45:38.381Z · score: 15 (8 votes) · EA · GW

International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder; poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for, via a tax. The money would be used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?

One might also consider whether there are other behaviours that increase the risk of pandemics and that should be taxed for the same reason. Seb Farquhar, Owen Cotton-Barratt, and Andrew Snyder-Beattie have already suggested that risk externalities should be priced into research with public health risks.

Comment by stefan_schubert on Activism for COVID-19 Local Preparedness · 2020-03-03T10:43:05.584Z · score: 4 (2 votes) · EA · GW

Thanks, important info.

The second link is incorrect; should be: https://threadreaderapp.com/thread/1228373884027592704.html

Comment by stefan_schubert on Illegible impact is still impact · 2020-02-18T16:03:05.531Z · score: 8 (5 votes) · EA · GW

Cf. Katja Grace's Estimation is the best we have (which was re-published in the first version of the EA Handbook, edited by Ryan Carey).

Comment by stefan_schubert on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T12:38:28.403Z · score: 7 (4 votes) · EA · GW
That some donors will be persuaded not to donate by the information is a feature, not a bug.

That isn't true as a matter of definition, as you seem to imply. Some donors being persuaded not to donate by the information can be a feature, but it can also be a bug. It has to be decided on a case-by-case basis, by looking at what the disclosure statement actually says.

Comment by stefan_schubert on [Notes] Steven Pinker and Yuval Noah Harari in conversation · 2020-02-09T19:10:55.201Z · score: 3 (2 votes) · EA · GW

Thanks for this. Minor: should be Steven Pinker, not Stephen.

Comment by stefan_schubert on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-21T20:48:50.350Z · score: 13 (8 votes) · EA · GW

Sometimes the term "the Gricean maxims" (or "Grice's maxims") is used instead of "the Cooperative Principle" as the principal term. I personally find it more memorable, since "the Cooperative Principle" could mean so many things.

Comment by stefan_schubert on Khorton's Shortform · 2020-01-21T00:27:07.119Z · score: 4 (3 votes) · EA · GW

Point 3 was discussed here. My impression of that discussion is that many forum readers thought it important that one familiarises oneself with the literature before commenting. As I say in my comment, that's certainly my view.

I agree that too many EA Forum posts fail to appropriately engage with relevant literature.

Comment by stefan_schubert on Growth and the case against randomista development · 2020-01-16T10:17:58.360Z · score: 4 (2 votes) · EA · GW

Thanks for this! You might want to make clearer who the authors are; I take it that John Halstead is a co-author, but his last name doesn't appear as far as I can see.

Comment by stefan_schubert on Khorton's Shortform · 2020-01-15T01:03:51.116Z · score: 4 (2 votes) · EA · GW

In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. I think that reasoning can go too far, however: if that argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even while in some respects imperfect.

Comment by stefan_schubert on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T15:36:07.190Z · score: 4 (3 votes) · EA · GW

Which actors do you think one should try to influence to make sure that a potential transition to a world with AGI goes well (e.g. so that it leads to widely shared benefits)? For instance, do you think one should primarily focus on influencing private companies or governments? I'd be interested in learning more about the arguments for whatever conclusions you have. Thanks!

Comment by stefan_schubert on In praise of unhistoric heroism · 2020-01-08T11:38:31.574Z · score: 9 (6 votes) · EA · GW

A recent book discusses the evolutionary causes of "bad feelings", and to what extent they have instrumental benefits: Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry.

Comment by stefan_schubert on The Center for Election Science Year End EA Appeal · 2020-01-06T21:25:30.252Z · score: 6 (3 votes) · EA · GW

Maybe this discussion is a bit tangential to The Center for Election Science's fund-raising.

Comment by stefan_schubert on List of EA-related email newsletters · 2020-01-06T12:15:03.538Z · score: 4 (2 votes) · EA · GW

A new newsletter of potential interest: Reasonable People, by cognitive scientist Tom Stafford.

The plan is to collect in one place things I write on human rationality, reason and persuasion, sharing links and evidence on these topics as I try and understand advertising, bias, misinformation, influence and decision making.
Comment by Stefan_Schubert on [deleted post] 2020-01-03T23:54:49.387Z

Was posted here.

Comment by stefan_schubert on Thoughts on doing good through non-standard EA career pathways · 2019-12-30T10:48:08.474Z · score: 48 (23 votes) · EA · GW

Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people's judgements. Sometimes I think I've observed the following dynamic.

1. It's argued, more or less explicitly, that EAs should switch career into one of a small number of causes.

2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.

3. The preceding point means that there is a painful tension between the desire to do the most good, and one's personal career prospects. There is a strong desire to resolve that tension.

4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn't need to engage in tough trade-offs to do the most good. One can stay on doing roughly what one currently does.

5. The EAs who believe in point 1 - that EAs should switch career to other causes - are often unwilling to criticise the reasoning described in 4. That's because these issues are rather emotional and personal, and because some may think it's insensitive to criticise people's personal career choices.


I think similar dynamics play out with regards to cause prioritisation more generally, decisions about whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people often are quite emotional about their choice, and therefore reluctant to give up on it even in the face of contrary evidence, and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.


One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (e.g. obviously without snark); and to do so in some detail. That's precisely what this post does.

Comment by stefan_schubert on 8 things I believe about climate change · 2019-12-29T10:33:50.484Z · score: 3 (2 votes) · EA · GW

I agree that consensus is unlikely regarding AI safety, but what I meant was rather that it's useful when individuals make clear claims about difficult questions, and that's possible whether or not others agree with them. In AI Impacts' interview series, such claims are made (e.g. here: https://aiimpacts.org/conversation-with-adam-gleave/).

Comment by stefan_schubert on Aligning Recommender Systems as Cause Area · 2019-12-29T09:48:20.485Z · score: 6 (4 votes) · EA · GW

A new article (referring to this new paper) argues that the New York Times' claims about algorithmic radicalization are flawed (the OP links to a NYT article on such issues):

By looking at recommendation flows between various political orientations and subcultures, we show how YouTube’s late 2019 algorithm is not a radicalization pipeline, but in fact:
- Removes almost all recommendations for conspiracy theorists, provocateurs and white Identitarians
- Benefits mainstream partisan channels such as Fox News and Last Week Tonight
- Disadvantages almost everyone else

Comment by stefan_schubert on 8 things I believe about climate change · 2019-12-28T11:00:13.982Z · score: 15 (8 votes) · EA · GW

Thanks for this. I think it's valuable when well-informed EAs make easily interpretable claims about difficult questions (another such question is AI risk). This post (including the "appendices" in the comments) strikes a good balance; it is epistemically responsible, yet has clear conclusions.



Comment by stefan_schubert on More info on EA Global admissions · 2019-12-27T15:23:13.557Z · score: 11 (4 votes) · EA · GW

You don't have to provide a complete ranking of candidates. You only have to decide which candidates to accept and which not to in the bucket that you would prefer to randomise. And it seems to me that such decisions could in principle be made extremely quickly, particularly since you must already have assimilated some information about the candidates in order to put them in the right bucket. (Speed probably affects quality adversely, but I still think some signal would remain.)

Comment by stefan_schubert on More info on EA Global admissions · 2019-12-27T13:40:36.219Z · score: 14 (5 votes) · EA · GW

If time is an issue, organisers can make quick snap judgements. It's not clear to me that randomisation would be much faster, particularly since you have to make a first rough scoring under your approach anyway. And it seems reasonable, in my view, to think that organisers are better than chance at picking the better applicants, even when using snap judgements, and even among applicants in the same bucket.

Comment by stefan_schubert on aarongertler's Shortform · 2019-12-27T10:32:20.587Z · score: 2 (1 votes) · EA · GW

Could the option to strongly upvote one's own comments (and posts, in case you remove the automatic strong upvotes on posts) be disabled, as discussed here? Thanks.

Comment by stefan_schubert on Max_Daniel's Shortform · 2019-12-17T13:30:04.599Z · score: 20 (10 votes) · EA · GW

Awesome post, Max, many thanks for this. I think it would be good if these difficult questions were discussed more on the forum by leading researchers like yourself.

I think you should post this as a normal post; it's far too good and important to be hidden away on the shortform.

Comment by stefan_schubert on EA Updates for November 2019 · 2019-11-29T14:17:14.932Z · score: 5 (3 votes) · EA · GW

Great stuff. In one place it says "Natalia Cargill"; should be "Natalie Cargill".

Comment by stefan_schubert on A list of EA-related podcasts · 2019-11-28T00:32:56.539Z · score: 2 (3 votes) · EA · GW

Yes, agree that it would have been natural to include hyperlinks in this otherwise very helpful post.

Pablo's list does include links.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-20T11:33:00.382Z · score: 3 (2 votes) · EA · GW

Marginal Revolution:

Due to a special grant, there has been a devoted tranche of Emergent Ventures to individuals, typically scholars and public intellectuals, studying the nature and causes of progress.

Nine grantees, including one working on X-risk:

Leopold Aschenbrenner, 17 year old economics prodigy, to spend the next summer in the Bay Area and for general career development. Here is his paper on existential risk.
Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-18T14:09:10.855Z · score: 2 (1 votes) · EA · GW

Eric Schwitzgebel:

We Might Soon Build AI Who Deserve Rights
Talk for Notre Dame, November 19:
Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.
Comment by stefan_schubert on AGI safety and losing electricity/industry resilience cost-effectiveness · 2019-11-17T17:48:00.389Z · score: 2 (1 votes) · EA · GW

Yes, it seems there are recurrent problems with uploading images (cf.).

Comment by stefan_schubert on Institutions for Future Generations · 2019-11-16T13:42:41.461Z · score: 8 (5 votes) · EA · GW
...its settlement value will be based on the degree to which 2119 people approve of the actions of people in the 2019-2119 timespan, as determined by a standardised survey - say, on a scale from 0 to 10.

A potential risk is that people might not be very good at assessing whether the last century's actions/policies have, on average, been good for them or not. To study that risk one could run such surveys today, testing whether people in different countries approve of the actions of people (in their country) in the 1919-2019 time span. Then one could match those survey results against expert judgements of how well different countries have been run during that period. (The experts aren't necessarily right, but agreement or disagreement with the experts should still give some evidence.)

Comment by stefan_schubert on Institutions for Future Generations · 2019-11-16T13:28:36.072Z · score: 2 (1 votes) · EA · GW

I agree that some institutions will do both. I'm not sure, though, that age-weighted voting will do much to change voters' tendency, weighted by voting power, to seek good information about the future.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-13T21:33:46.153Z · score: 3 (2 votes) · EA · GW

"Veil-of-ignorance reasoning favors the greater good", by Karen Huang, Joshua Greene, and Max Bazerman (all at Harvard).

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision making by denying decision makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here, we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across 7 experiments (n = 6,261), 4 preregistered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by anchoring, probabilistic reasoning, or generic perspective taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision makers who wish to make more impartial and/or socially beneficial choices.
Comment by stefan_schubert on Institutions for Future Generations · 2019-11-12T13:40:11.144Z · score: 6 (5 votes) · EA · GW

One distinction one might make is between institutions that:

a) Generate knowledge about how to help future generations effectively.

b) Give more power to people who want to help future generations, or whose task is to help future generations.

Using a belief-preference framework, one might say that a) generates true beliefs (and corrects false beliefs), whereas b) effectively makes the government's preferences more future-oriented.

An In-government Think Tank would be an example of a), and age-weighted voting an example of b). Some of the other institutions may be mixes, with both components.

Impartiality with respect to time is often compared with impartiality with respect to gender, ethnicity, etc. However, it seems to me that there is an important policy disanalogy, namely that it's probably more difficult to know how to advance the interests of future generations than to know how to advance the interests of an underprivileged gender or ethnic group (even though the latter isn't trivial either). There's a risk that many policies that people might advocate for the sake of future generations aren't especially effective. One upshot of that is that when it comes to helping future generations, institutions that generate more knowledge may be unusually important.

Comment by stefan_schubert on Deliberation May Improve Decision-Making · 2019-11-07T07:27:14.268Z · score: 3 (2 votes) · EA · GW

Thanks for your response!

Comment by stefan_schubert on Formalizing the cause prioritization framework · 2019-11-06T17:41:47.614Z · score: 2 (1 votes) · EA · GW

Now they load.

Comment by stefan_schubert on Formalizing the cause prioritization framework · 2019-11-06T16:16:58.009Z · score: 2 (1 votes) · EA · GW

I still can't see them. This is what it looks like now.

As mentioned here, copying images from a Google Doc and pasting them in seems to work reliably.

It would be good if there were more visible guides on how to post, as discussed in that thread.

Comment by stefan_schubert on Formalizing the cause prioritization framework · 2019-11-05T19:02:44.819Z · score: 7 (5 votes) · EA · GW

I think some images don't display. This is what it looks like for me:

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-05T15:38:51.226Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by stefan_schubert on We should choose between moral theories based on the scale of the problem · 2019-11-05T12:51:49.645Z · score: 3 (2 votes) · EA · GW

Thanks, Darius. I would advise the OP to read up on this literature. As you say, this has been extensively discussed.

Comment by stefan_schubert on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-05T12:41:17.226Z · score: 31 (13 votes) · EA · GW

I agree that the epistemic dynamics of discussions about the EA Hotel aren't optimal. I would guess that there are selection effects; that critics aren't heard to the same extent as supporters.

Relatedly, the amount of discussion about the EA Hotel relative to other projects may be a bit disproportionate. It's a relatively small project, but there are lots of posts about it (see OP). By contrast, there is far less discussion about larger EA orgs, large OpenPhil grants, etc. That seems a bit askew to my mind. One might wonder about the cost-effectiveness of relatively long discussions about small donations, given opportunity costs.

Comment by stefan_schubert on Deliberation May Improve Decision-Making · 2019-11-05T10:25:57.434Z · score: 7 (6 votes) · EA · GW

I'm not an expert, but my impression is that some experts are more critical of deliberative democracy. For instance, Jason Brennan argued in his recent book Against Democracy that:

...deliberation often stultifies or corrupts us, that it often exacerbates our biases and leads to greater conflict.

Like Matt_Lerner, I wonder how you selected what evidence to cite, and whether the side that is more sceptical of deliberative democracy got a fair hearing.

With regards to this statement:

Empirical research shows that both politicians and average citizens have the capacity to deliberate when institutions are appropriate.

That seems to depend on what standards you have for "capacity to deliberate". At one point you use the phrase "rigorous analytic reasoning", and depending on what cut-off point one has for that, one might argue that capacity for such reasoning isn't that common.

A recent Swedish paper showed that politicians are "on average significantly smarter and better leaders than the population they represent". To the extent that that is true, politicians may be better at deliberating than the general public. I haven't looked at other countries, however.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-11-04T13:06:14.025Z · score: 6 (4 votes) · EA · GW

New paper in Personality and Individual Differences finds that:

Timegiving behaviors (i.e. caregiving, volunteering, giving support) and prosocial traits were associated with a lower mortality risk in older adults, but giving money was not.
Comment by stefan_schubert on Americans give ~4%, not 2% · 2019-11-03T22:44:55.439Z · score: 5 (3 votes) · EA · GW

Thanks. The graph you link to (the sentence that starts "Here") is interesting. (I take it that this is the one you mean?)

Possibly you could have highlighted that more in your post.

Comment by stefan_schubert on Attempt at understanding the role of moral philosophy in moral progress · 2019-10-31T11:04:11.208Z · score: 3 (2 votes) · EA · GW

A new Slate Star Codex post on the history of the New Atheism movement may be of interest.

Comment by stefan_schubert on Attempt at understanding the role of moral philosophy in moral progress · 2019-10-28T17:54:18.846Z · score: 8 (5 votes) · EA · GW

The question "to what extent did a specific moral philosopher cause moral progress/change?" (not the exact question you pose, but close) is an instance of the more general question "to what extent have individuals influenced history?" (e.g. Luther, Napoleon, Stalin). It could be useful to look at what people have written on that more general issue, both to generate priors, and to gain insights about various methodological and conceptual issues (which I suspect can be pretty tricky).

Comment by stefan_schubert on EA Hotel Fundraiser 5: Out of runway! · 2019-10-25T15:21:42.367Z · score: 22 (12 votes) · EA · GW

Can you say what those sticking points are? I guess that could be relevant to know for other potential donors.

Comment by stefan_schubert on Helen Toner: Building Organizations · 2019-10-24T13:39:26.342Z · score: 3 (2 votes) · EA · GW

It seems to me that some of those issues have been discussed quite a bit; e.g. how to recruit, how to communicate, and how to give feedback.

Comment by stefan_schubert on Stefan_Schubert's Shortform · 2019-10-24T13:10:34.066Z · score: 14 (6 votes) · EA · GW
Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity
Can you write a philosophical argument that effectively convinces research participants to donate money to charity?
Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)
Background
Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.
However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.

Link