Wei_Dai's Shortform

post by Wei_Dai · 2019-12-11T21:07:57.056Z · score: 9 (3 votes) · EA · GW · 5 comments

comment by Wei_Dai · 2020-03-14T19:13:51.855Z · score: 20 (7 votes) · EA(p) · GW(p)

Missed opportunity for EA: I posted my coronavirus trade [LW(p) · GW(p)] in part to build credibility/reputation, but someone should have done it on a larger scale, for example by taking out a full-page ad in the NY Times in the very early stages of the outbreak to warn the public about it. Then the next time EAs need to raise the alarm about something even bigger, they might be taken a lot more seriously. It's too late now for this outbreak, but keep this in mind for the future?

comment by Milan_Griffes · 2020-03-14T19:37:39.863Z · score: 2 (1 votes) · EA(p) · GW(p)

+1

Such a good point. "Courage of our convictions" and all that...

comment by Wei_Dai · 2019-12-11T21:07:57.211Z · score: 15 (4 votes) · EA(p) · GW(p)

A post that I wrote on LW that is also relevant to EA: What determines the balance between intelligence signaling and virtue signaling? [LW · GW]

comment by Wei_Dai · 2020-02-26T05:51:59.845Z · score: 4 (4 votes) · EA(p) · GW(p)

Someone who is vNM-rational with a partly-altruistic/partly-selfish utility function wouldn't give a fixed percentage of their income to charity (or commit to a lower bound on giving, like 10%). Such a person would dynamically adjust their relative spending on selfish interests and altruistic causes depending on empirical contingencies: spending more on altruistic causes when new evidence shows them to be more cost-effective than previously expected, and conversely spending less on them when they turn out to be less cost-effective than expected. (See Is the potential astronomical waste in our universe too small to care about? for a related idea.)

I think this means we have to find other ways [LW(p) · GW(p)] of explaining/modeling charity giving, including the kind encouraged in the EA community.
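The dynamic-adjustment point above can be illustrated with a toy model. Assuming (purely hypothetically, this specific functional form is not from the original comment) log utility over personal consumption plus a linear term for altruistic impact, the optimal giving fraction moves with estimated cost-effectiveness rather than staying fixed:

```python
def optimal_giving_fraction(income: float, effectiveness: float) -> float:
    """Toy model: U(g) = log(income - g) + effectiveness * g.

    The first-order condition gives optimal personal consumption
    c = 1 / effectiveness, so the optimal donation is
    g = income - 1/effectiveness, clamped to be non-negative.
    Returns the fraction of income donated.
    """
    g = max(0.0, income - 1.0 / effectiveness)
    return g / income

# As estimated cost-effectiveness rises, the optimal share given rises too,
# contra a fixed-percentage giving rule:
low = optimal_giving_fraction(income=100.0, effectiveness=0.02)   # -> 0.5
high = optimal_giving_fraction(income=100.0, effectiveness=0.05)  # -> 0.8
print(low, high)
```

Under this (assumed) utility function, no single percentage is optimal across evidence states; the giving share is a function of how cost-effective the altruistic options currently look, which is the behavior the comment says fixed-percentage pledges fail to capture.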

comment by MichaelStJules · 2020-02-26T08:52:19.911Z · score: 1 (1 votes) · EA(p) · GW(p)

As a specific case, counterfactual donation matches should cause you to donate more, too.

It could be the case that people's utility functions are pretty sharp near X% of income, so that new information makes little difference. They're probably directly valuing giving X% of income, perhaps as a personal goal. Some might think that they are spending as much as they want on themselves, and the rest should go to charity.

https://slate.com/human-interest/2011/01/go-ahead-give-all-your-money-to-charity.html


Or maybe their utility functions just change with new information?