Comment by calebwithers on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-07T12:33:14.911Z · score: 27 (20 votes) · EA · GW

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have "value drifted" have merely drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it previously was; as The Elephant in the Brain argues, these non-altruistic motives are more important than most people think. In the vein of the Replacing Guilt series, I don't think attempting to override these other values is generally sustainable for long-term motivation.

This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity both to meet our social needs and to cultivate our impact.

Comment by calebwithers on How to get a new cause into EA · 2018-01-14T00:35:07.000Z · score: 0 (0 votes) · EA · GW

In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example:

Comment by CalebWithers on [deleted post] 2017-12-14T02:17:53.513Z

I'm weakly confident that EA thought leaders who would seriously consider the implications of ideas like quantum immortality generally take a less mystical, more reductionist view of quantum mechanics, consciousness, and personal identity, along the lines of the following:

Comment by calebwithers on EA Survey 2017 Series: Cause Area Preferences · 2017-09-04T02:31:12.635Z · score: 3 (3 votes) · EA · GW

It seems that the numbers in the top priority paragraph don't match up with the chart.

Comment by calebwithers on Reading recommendations for the problem of consequentialist scope? · 2017-08-03T01:32:45.259Z · score: 3 (3 votes) · EA · GW

I'll throw in Bostrom's 'Crucial Considerations and Wise Philanthropy', on "considerations that radically change the expected value of pursuing some high-level subgoal".

Comment by calebwithers on EA Funds Beta Launch · 2017-03-17T00:23:12.420Z · score: 5 (5 votes) · EA · GW

A thought: EA Funds could be well-suited for inclusion in wills, given that they're somewhat robust to changes in the charity effectiveness landscape.

Comment by calebwithers on Strategic considerations about different speeds of AI takeoff · 2017-02-12T03:43:35.029Z · score: 1 (1 votes) · EA · GW

"Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect."

Does this assumption depend on how pessimistic or optimistic one is about our chances of achieving alignment in different takeoff scenarios, i.e. where on a curve something like this we expect to be for a given takeoff scenario?

Comment by calebwithers on Donor lotteries: demonstration and FAQ · 2017-01-06T07:51:14.672Z · score: 0 (0 votes) · EA · GW

Thanks Paul and Carl for getting this off the ground!

I unfortunately haven't been able to arrange to contribute tax-deductibly in time (I am outside the US), but for anyone considering running future lotteries:

I think this is a great idea, and intend to contribute my annual donations - currently in the high four figures - through donation lotteries such as this if they are available in the future.

Comment by calebwithers on Thoughts on the "Meta Trap" · 2016-12-20T23:13:31.084Z · score: 0 (0 votes) · EA · GW

Relevant to #1b. Overestimating impact:

Comment by calebwithers on The Best of EA in 2016: Nomination Thread · 2016-12-18T07:52:06.884Z · score: 0 (0 votes) · EA · GW

This series of talks on the Effective Altruism movement at EA Global 2016:

The Effective Altruism Ecosystem

Embracing the Intellectual Challenge of Effective Altruism

Improving the Effective Altruism Network

Comment by calebwithers on Can we set up a system for international donation trading? · 2016-12-15T08:33:35.272Z · score: 1 (1 votes) · EA · GW

Does anyone else think that a column structure along the lines of:

Name | Contact | Your Country | Charities that are tax-deductible in your country | Charities you want to donate to | Countries where these charities are tax-deductible

would be more comprehensible?

I had to do more than a quick glance to understand the current structure, which worries me a little bit, but it might just be me.

Comment by calebwithers on CEA Staff Donation Decisions 2016 · 2016-12-08T08:13:12.947Z · score: 4 (4 votes) · EA · GW

Michelle Hutchinson mentioned that Nick Beckstead plans to email her donation advice. Is it possible for others to receive this advice?

Comment by calebwithers on The Best of EA in 2016: Nomination Thread · 2016-11-13T08:55:14.169Z · score: 1 (1 votes) · EA · GW

Making sense of long-term indirect effects - Robert Wiblin, EA Global 2016

Comment by calebwithers on What does Trump mean for EA? · 2016-11-11T02:54:34.065Z · score: 3 (3 votes) · EA · GW

I think the message of SlateStarCodex's "Tuesday Shouldn't Change The Narrative" is particularly relevant to EAs - any large updates to one's beliefs about the world should have come before the election.

Comment by calebwithers on What does Trump mean for EA? · 2016-11-11T02:49:35.642Z · score: 2 (2 votes) · EA · GW

Has there been consideration of electoral reform with an eye to proportionality as a worthwhile EA cause?

Comment by calebwithers on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2015-08-27T22:29:29.223Z · score: 6 (6 votes) · EA · GW

This has also been discussed in "Effective Altruism and Cryonics" on LessWrong.

Comment by calebwithers on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-25T05:59:04.902Z · score: 2 (2 votes) · EA · GW

I feel like Joey's comment here is broadly applicable enough to warrant bringing it top level:

"I think part of the reason [meta-charity is] not publicized as much as say donating directly to GW charities is for marketing/PR reasons. e.g. Many people who are new to EA might be confused or turned off by the idea of a 100% overhead charity."

Comment by calebwithers on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-24T01:15:54.477Z · score: 4 (6 votes) · EA · GW

In addition to Charity Science, Giving What We Can also has this meta-charity logic ingrained:

Comment by calebwithers on A way of thinking about saving vs improving lives · 2015-08-09T23:43:16.592Z · score: 0 (0 votes) · EA · GW

I certainly agree with the general point that one must consider the experiential value of the life saved. However, I'm skeptical of presuming a log relationship between consumption and happiness, both for the reason you identified (definitional problems at low incomes) and because of issues with self-reporting as a measure of happiness, the Easterlin paradox, and tentative data suggesting that much of the happiness from consumption may be about feeling richer than other people.
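To make concrete what the log assumption implies (and why it is contestable at low incomes), here is a minimal sketch; the function name and baseline parameter are illustrative, not from the original comment:

```python
import math

def log_happiness(consumption, baseline=1.0):
    """Happiness under an assumed log model of consumption.

    A key property: each *doubling* of consumption adds the same
    fixed increment of happiness, regardless of starting level.
    """
    return math.log(consumption / baseline)

# Under the log model, going from $1,000 to $2,000 of consumption
# buys exactly the same happiness gain as going from $50,000 to
# $100,000 -- the equal-proportional-gains property that the
# Easterlin paradox and positional-consumption data call into question.
gain_poor = log_happiness(2_000) - log_happiness(1_000)
gain_rich = log_happiness(100_000) - log_happiness(50_000)
```

Note also that as consumption approaches zero the model's output diverges to negative infinity, which is one way of seeing the definitional problem at very low incomes mentioned above.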