Will CEA have an open donor lottery by Giving Tuesday, December 3rd, 2019? 2019-10-07T21:38:47.073Z · score: 17 (6 votes)
#GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving 2017-11-25T20:40:12.834Z · score: 14 (14 votes)
What is the expected effect of poverty alleviation efforts on existential risk? 2015-10-02T20:43:30.808Z · score: 5 (5 votes)
Charity Redirect - A proposal for a new kind of Effective Altruist organization 2015-08-23T15:35:29.717Z · score: 3 (7 votes)


Comment by williamkiely on Are we living at the most influential time in history? · 2019-10-13T23:36:01.380Z · score: 1 (1 votes) · EA · GW

Did you mean to say "assuming a 1% risk of extinction per century for 1000 centuries"?

Yes, thank you for the correction!

Comment by williamkiely on Why did MyGiving need to be replaced? And why is the replacement so bad? · 2019-10-05T22:10:08.151Z · score: 27 (11 votes) · EA · GW

Functionality I would like added to the Pledge Dashboard (note: I didn't use MyGiving):

  • A comment field next to each donation. Currently I use the "Recipient" field to write the organization name plus extra notes I want to record (e.g. whether the donation was counter-factually matched).
  • The ability to see the total amount I've given to each organization.
  • A way to label each donation as being associated with a certain cause area.
  • Bar chart of my donations over time, and chart of my donations per organization, and chart of my donations per cause area (by my labeling).
  • The ability to share one's donation page with others.
Comment by williamkiely on Long-term Donation Bunching? · 2019-10-01T18:55:54.518Z · score: 4 (3 votes) · EA · GW

On the LessWrong mirror of this post, Jeff_Kaufman replied to my above comment:

I'm pretty pessimistic about GivingTuesday persisting as a way for EAs to have a large counterfactually valid impact. "Free money for sufficiently quick and organized folks" won't last.

I replied:

I agree, which is why the large benefit of getting one's donations matched (compared to the tax benefits of bunching) provides another, stronger reason, in addition to the value-drift reason, for people like the GWWC donor in your original post to donate this year (on Giving Tuesday) rather than bunch by taking the standard deduction this year and giving in 2020 or later. (This is the implication I had in mind when I wrote my first comment; sorry for not spelling it out then.)
I myself am in this situation. As such:
- If it turns out that Facebook doesn't offer an exploitable donation match this year, then I plan to not donate and take the standard deduction instead.
- In the hypothetical world where free matching money was guaranteed to always be available every year, I would also plan to not donate this year and would take the standard deduction instead.
- However, if Facebook does offer an exploitable match this Giving Tuesday (as seems most likely) and it seems significantly less likely that I could get matched again in 2020 (as we both agree seems to be the case), then I will donate this Giving Tuesday to take advantage of the free money while it lasts.
Comment by williamkiely on Long-term Donation Bunching? · 2019-09-28T22:06:32.237Z · score: 9 (3 votes) · EA · GW

It's worth noting that the possible tax benefits are small compared to the benefit of getting one's donations matched:

Comment by williamkiely on The Long-Term Future: An Attitude Survey · 2019-09-19T00:15:02.113Z · score: 5 (3 votes) · EA · GW
However, we also collected qualitative responses to this question, and found that many of the people who preferred the smaller civilization over the bigger were unwilling to accept the stipulations.

Perhaps a way to avoid this problem would be to use numbers that are both significantly less than the current population, such as 2B vs 3B rather than 1B vs 10B.

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-13T06:43:56.534Z · score: 4 (4 votes) · EA · GW

Claim: The most influential time in the future must occur before civilization goes extinct.

Thoughts on whether this is true or not?

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-13T06:03:57.416Z · score: 1 (1 votes) · EA · GW

Thanks for the reply, Will. I go by Will too by the way.

for simplicity, just assume a high-population future, which are the action-relevant futures if you're a longtermist

This assumption seems dubious to me because it seems to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct-work to overcome (or could benefit from direct-work).

That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to a benevolent AGI), then it will be more or less inevitable that life will flourish for billions of years; and if we fail to overcome that challenge, then we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and being tractable, then even longtermist altruists in small-population worlds (possibly ours) who punt to the future / pass the buck instead of doing direct work, and thus fail to make it past the Great-Filter-like challenge, can (I claim, contrary to your view as I understand it) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power, even though they didn't exercise it, to make the future a big-population universe.

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-07T21:56:27.096Z · score: 3 (3 votes) · EA · GW

Under Toby's prior, what is the prior probability that the most influential century ever is in the past?

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T18:07:48.279Z · score: 14 (5 votes) · EA · GW

Another reason to think that MacAskill's method of determining the prior is flawed that I forgot to write down:

If one uses the same approach to come up with a prior that the second, third, fourth, ..., Xth century is the hingiest century of the future, and then adds these priors together, one ought to get 100%. This is true because exactly one of the set of all future centuries must be the hingiest century of the future. Yet with MacAskill's method of determining the priors, summing the individual priors that the hingiest century is century X gives a number far greater than 100%. That is, MacAskill estimates that there are 1 million expected centuries ahead, so he uses a prior of 1 in 1 million that the first century is the hingiest (before the arbitrary 10x adjustment). However, his model allows that civilization could last as long as 10 billion centuries (1 trillion years). So what is his prior that, e.g., the 2 billionth century is the hingiest? Also 1 in 1 million? Surely that isn't reasonable, for if one uses a prior of 1 in 1 million for each of the 10 billion possible centuries, then one's prior probability that one of those 10 billion centuries is the hingiest sums to 10,000 (i.e. 1,000,000%). By definition, it ought to sum to 1 (100%).

My method of determining the prior doesn't have this problem. On the contrary, as Column J of my linked spreadsheet from the previous comment shows, the prior probability that the Hingiest Century is somewhere in the Century 1-1000 range (which I calculate by summing the individual priors for those thousand centuries) approaches 100% as the probability that civilization goes extinct in those first 1000 centuries approaches 100%.
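To illustrate this normalization property, here is a small sketch. The numbers are assumptions for illustration (a constant 0.1% per-century extinction hazard and a 10,000-century cap), not the values from the linked spreadsheet:

```python
# Proposed prior: P(century c is hingiest) = sum over lifespans n >= c of
# P(civilization goes extinct in century n) * (1/n).
# These priors sum to the total extinction probability (~100% here), whereas
# a flat 1/E[lifespan] prior over every possible century sums to far more.

N = 10_000       # maximum possible lifespan in centuries (assumed)
hazard = 0.001   # assumed constant per-century extinction risk

# P(extinct in century n), n = 1..N (a truncated geometric distribution)
p_ext = [(1 - hazard) ** (n - 1) * hazard for n in range(1, N + 1)]

# A reverse cumulative sum of p_ext[n]/n gives the proposed prior per century
prior = [0.0] * N
acc = 0.0
for n in range(N, 0, -1):
    acc += p_ext[n - 1] / n
    prior[n - 1] = acc

total_proposed = sum(prior)  # equals sum(p_ext): the priors normalize
expected_lifespan = sum(n * p for n, p in enumerate(p_ext, start=1))
total_flat = N * (1 / expected_lifespan)  # far greater than 1 (i.e. >100%)
```

Summing the proposed priors over all centuries gives sum over n of P(extinct in century n) × n × (1/n) = the total extinction probability, which is why the method normalizes correctly while the flat prior does not.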

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T11:56:06.769Z · score: 20 (12 votes) · EA · GW
1. It’s a priori extremely unlikely that we’re at the hinge of history
Claim 1

I want to push back on the idea of setting the "ur-prior" at 1 in 100,000, which seems far too low to me. I will also critique the method that arrived at that number and propose a method of determining the prior that seems superior to me.

(One note before that: I'm going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)

First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.

I'll come back to this idea when I propose my method of determining the prior, but first to critique yours:

The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future century was to estimate the expected number of centuries that civilization will survive (1,000,000) and then to try to "[restrict] ourselves to a uniform prior over the first 10%" of that expected number of centuries because "the number of future people is decreasing every century."

(Note that while I think the adjustment from 10^-6 to 10^-5 is an adjustment for a good reason in the right direction, I think it can be left out of the prior: You can update on the fact that "the number of future people is decreasing every century" (and other things) later after determining the prior.)

Now to critique the method Will used to arrive at the 1 in 1,000,000 prior. It basically starts with an implicit probability distribution for when civilization will go extinct (good), but then compresses that into an average expected number of centuries that civilization will survive and (mistakenly) essentially assumes that civilization will last precisely that long. It then computes one over this average to get the base rate that a given century is the hingiest (determining a base rate is good, but this isn't the right way to do it).

I propose a better method: start with the same implicit probability distribution for the lifespan of civilization, but make it explicit, and do the same base-rate calculation for each discrete possible length of civilization (1 century, 2 centuries, etc.) instead of compressing the distribution into a single average expected number of centuries.

That is, I'd argue that one's prior that the current century is the hingiest century of the future should be equal to one's credence that civilization will go extinct in the current century plus 1/2 times one's credence that civilization will go extinct in the second century (since there will then be two possible centuries and we are calculating a base rate), plus 1/3 times one's credence that civilization will go extinct in the third century (this is the third base rate we are summing), etc.

I've modeled an example of this here:

From my "1000 Century Model", assuming a 1% risk of extinction per century for 1000 centuries, the prior that the first century is the hingiest is ~4.65%.

From my "90% Likely to Survive 999 Centuries Model", assuming a 10% chance of extinction in the first century, a 0% chance of extinction every century thereafter until the 1000th century, and a 100% chance of extinction in the 1000th century, my method gives a prior of ~10.09% that the first century is the hingiest. On the other hand, since the expected number of centuries is ~900, MacAskill's method gives an initial prior of ~0.111% and a prior of ~1.111% after "[restricting] ourselves to a uniform prior over the first 10% [of expected centuries]". Both priors calculated using MacAskill's method are below the 10% chance of extinction in the first century, which (I claim again) means they are clearly too low.
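The two models can be reproduced in a few lines. This is a sketch: the century-by-century extinction probabilities below are constructed to match the model descriptions, not taken from the spreadsheet itself:

```python
# Prior that century 1 is the hingiest, per the rule above:
# sum over lifespans n of P(extinct in century n) * (1/n)
def first_century_prior(p_ext):
    """p_ext maps century n (1-indexed) to P(extinction occurs in century n)."""
    return sum(p / n for n, p in p_ext.items())

# "1000 Century Model": 1% risk of extinction per century for 1000 centuries
# (the small leftover survival probability is assigned to century 1000,
# which changes the answer only negligibly)
model_a = {n: 0.99 ** (n - 1) * 0.01 for n in range(1, 1001)}
model_a[1000] += 0.99 ** 1000
print(first_century_prior(model_a))  # ~0.0465, i.e. ~4.65%

# "90% Likely to Survive 999 Centuries Model": 10% extinction in century 1,
# otherwise guaranteed extinction in century 1000
model_b = {1: 0.10, 1000: 0.90}
print(first_century_prior(model_b))  # ~0.1009, i.e. ~10.09%

# MacAskill's method applied to model B: expected lifespan ~900 centuries,
# so a flat prior of ~1/900 (~0.111%), or ~1.111% after the 10x adjustment,
# both well below the 10% chance of extinction in the first century
expected_lifespan = sum(n * p for n, p in model_b.items())  # 900.1
```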

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T04:59:05.144Z · score: 2 (2 votes) · EA · GW

Typo corrections:

Lots of things are a priori extremely [unlikely] yet we should have high credence in them


so I should update towards the cards having [not] been shuffled.


All other things being equal, this gives us reason to give resources to future people than to use rather than to use those resources now.

This doesn't show up on the sidebar Table of Contents:

#3: The simulation update argument against HoH
Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:24:14.653Z · score: 3 (2 votes) · EA · GW

You might find Nate Soares' series on Replacing Guilt helpful:

Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:21:40.273Z · score: 5 (3 votes) · EA · GW

I'll add that when I want to help people effectively I feel like Nate Soares' character Daniel in his post "On Caring" after he has undergone his mental shift:

Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.
Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?
In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.
In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.
Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:18:03.908Z · score: 8 (3 votes) · EA · GW

Wonderful post by Holly, thank you for sharing.

To answer Aaron's OP question, to me it just feels good in the same way that making good decisions in a game or winning a game feels good, except in a deeper more rewarding sense (with games the good feeling can quickly fade when I realize that winning the game has trivial real-world value) because I think that doing EA is essentially the life game that actually matters according to our values. It feels like I'm doing the right thing.

Note that I get my warm fuzzies from striving to do good in an EA sense. To the extent that I realize that an act of helping someone is not optimal for me to do in an EA sense, I feel less good about doing it.

Comment by williamkiely on EA-aligned podcast with Lewis Bollard · 2019-08-21T16:21:01.339Z · score: 6 (5 votes) · EA · GW

Title advice: Instead of "EA-aligned podcast with Lewis Bollard" (since by posting on the EA Forum we already assume it is EA-related) something like the title of the episode would have been better, e.g. "Podcast: Lewis Bollard on Ending Factory Farming" or if you want to distinguish it from the 80,000 Hours podcast episode on the same subject, perhaps add the date "Podcast: Lewis Bollard on Ending Factory Farming (Aug 2019)".

Comment by williamkiely on EA-aligned podcast with Lewis Bollard · 2019-08-21T15:34:30.923Z · score: 5 (4 votes) · EA · GW

I enjoyed the podcast! I think that assuming your audience is familiar with EA like the 80,000 Hours podcast does is a good thing. Two questions I really like were (1) when you asked how Lewis feels when he sees people eating factory-farmed meat all the time and (2) when you asked Lewis to describe some of the horrible conditions that factory farmed animals live in. I really liked the thought experiment Lewis gave involving the neighbor barbecuing a piglet and your related comment about a Parks and Recreation scene.

Comment by williamkiely on Ask Me Anything! · 2019-08-20T02:34:14.787Z · score: 1 (1 votes) · EA · GW
If now isn’t the right time for longtermism (because there isn’t enough to do) and instead it would be better if there were a push around longtermism at some time in the future

Have you thought about whether there's a way you could write your book on longtermism to make it robustly beneficial even if it turns out that it's not yet a good time for a push around longtermism?

Comment by williamkiely on How likely is a nuclear exchange between the US and Russia? · 2019-06-27T03:37:05.502Z · score: 2 (2 votes) · EA · GW

Re footnote 15, did Luisa assume that the two events were independent and that's how she got the 0.02%? (In reality I would think that they are strongly correlated.)

Comment by williamkiely on Reasons to eat meat · 2019-05-12T18:25:21.157Z · score: 1 (1 votes) · EA · GW

I'm very skeptical of negative utilitarianism. There are other ways it makes sense if other non-utilitarian considerations matter, as I was saying above.

To try to point you in the direction I was thinking, I'll quote Michael Huemer below and clarify that I lean toward Huemer's view that the appropriate thing to do is "draw a line somewhere in the middle" rather than take the extreme view of strict consequentialism:

"“How large must the benefits be to justify a rights violation?” (For instance, for what number n is it permissible to kill one innocent person to save n innocent lives?) One extreme answer is “Rights violations are never justified,” but for various reasons, I think this answer [is] indefensible. Another extreme answer is consequentialism, “Rights violations are justified whenever the benefits exceed the harms” – which is really equivalent to saying there are no such things as rights. This is not indefensible, but it is very counter-intuitive. So we’re left with a seemingly arbitrary line somewhere in the middle."

When drawing the line somewhere in the middle, murdering one person to save two may not be permissible (even though under utilitarianism it is), but murdering one to save 1,000 may be, say.

Similarly, under one of these "line somewhere in the middle" views, killing sentient cattle for beef may be permissible if one could be certain that the cattle definitely had net positive lives; however, killing the cattle may be impermissible given a certain amount of doubt (say 10%) about whether the cattle's lives are net positive (even if one still thinks their lives are net positive in expectation).

Comment by williamkiely on Reasons to eat meat · 2019-04-24T04:56:08.021Z · score: 3 (3 votes) · EA · GW

It seems to me that the fact that grass-fed beef cattle might not have net positive lives is a strong argument in favor of not eating grass-fed beef. My values are roughly utilitarian but I have a fair amount of moral uncertainty and it seems to me that avoiding eating meat seems like the cautious thing to do given this uncertainty.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T00:50:08.585Z · score: 4 (3 votes) · EA · GW

Great points. I agree re Double Up Drive that it was worthy of much deeper investigation to figure out its counterfactual nature. Perhaps there was even a way donors could have *made it* more counterfactual by using their donations as an opportunity to signal and influence others' behavior. I briefly considered donating to it just so that I could write a message to the donors offering the match, encouraging them to donate the full amount regardless of whether or not the match was reached, in the hope that my message might be heard.

More generally, one thing I have updated a lot on after this past giving season is that I now believe that for small donors the signaling value of their donations matters a lot more. For example, a GWWC member earning $50k/year and donating $5k/year has a certain amount of credibility that conceivably could be used to help influence much larger donors to donate more and more effectively. How one's donation, and one's communications around it, will be perceived by much larger donors may in fact count for more than the value of the donation itself.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T01:15:43.181Z · score: 2 (2 votes) · EA · GW

I am currently more than 30% confident that I will get at least one donation matched by Facebook on Giving Tuesday 2019. This is not conditioned on Facebook doing another match this year or on anything else.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:57:50.556Z · score: 4 (3 votes) · EA · GW

Minor point: I don't think that regression to the mean is a reason to expect the probability of an EA's donation being matched in 2019 to be less than the probability of it getting matched in 2018.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:55:08.014Z · score: 6 (5 votes) · EA · GW

Thanks for writing this up. This is what I did in 2018 and I have no regrets. It clearly seemed to me to be the right thing to do after I saw my Giving Tuesday 2017 donations get matched. I considered writing a similar post to this one after Giving Tuesday 2017 to encourage others to hold off on donating until Giving Tuesday 2018, but I don't think I ever did due to (if I recall correctly):

(1) a worry that I would dissuade people from donating and that some of those people would suffer from value drift and no longer want to donate at all come Giving Tuesday, and

(2) I wasn't sure if I had sufficient evidence to give others confidence that come Giving Tuesday 2018 they'd be able to get their donations matched.

This year, for a US donor who isn't concerned that value drift is a significant risk, I agree with Cullen that this is probably a great bet again.

Comment by williamkiely on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-06T23:55:04.502Z · score: 9 (5 votes) · EA · GW

Thanks for mentioning that the EA Meta Fund might be interested in funding things like this ("A project experimenting with novel fundraising strategies or target groups").

The question of whether having external funding would have helped seems complicated.

I think that there was a lot more valuable work that we could have done to make this initiative an even greater success, but I don't think that having external funding would have caused me or Avi to do more of this work. Firstly because we were capable of funding ourselves, but primarily (at least for me personally) because the challenge I was facing was how I could find the time/energy to do extra work on this project while maintaining my standing at my day job, which seemed important so I could continue working there in the future.

Due to a lack of foresight I had taken a 19-day vacation (Oct 19 - Nov 6) to Europe to attend EA Global London and then do some more traveling, which meant that I couldn't take any more time off from my job later in November upon my return without potentially significantly affecting my standing at my company. This meant that I just had nights and weekends to do work for Giving Tuesday.

I considered that I ought to have taken the extra time off from my job anyway despite this cost, but at the time this seemed too extreme a measure given that I didn't have any plans for what I would do come December/January had things taken a turn for the worse at my job due to my absence.

As I'm writing this now, it actually seems like I definitely should have taken more time off so I could do more work for Giving Tuesday despite the consequences for my day job. But my bias is to be risk-averse and not do things that seem too crazy or extreme, so it's not surprising to me that I didn't do this.

Perhaps external funding would have been a benefit in that it would have provided social approval for me taking more time off from my day job to work on the project. My employer probably also would have been more understanding of me doing this if I had external funding from a reputable organization to signal that what I was taking time off to do was valuable.

Actually, now that I think about it, I don't even know for sure how my employer would have reacted to me e.g. taking the entire month of November off since I didn't ask. I think it's likely that I would have been let go, but I could easily be wrong, especially if I took the time to explain why I wanted to take the time off and why I wanted to come back afterward.

Comment by williamkiely on New edition of "Rationality: From AI to Zombies" · 2018-12-17T05:49:52.374Z · score: 3 (2 votes) · EA · GW

I know someone who I think would enjoy HPMOR but refuses to read any book-length text in anything but paper book format. Other than going through the effort of printing out a PDF myself, does anyone know of any way I can get a hard copy? I'd be willing to make a counter-factual donation to MIRI or wherever for the trouble if that would help.

Comment by williamkiely on EA Survey 2018 Series: Donation Data · 2018-12-09T19:44:44.240Z · score: 2 (2 votes) · EA · GW

The two tables showing percentiles are missing column labels:

Comment by williamkiely on Doing vs Talking at EA Events · 2018-09-01T03:37:03.252Z · score: 9 (11 votes) · EA · GW

My main question for you: How would you discuss the point from a friend new to EA: “Nothing was practically accomplished at the meeting. Ideas were discussed.”

I would encourage the friend to take a longer-term view to see the value in discussing ideas.

I would also encourage the friend to try out doing the direct-work meetup concept. I suspect there are good reasons why that style meetup is not popular, but would be delighted to see them pioneer it successfully.

The following EA Forum post may be relevant to you. I'm not sure how Madison vs Harvard differ, though I suspect they are both more similar to each other than they are to the contract-work-and-donate model of an EA meetup you are conceptualizing:

Some value I see in the social discussion style EA meetups I am familiar with (I co-organize EA Austin and have been to Chicago a couple times):

  • Prevent value drift through facilitating the development of friendships and closer ties between community members

  • Inform EA community members about new ideas which they can then go home and read further about online (high fidelity learning)

  • Have more personalized discussions about career planning than is easy/typical in online EA discussions

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:59:19.828Z · score: 1 (1 votes) · EA · GW

Good point. Do you think EAs with more money ought to consider living in group houses for the sake of reducing the cost of living to enable them to donate more?

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:35:32.559Z · score: 1 (1 votes) · EA · GW

Alex K Chen says: "You should like talk to people who do summer camp housing too, like SPARC"

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:16:01.394Z · score: 2 (2 votes) · EA · GW

Is the main value of this coordination to cause EAs to live together in a group? Or is it causing poor EAs to be able to do direct work without having to build up savings first?

If the former, it's unclear to me why there would only be value in grouping together EAs who don't have much money/income (would getting other EAs with money to live together not be equally as valuable?).

And if it's the latter, it's unclear to me why this idea would be better than just funding poor EAs directly and letting them decide where to live -- e.g. Alex K. Chen has proposed that paying for talented young people with high potential to live in the Harvard/MIT area so they could unschool themselves there is potentially very high value.

Comment by williamkiely on EA #GivingTuesday Fundraiser Matching Retrospective · 2018-03-04T16:18:40.988Z · score: 0 (0 votes) · EA · GW

Thanks Avi.

Comment by williamkiely on #GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving · 2017-11-27T00:37:31.303Z · score: 4 (4 votes) · EA · GW

Facebook saw over 100,000 people donate to thousands of fundraisers that raised $6.79 million on Giving Tuesday across the United States. (Source)

This year I expect it to be more, though I'm not well-informed on how much more. Perhaps $10-$20MM is a reasonable expectation.

Also, the match last year was for $500K instead of $2MM. From the same source:

After the initial match of $500,000 was reached within hours, The Bill & Melinda Gates Foundation increased their pledge to $900,000 total to match more Giving Tuesday Fundraisers on Facebook.

Note that last year's matching campaign was also announced in advance.

So I think 3 minutes is overkill. While a priori I would expect people to take advantage of this such that $2MM in donations are made in the first ~3 minutes, I think last year shows that this is unlikely to happen. I would be surprised if the $2MM match is reached in less than 30 minutes. I'll assign a 20% probability to that happening, somewhat arbitrarily, and maybe a 5% chance to it being reached in less than 10 minutes. My median estimate would be around 9:30 EST (1.5 hours in). And maybe a 20% chance that it takes more than 3 hours. Although I don't really know, so my suggestion is to donate ASAP. If you're donating more than just a small amount it's worth it even if it's inconvenient.

I intend to make all of my donations ASAP after 8:00 AM EST. (I am going to try to make 10 separate $1,000 donations before 8:10 AM EST).

Comment by williamkiely on An Argument for Why the Future May Be Good · 2017-07-22T19:57:08.037Z · score: 2 (2 votes) · EA · GW

I wasn't proposing that (I in fact think the present is already good), but rather was just trying to better understand what you meant.

Your comment clarified my understanding.

Comment by williamkiely on An Argument for Why the Future May Be Good · 2017-07-21T00:58:27.517Z · score: 2 (2 votes) · EA · GW

7 - Therefore, the future will contain less net suffering

8 - Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (Note: If you're confused about why I'm confused about this, then note that it seems to me that 8 does not follow from 7 for the meaning of "good" I usually hear from EAs (something like "net positive utility").)

Comment by williamkiely on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-08-27T04:35:37.968Z · score: 0 (2 votes) · EA · GW

Hmm. I do believe I discount vertebrates much less than I discount insects; however, I also think there's a huge difference between, say, chickens and chimpanzees, or chimpanzees and humans. Even among humans (whose brains are quite similar to one another compared to cross-species comparisons), I think that the top 10% of Americans probably live lives that I value inherently (by which I mean ignoring the effects they have on other things and only counting the quality of their conscious life experience) at least one order of magnitude (if not several) more than the bottom 10% of Americans. I believe this is an unpopular view also, but one consideration I can offer in support of it: if you reflect on how much you value your own conscious experience during some parts of your life compared to others, you may find, as I do, that some moments or short periods seem to be of much greater value than others of equal duration.

An exercise I tried recently was plotting "value realized / time" vs "time" for my own conscious life experience (so again: not including the effects of my actions, which are the vast majority of what I value). I found that there were some years I valued multiple times more than other years, and some moments I valued many times more than whole years on net. The graph was also all positive value and trending upwards, with sleeping hours valued much less than waking ones. (I don't think I have very vivid dreams relative to others, but even if I did, I would probably still tend to value waking moments much more than sleeping ones.) Also, remembering or reflecting on great moments fondly can be of high value too in my evaluation. There's also the problem of not knowing now what certain past experiences were actually like to live through, since I'm relying on my memory of them, which for all I know could be faulty. In general, if there is a discrepancy between the two, I choose to value experiences based on how I remember them rather than how I think they were when I lived them.

Also note that I'm a moral anti-realist and so I don't think there are correct answers, so to a certain extent how much I value some periods of my conscious life experience relative to others is a choice, since I don't believe that there are completely defined definite values that are mine that I can discover either.

A general thing I'd be really interested in seeing is people's estimates of how much they value (whether positively or negatively) the total life experiences of, say, mosquitoes, X, Y, Z, chickens, cows, humans (and what that distribution looks like), oneself over time, a typical human over time, etc. I'd also be really interested in seeing answers to "What would a graph of (value realized per unit time) vs (time) look like for Earth's history?", which would answer the question "How much value has been realized since life began on Earth?" (note: I'd ignore estimates of value realized elsewhere in the universe, which may actually be quite significant, for the sake of the question). If you'd like to indulge me with your own views on any of this I would be very interested, but of course no need if you don't want to. I'll estimate and write up my own answers sometime.

Comment by williamkiely on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-08-27T03:35:48.432Z · score: 0 (4 votes) · EA · GW

How many painful mosquito deaths would you have to be offered to prevent to choose that over causing one new human life (of quality equal to that of a typical person today) to be lived (all instrumental effects / consequences aside)?[1][2][3] (For my answer see [2].)

What would the distribution of EAs' answers look like? College graduates' answers? Everyone's answers?

What range of answers does the OP assume?

Or more broadly, for what range of moral theories can a case be made that WAS should be prioritized?

I ask these questions because, while I find the OP argument intriguing, my current values (or my current beliefs about my values, depending on how you want to think about it) are such that preventing mosquito suffering is very insignificant relative to many other things (e.g. there being more humans that live good lives, or humans living better lives) and is therefore far from being a high priority for me.

While I haven't dived deeply into arguments for negative utilitarianism or other arguments that could conceivably change my view significantly, I think it's unlikely (~10%, reported in [2]) that doing so would lead me to change my view significantly.[4]

It seems to me that the most probable way that my view could be changed to believe that (e.g.) OPP ought to prioritize WAS would be to persuade me that I should adopt a certain view on how to deal with moral uncertainty that would, if adopted, imply that OPP ought to prioritize WAS even given my current beliefs about how much I value the suffering of mosquitoes relative to other things (e.g. the lives of humans).

Is there a case to be made for prioritizing WAS if one assigns even a small probability (e.g. 1%) to a negative utilitarian-like view being correct given that they also subscribe to certain plausible views on moral uncertainty?

My views on how to deal with moral uncertainty are very underdeveloped. I think I currently have a tendency to evaluate situations or decide on actions on the basis of the moral view I deem most probable; however, as the linked LessWrong wiki article points out, this has potential problems. (I'm also not aware of a less problematic view, so I will probably continue to do this until I encounter something else that appeals to me more. Bostrom's parliamentary model seems like a reasonable candidate, although I'm unsure how this negotiation process works exactly or would play out. I would have to think about it more.)

Lastly, let me just note that I don't challenge the non-normative factual claims of the OP. Rather, I'm simply stating that my hesitation to take the view that OPP should prioritize WAS comes from my belief that I value things significantly differently than I would have to in order for WAS to be something that OPP should prioritize.

[1] A similar question was asked in the Effective Altruism Facebook group. My version gets at how much one values the life of a typical person today relative to the life of a typical mosquito rather than how much one values extreme pleasure relative to extreme suffering.

[2] Since I'm asking for others' answers, I should estimate my own answer. Hmm. If I had to make the decision right now I would choose to create the new human life, even if the number of painful mosquito deaths I was offered to prevent was infinite. Although note that I am not completely confident in this view, perhaps only ~60%. Then maybe ~30% to 10^10-infinity and ~10% to <10^10 mosquitoes, where practically all of that 10% uncertainty comes from the possibility that a more enlightened version of myself would undergo a paradigm shift or significant change in my fundamental values / moral views. In other words, I'm pretty uncertain (~40/60) about whether mosquitoes are net negative or not, but I'm pretty certain (~75%=30%/40%) that if I do value them negatively that the magnitude of their negative value is quite small (e.g. relative to the positive value I place on (the conscious experience of) human life).

[3] Knowing that my view is controversial among EAs (see the link at [1]), perhaps I should meta-update significantly towards the consensus view that not only is the existence of suffering inherently bad, but it's also bad to a much greater magnitude than I think it is in the ~30% scenario. I'll refrain from doing this for now, and from figuring out how much I should update if I only think there's an X% chance that it's proper to update. (I'm also not sure how much my intuitions / current reported estimates already take others' estimates into account.)

[4] The basis of my view that the goodness of a human life is much greater than the possible (~40% in my view) badness of a mosquito's suffering or painful death (and the basis of more general versions of this view) is my intuition. Thinking about the question from different angles I have been unable to shift my view significantly towards placing substantially more value on mosquitoes' significance or preventing mosquito suffering.

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T23:23:11.872Z · score: -1 (1 votes) · EA · GW

Noting that I didn't find this essay useful (although I'm not giving it a thumbs-down vote).

In fact I found it counter-productive because it led me to spend (waste, IMO) more time thinking about this topic. Of course that's not your fault, but I just wanted to mention this. (I have an imperfect brain. Even though my better judgment says that I shouldn't spend more time thinking about this topic, I am often lured in by it when I come across it and am unable to resist thinking about it.) Moral realism has always seemed obviously wrong to me, and my lack of success historically at understanding why many smart people apparently do think there is a One True Moral Standard that magically compels people to do things independent of their desires/values/preferences has caused me much frustration. I find it frustrating not only because I have been unable to make progress understanding why moral realists believe moral realism is true, but also because if I am right that moral realism is simply wrong, then time spent thinking and writing about the topic (including the time you spent writing the OP essay) is largely wasted, AND if moral realism is true, then it still seems to me that the time spent discussing the topic is largely wasted, since I'm still going to go on caring about what I care about and acting to effectively achieve what I value rather than adjust my actions to adhere to the particular Moral Standard that is apparently somehow "correct."

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T22:04:17.608Z · score: 0 (0 votes) · EA · GW

Comment 2:

My one criticism to offer after reading this is in regard to the way you choose to answer "Yes" to the question of whether people "have obligations" (which I put in quotes to communicate the fact that the phrase could be interpreted in different ways such that the correct answer to the question could be either yes or no depending on the interpretation):

"So am I obligated to do anything?

"Yes. You have legal obligations to follow the laws, have epistemic obligations to believe the truth, have deontological obligations not to lie under any circumstance, have utilitarian obligations to donate as much of your income as you can manage, etc… You’re under millions of potential obligations – one for each possible standard that can evaluate actions. Some of these may be nonsensical, like an anti-utilitarian obligation to maximize suffering or an obligation to cook spaghetti for each meal. But all of these obligations are there, even if they’re contradictory. Chances are you just don’t care about most of them."

While I can see how this way of defining what it means to have an obligation can definitely be useful when discussing moral philosophy and bring clarity to said discussions, I think it's worth pointing out how it could be quite confusing when talking with people who aren't familiar with your specific definition / the specific meaning you use.

For example, if you ask most people, "Am I obligated to not commit murder?" they would say, "Yes, of course." And if you ask them, "Am I obligated to commit murder?" they would say, "No, of course not."

You would answer yes to both, saying that you are obligated to not commit murder by (or according to) some moral standards/theories and are obligated to commit murder by some others.

To most people (who are not familiar with how you are using the language), this would appear contradictory (again: to say that you are obligated both to do and not to do X).

And the second note is that when laypeople say, "No, I am not obligated to commit murder," you wouldn't be inclined to say that they are wrong (because you don't interpret what they are trying to say so uncharitably), but rather would see that they clearly meant something other than the meaning you explained in the article above that you would assign to these words.

My interpretation of their statement that they are not obligated to commit murder would be (said in one way) that they do not care about any of the moral standards that obligate them to commit murder. Said differently, they are saying that in order to fulfill or achieve their values, people shouldn't murder others (at least in general), because murdering people would actually be a counter-productive way to cause what they desire to happen to happen.

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T22:03:49.791Z · score: 0 (0 votes) · EA · GW

I hold the same view as yours described here (assuming of course that I understand you correctly, which I believe I do).

FWIW I would label this view "moral anti-realist" rather than "moral realist," although of course whether it actually qualifies as "anti-realism" or "realism" depends on what one means by those phrases, as you pointed out.

Here are two revealing statements of yours that would have led me to strongly update my view towards you being a moral anti-realist without having to read your whole article (emphasis added):

(1) "that firm conviction is the “expressive assertivism” we talked about earlier, not a magic force of morality."

(2) "I disagree that there is One True Moral Standard." "I disagree that these obligations have some sort of compelling force independent of desire."

Comment by williamkiely on Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) · 2016-06-10T23:27:17.818Z · score: 0 (0 votes) · EA · GW

Related: "The Important/Neglected/Tractable framework needs to be applied with care"

Comment by WilliamKiely on [deleted post] 2016-05-31T00:39:48.539Z

Thank you, I think you're right. Thanks for the feedback.

Comment by williamkiely on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2016-05-19T23:02:30.050Z · score: 4 (4 votes) · EA · GW

But more importantly, donors should be aware of how questions of population ethics affect the expected value of different interventions.

Thank you for emphasizing this--I think it's very important.

I've realized lately that my views on questions of population ethics are very underdeveloped, which is problematic because it leaves me very uncertain about the relative importance of different causes and the expected value of different interventions. This leads me to postpone donating more until I have better information (and possibly also to not engage in direct-impact work that I should be engaging in, because I don't know what that work is).

Note that because questions of population ethics can change the expected value of possible interventions from positive to negative (or vice versa) and by orders of magnitude rather than just a few percentage points, my lack of confident answers to questions of population ethics seems to be a good reason to postpone making any further donations until I have better information on my views on those questions.

If donors understood these assumptions, I expect that many of them would prioritize their donations differently.

I wonder: to what extent is it true that donor ignorance about their own views on questions of population ethics (and related questions about their values) leads donors to confidently choose one charity or intervention over another, when in fact, if they understood their views on population ethics correctly, they would have chosen the other charity or intervention?

I used to think that I knew what I valued well enough to choose where to donate, but now I realize that I have to think more on certain questions of population ethics to at least figure out what approximate probability I would assign to each possible way of valuing things before I can know which cause and intervention I believe has the highest expected value and is worth donating to.

Comment by williamkiely on Lesswrong Diaspora survey · 2016-04-03T23:25:20.829Z · score: 0 (2 votes) · EA · GW

What is the purpose of the survey? It doesn't seem to be a very worthwhile EA activity to take it. I stopped at question 40.

Comment by williamkiely on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2016-03-26T02:07:46.584Z · score: 0 (0 votes) · EA · GW

(Note: I found this old thread after Eliezer recently shared this Wait But Why post on his Facebook: Why Cryonics Makes Sense)

"I'm so afraid of dying and believe in cyronics so much that signing up for cryonics would end many of my worries and let me be far more productive"

I don't find this argument humorous, but I do see it as perhaps the most plausible argument defending cryonics from an EA perspective.

That said, I don't think the argument succeeds for myself or (I would presume) a large majority of other people.

(It seems to me that the exceptions would tend to be people who are very high producers (such that even a very small percentage increase in their production of good would outweigh the cost of signing up for cryonics), rather than people who are so exceptionally afraid of death, and so love the idea of possibly waking up in the distant future and living longer, that not signing up for cryonics would be debilitating to them and a sufficiently large hindrance on their productivity (e.g. due to feeling depressed and being unable to concentrate on EA work, knowing that this cryonics option exists that would give them hope) to outweigh the cost of signing up.)

So I don't see cryonics as being very defensible from an EA perspective.

Comment by williamkiely on Should you start your own project now rather than later? · 2016-02-26T05:08:31.529Z · score: 0 (0 votes) · EA · GW

Would asking people on the street if they'd be willing to donate money to effective charities such as AMF (or similar marketing efforts to try to raise money quickly rather than focus on high quality movement building) have this negative counterfactual impact?

What is a good way to evaluate this risk of a negative counterfactual impact for candidate projects one is considering launching in general?

Comment by williamkiely on Opportunity to increase your giving impact through AMF · 2016-02-24T22:06:45.727Z · score: 1 (1 votes) · EA · GW

A full analysis of the effectiveness of this event will be undertaken, with an analysis to be made publicly available.

Will you ask donors what they would have done otherwise had they not heard of this event and decided to participate (e.g. donate to AMF anyways, or to another charity, or not donate at all)?

Comment by williamkiely on Accomplishments Open Thread - February 2016 · 2016-02-07T04:17:14.949Z · score: 5 (7 votes) · EA · GW

In January the only EA activities I engaged in were Skyping with Gleb about possible volunteer work I could do for InIn and offering him some feedback on an article or two.

I also wrote down two EA-related ideas that have been on my mind in two blog posts, although it's not clear to me that doing this actually did any good.

Tonight I completed about half of my Centre for Effective Altruism Pareto Fellowship application.

Comment by williamkiely on Celebrating All Who Are in Effective Altruism · 2016-01-20T03:26:15.136Z · score: 2 (2 votes) · EA · GW

I would be interested in hearing from someone who disagrees with this about why they disagree. Just curious.

Comment by williamkiely on Accomplishments Open Thread · 2016-01-10T03:34:21.140Z · score: 2 (2 votes) · EA · GW

Thank you for embracing the awkwardness; this is inspiring!