Posts

EA Giving Tuesday, Dec 3, 2019: Instructions for Donors 2019-11-24T04:02:34.896Z · score: 46 (16 votes)
Will CEA have an open donor lottery by Giving Tuesday, December 3rd, 2019? 2019-10-07T21:38:47.073Z · score: 17 (6 votes)
#GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving 2017-11-25T20:40:12.834Z · score: 14 (14 votes)
What is the expected effect of poverty alleviation efforts on existential risk? 2015-10-02T20:43:30.808Z · score: 6 (6 votes)
Charity Redirect - A proposal for a new kind of Effective Altruist organization 2015-08-23T15:35:29.717Z · score: 3 (7 votes)

Comments

Comment by williamkiely on How we can make it easier to change your mind about cause areas · 2020-10-10T21:22:06.705Z · score: 6 (3 votes) · EA · GW

I've found the advice of this post useful.

In particular, the suggestion to "1) Give a small donation ($20) to a charity in each major cause area, especially the ones you've never donated to before."

I just acted on this advice by giving $20 to a group of Democratic Senate campaigns which David Shor considers "the races where I think the marginal *small dollar donation* will go the furthest." This was my first political donation (besides a $1 donation in February to Yang's campaign that allegedly helped bring him to the debates).

Previously, I followed the advice by donating small dollar amounts to a couple of other organizations working to help animals (the Good Food Institute and the Wild Animal Initiative). While these acts haven't (yet) caused me to change which cause area I make the bulk of my donations to, I've noticed that they seem to have had some effect on me psychologically, making me more open to seriously considering making substantial donations to these organizations/cause areas.

Comment by williamkiely on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-30T03:36:11.325Z · score: 9 (7 votes) · EA · GW

You all are inspiring! Thank you for making the world a better place. Let's reach 10,000 members soon. When is a realistic target?

Comment by williamkiely on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-19T16:43:45.693Z · score: 3 (2 votes) · EA · GW

That said, it does seem to me that once one has decided on an organization to give to, one should separately optimize how to make that donation: considering donation matching opportunities, whether the donation is or can be tax-deductible (which may mean investigating whether a donation swap with another EA makes sense and whether practicing donation bunching makes sense), fees, etc.

Comment by williamkiely on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-19T16:36:22.009Z · score: 4 (3 votes) · EA · GW

A similar point to be made is that "Donating effectively does not necessarily imply making a donation with low or zero fees."

E.g., if you would donate $X to an organization if there were zero fees, the fact that there is actually a credit card fee of a few percent probably should not cause you to donate to a different organization entirely (or, for that matter, cause you to give substantially less).

Comment by williamkiely on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-19T16:13:21.130Z · score: 4 (3 votes) · EA · GW

Plenty of caveats to this of course, like if you have employer matching

Also worth mentioning: Whether you have employer matching or not, all US donors can take advantage of Facebook's Giving Tuesday donation match (see EAs Should Invest All Year, then Give only on Giving Tuesday and the EA Giving Tuesday website).

TL;DR: Even though Facebook has only provided $7 million in matching funds, EAs have been able to get more than half their donations matched in the last two years, and I don't expect the opportunity to become saturated (by EAs or the general public) this year either. That is, I expect EA donors who take an hour or two to prepare (by reading the EA Giving Tuesday team's instructions and making a few small practice donations) will still have a 30-70% probability of getting matched (up to $20,000 per donor) this Giving Tuesday, December 3, 2020. (E.g., I'm currently willing to make an unconditional bet at even odds that I will make a donation that is matched by Facebook for Giving Tuesday this year.)

Comment by williamkiely on My Cause Selection: Michael Dickens · 2020-07-26T23:34:40.631Z · score: 1 (1 votes) · EA · GW

Some other related links I found helpful:

Vipul Naik's "My 2018 donations": https://forum.effectivealtruism.org/posts/dznyZNkAQMNq6HtXf/my-2018-donations

Adam Gleave's "2017 Donor Lottery Report": https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

Brian Tomasik's "My Donation Recommendations": https://reducing-suffering.org/donation-recommendations/

"Where some people donated in 2017": https://forum.effectivealtruism.org/posts/Z6FoocxsPfQdyNX3P/where-some-people-donated-in-2017

Comment by williamkiely on My Cause Selection: Michael Dickens · 2020-07-26T23:34:10.304Z · score: 1 (1 votes) · EA · GW

I read this post today after first reading a significant portion of it on ~December 2nd, 2019. I'm not sure what my main takeaways from reading it are, but I wanted to comment to say that it's the best example I currently know of in which someone explains their cause prioritization reasoning when deciding where to donate. Can anyone point me to more or better examples of people explaining their cause prioritization reasoning?

Comment by williamkiely on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T23:59:25.149Z · score: 15 (7 votes) · EA · GW

I vaguely recall hearing something like 'the skill of developing the right questions to pose in forecasting tournaments is more important than the skill of making accurate forecasts on those questions.' What are your thoughts on this and the value of developing questions to pose to forecasters?

Comment by williamkiely on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T23:55:31.997Z · score: 19 (13 votes) · EA · GW

Is forecasting plausibly a high-value use of one's time if one is a top-5% or top-1% forecaster?

What are the most important/valuable questions or forecasting tournaments for top forecasters to forecast or participate in? Are they likely questions/tournaments that will happen at a later time (e.g. during a future pandemic)? If so, how valuable is it to become a top forecaster and establish a track record of being a top forecaster ahead of time?

Comment by williamkiely on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T23:53:58.282Z · score: 5 (3 votes) · EA · GW

What were your reasons for getting more involved in forecasting?

Comment by williamkiely on Study results: The most convincing argument for effective donations · 2020-06-30T21:25:31.097Z · score: 1 (1 votes) · EA · GW

My entry (475 words):

Morally it is a very good thing to donate to highly-effective charities such as Against Malaria Foundation because the money will go very far to do a lot of good.

To elaborate:

Consider that in relatively rich developed countries, many governments and people are willing to spend large amounts of money, in the range of $1,000,000-$10,000,000, to avert (prevent) a death. For example, the United States Department of Transportation put the value of a life at $9.2 million in 2014.

In comparison, according to estimates of researchers at the nonprofit GiveWell, which is dedicated to finding outstanding giving opportunities and publishing the full details of their analysis to help donors decide where to give, it only costs about $2,300 to save a life if that money is given to Against Malaria Foundation, one of GiveWell's top charities.

Specifically, consider these four cost-effectiveness estimate results:

  • GiveWell's 2019 median staff estimate of the "Cost per under-5 death averted" for Against Malaria Foundation is $3,710.

  • GiveWell's 2019 median staff estimate of the "Cost per age 5+ death averted" for Against Malaria Foundation is $6,269.

  • GiveWell's 2019 median staff estimate of the "Cost per death averted at any age" for Against Malaria Foundation is $2,331.

  • GiveWell's 2019 median staff estimate of the "Cost per outcome as good as: averting the death of an individual under 5" for Against Malaria Foundation is $1,690.

These are bargain prices enabling people like you to make your money go very far to do a lot of good, regardless of how much money you give.

If these sound like unbelievably low prices to you given the hundreds of thousands or millions of dollars that it can cost to save a life in developed countries such as the United States, consider that the reality is that millions of people die of preventable diseases every year in very poor countries in Africa and elsewhere. As such, these very inexpensive ways of saving lives very cost-effectively do in fact exist.

Since money you give to Against Malaria Foundation will go very far to do a lot of good to save lives, you should strongly consider donating to Against Malaria Foundation or another highly-effective charity if given the opportunity. Even a donation of just $10 to Against Malaria Foundation or another highly-effective charity will do a lot of good.

Based on GiveWell's cost-effectiveness estimates above, and assuming that averting a death saves about 30 years of life on average, your decision to donate even just $10 to the Against Malaria Foundation will prevent approximately 47 days of life from being lost in expectation.

In summary, it is a very morally good and morally praiseworthy thing to donate to highly-effective charities such as Against Malaria Foundation because the money will go very far to do a lot of good.
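
Spelled out, the arithmetic behind the entry's ~47 days figure (using GiveWell's $2,331 "cost per death averted at any age" estimate and the entry's stated assumption of ~30 years of life saved per averted death) is:

\[
\frac{\$10}{\$2{,}331\ \text{per death averted}} \times 30\ \text{years} \times 365\ \text{days/year} \approx 47\ \text{days}
\]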

My entry is different from all five of the Top 5 entries in that mine is the only one that does not engage with the objection "but what about the value of $10 for myself?"

Presumably, the primary reason people don't give is that they'd rather have the money for their own uses.

All five of the Top 5 arguments engage with this idea by implying in one way or another that taking the money for your own use would make you a selfish or bad person.

My entry seems mediocre (in part) because it only highlights the benefits of effective giving to others. It does not attempt to make the reader feel guilty about turning down these bargain opportunities and taking the $10 for oneself.

Comment by williamkiely on Study results: The most convincing argument for effective donations · 2020-06-30T21:15:35.635Z · score: 4 (3 votes) · EA · GW

I'm impressed by the Top 5 entries, approximately in the order of the mean donation amount they caused.

I submitted an entry to this contest which I thought was decent when I wrote it, but now seems really mediocre upon re-reading it (see my reply to this comment for my entry).

One thing I noticed about all five of the Top 5 arguments (though not my entry) is that they all can be interpreted as guilting the reader into donating. That is, there is an unstated implication the reader could draw that the reader would be a bad person if they chose not to donate:

  • Argument #9: After reading this winning argument, the reader might think: "Now if I don't donate the $10 I'd be admitting that I don't value the suffering of children in poor countries even one-thousandth as much as my own child (or someone I know's child). What a terrible person I'd be. I don't want to feel like a bad person so I'll donate."

  • Argument #3: Someone might think: "Practically everyone agrees that giving to charity is good, so if I don't donate the $10 that would make me bad. I don't want to feel like a bad person so I'll donate."

  • Argument #5: "If I take the $10 rather than donate it, I'd be putting my own interest in receiving $10 above the interests of four children who don't want malaria, which would make me a bad person. I don't want to feel like a bad person so I'll donate."

  • Argument #12: "I just read that I should feel good about whether I decide to 'take' or 'give' the $10. And also that I should prioritize helping a large number of people over the value of $10 for myself. So now I'm not sure that I could feel good about 'taking' the money for myself. I don't want to feel guilty over $10 so I'll donate."

  • Argument #14: "'Every single day you have the opportunity to spare a small amount of money to provide a fellow human with the same basic access to food or drinking water – how often have you done this?' Clearly I'd be a bad person if I decided to take $10 that is offered to me rather than give the $10 to provide a fellow human with basic access to food or drinking water. I don't want to feel like a bad person, so I'll donate."

Comment by williamkiely on Can the EA community copy Teach for America? (Looking for Task Y) · 2020-06-13T15:53:07.823Z · score: 1 (1 votes) · EA · GW

Helpful post for thinking about how to scale the EA community to make productive use of more people.

Comment by williamkiely on New article from Oren Etzioni · 2020-02-26T05:59:35.372Z · score: 6 (4 votes) · EA · GW

It feels like Etzioni is misunderstanding Bostrom in this article, but I'm not sure. His point about Pascal's Wager confuses me:

Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable

Etzioni seems to be saying that Bostrom argues that we must prepare for short AI timelines even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?

I don't know whether Bostrom thinks this or not, but isn't Bostrom's main point that, even if AI systems sufficiently powerful to cause an existential catastrophe are not coming for at least a few decades (or even a century or longer), we should still think about and see what we can do today to prepare for the eventual development of such AI systems, if we believe there are good reasons to think they may cause an existential catastrophe when they are eventually developed and deployed?

It doesn't seem that Etzioni addresses this, except to imply that he disagrees with the view by saying it's unreasonable to worry about AI risk now and by saying that we'll (definitely?) have time to adequately address any existential risk that future AI systems may pose if we wait to start addressing those risks until after the canaries start collapsing.

Comment by williamkiely on New article from Oren Etzioni · 2020-02-26T05:18:25.481Z · score: 2 (2 votes) · EA · GW

Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:

(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.

(b) Before human-level AI is developed, there will be 'canaries collapsing' warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.

(c) "If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross"

(d) Therefore, AI does not pose a nontrivial existential risk.

It seems to me that if there is a nontrivial probability that he is wrong about 'c' then in fact it is meaningful to say that AI does pose a nontrivial existential risk that we should start preparing for before the canaries he mentions start collapsing.

Comment by williamkiely on New article from Oren Etzioni · 2020-02-26T04:58:09.045Z · score: 2 (2 votes) · EA · GW

Etzioni also appears to agree that once canaries start collapsing it is reasonable to worry about AI threatening the existence of all of humanity.

As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct.

Comment by williamkiely on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-05T22:40:11.743Z · score: 8 (5 votes) · EA · GW

On January 30th, I accepted a bet with a friend on the above terms. Nobody else offered to bet me. Since then, I have updated my view: I now give ~60% probability to there being over 10,000 deaths. https://predictionbook.com/predictions/198256

My update is mostly based on (a) Metaculus's estimate of the median number of deaths updating from ~3.5k to slightly over ~10k now (https://www.metaculus.com/questions/3530/how-many-people-will-die-as-a-result-of-the-2019-novel-coronavirus-2019-ncov-before-2021/) and (b) some naive extrapolation of the possible total number of deaths based on the Feb 4th death data here: https://www.worldometers.info/coronavirus/coronavirus-death-toll/

Comment by williamkiely on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T12:00:57.620Z · score: 19 (10 votes) · EA · GW

I'm willing to bet up to $100 at even odds that by the end of 2020, the confirmed death toll from the Wuhan coronavirus (2019-nCoV) will not be over 10,000. Is anyone willing to take the bet?

Comment by williamkiely on EA Giving Tuesday, Dec 3, 2019: Instructions for Donors · 2019-12-02T23:00:46.856Z · score: 3 (3 votes) · EA · GW

We updated it again with different language, hopefully incorporating the spirit of your feedback. We didn't want mentioning "10-20 minutes" to discourage people who were willing to put in more time (say, an hour or more) from doing so. Many donors would benefit from spending much more time preparing and practicing.

Comment by williamkiely on EA Giving Tuesday, Dec 3, 2019: Instructions for Donors · 2019-12-02T22:56:07.725Z · score: 9 (4 votes) · EA · GW

Thanks, done. Updated it to:

  • Speed. Per our instructions, fill out your donation early so you can calmly finalize your donation with one click by clicking the green "Donate" button within the first second of the start of the match.
    • In 2017, the match lasted 86 seconds; in 2018, it lasted 15 seconds; this year we expect it to run out much faster, plausibly in one second (personal median estimate: ~4 seconds).

Comment by williamkiely on EA Giving Tuesday, Dec 3, 2019: Instructions for Donors · 2019-11-30T19:36:32.095Z · score: 3 (3 votes) · EA · GW

Thanks Brian, I updated the 'US, $500 or more' instructions page with a note that "Someone who follows these instructions, which should take only 10-20 minutes of pre-work, should be able to donate within 1-3 seconds."

Comment by williamkiely on EA Giving Tuesday, Dec 3, 2019: Instructions for Donors · 2019-11-30T19:12:31.194Z · score: 2 (2 votes) · EA · GW

Thanks for the helpful feedback, Mike! I just updated the website to improve the language based on your recommendation. Here's what I put:

EDIT/UPDATE 12/2/2019:

Facebook eliminated the "Confirm Your Donation" prompt this morning, so we made the following change:

New version:

Last year the matching funds ran out in 15 seconds. We expect it to run out much faster this year. We recommend clicking the green "Donate" button within the first second after the match start time of December 3rd, 2019, at 08:00:00am EST (05:00:00am PST).

Old version:

Last year the matching funds ran out in 15 seconds. We expect it to run out much faster this year. We recommend starting the donation process early and clicking the green "Donate" button on your $500+ donation prior to the official match start, that way you can finalize your donation with one click by clicking the final gray "Donate" button within the first second after the match start time of December 3rd, 2019, at 08:00:00am EST (05:00:00am PST).

Comment by williamkiely on The Frontpage/Community distinction · 2019-11-24T04:26:00.554Z · score: 11 (5 votes) · EA · GW

it's not uncommon for a post to be accidentally published before it is finished

I suggest adding an "Are you sure you want to publish?" confirmation prompt when users click "Submit" on their draft posts to address this.

Comment by williamkiely on Choosing effective university for donations · 2019-11-24T02:06:53.082Z · score: 1 (1 votes) · EA · GW

(or specific program at a university)

How selective can you be here? As Khorton commented, this seems like it has the best chance of being worthwhile if you can restrict it to a particularly high-impact program.

If you can only give it to a university unrestricted then I seriously doubt that taking your employer up on the 3:1 match would be the best use of the money.

Comment by williamkiely on What is the size of the EA community? · 2019-11-24T01:32:49.149Z · score: 6 (4 votes) · EA · GW

estimate

27,547 estimated GiveWell donors in 2018. Helpful, thanks!

Comment by williamkiely on What is the size of the EA community? · 2019-11-24T01:27:02.552Z · score: 4 (3 votes) · EA · GW

There are 17,571 people in the Effective Altruism Facebook group, as of today (November 23rd, 2019).

Comment by williamkiely on Are we living at the most influential time in history? · 2019-10-13T23:36:01.380Z · score: 1 (1 votes) · EA · GW

Did you mean to say "assuming a 1% risk of extinction per century for 1000 centuries"?

Yes, thank you for the correction!

Comment by williamkiely on Why did MyGiving need to be replaced? And why is the EffectiveAltruism.org replacement so bad? · 2019-10-05T22:10:08.151Z · score: 27 (11 votes) · EA · GW

Functionality I would like added to the Pledge Dashboard (note: I didn't use MyGiving):

  • A comment field next to each donation. Currently I use the "Recipient" field to write the organization name plus extra notes I want to record (e.g. whether the donation was counter-factually matched).
  • The ability to see the total amount I've given to each organization.
  • A way to label each donation as being associated with a certain cause area.
  • Bar chart of my donations over time, and chart of my donations per organization, and chart of my donations per cause area (by my labeling).
  • The ability to share one's donation page with others.

Comment by williamkiely on Long-term Donation Bunching? · 2019-10-01T18:55:54.518Z · score: 4 (3 votes) · EA · GW

On the LessWrong mirror of this post, Jeff_Kaufman replied to my above comment:

I'm pretty pessimistic about GivingTuesday persisting as a way for EAs to have a large counterfactually valid impact. "Free money for sufficiently quick and organized folks" won't last.

I replied:

I agree, which is why the large benefit of getting one's donations matched compared to the tax benefits of bunching provides another (stronger) reason (in addition to the value drift reason) for people like the GWWC-donor in your original post to donate this year (on Giving Tuesday) rather than bunch by taking the standard deduction this year and giving in 2020 (or later) instead. (This is the implication I had in mind when I wrote my first comment; sorry for not writing it out then.)
I myself am in this situation. As such:
- If it turns out that Facebook doesn't offer an exploitable donation match this year, then I plan to not donate and take the standard deduction instead.
- In the hypothetical world where free matching money was guaranteed to always be available every year, I would also plan to not donate this year and would take the standard deduction instead.
- However, as seems most likely to be the case, if Facebook does offer an exploitable match this Giving Tuesday and it seems significantly less likely that I could get matched again in 2020 (as we both agree seems to be the case) then I will donate this Giving Tuesday to take advantage of the free money while it lasts.

Comment by williamkiely on Long-term Donation Bunching? · 2019-09-28T22:06:32.237Z · score: 9 (3 votes) · EA · GW

It's worth noting that the possible tax benefits are small compared to the benefit of getting one's donations matched: https://forum.effectivealtruism.org/posts/9ZRenh6bERDkoCfdX/eas-should-invest-all-year-then-give-only-on-giving-tuesday

Comment by williamkiely on The Long-Term Future: An Attitude Survey · 2019-09-19T00:15:02.113Z · score: 5 (3 votes) · EA · GW
However, we also collected qualitative responses to this question, and found that many of the people who preferred the smaller civilization over the bigger were unwilling to accept the stipulations.

Perhaps a way to avoid this problem would be to use numbers that are both significantly less than the current population, such as 2B vs 3B rather than 1B vs 10B.

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-13T06:43:56.534Z · score: 4 (4 votes) · EA · GW

Claim: The most influential time in the future must occur before civilization goes extinct.

Thoughts on whether this is true or not?

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-13T06:03:57.416Z · score: 1 (1 votes) · EA · GW

Thanks for the reply, Will. I go by Will too by the way.

for simplicity, just assume a high-population future, which are the action-relevant futures if you're a longtermist

This assumption seems dubious to me because it seems to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct work to overcome (or could benefit from direct work).

That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to benevolent AGI), then it will be more or less inevitable that life will flourish for billions of years; and if we fail to overcome that challenge, we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and being tractable, then even longtermist altruists in small-population worlds (possibly ours) who try punting to the future / passing the buck instead of doing direct work, and who thus fail to make it past the Great-Filter-like challenge, can (I claim, contrary to you, by my understanding) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn't exercise it) to make the future a big-population universe.

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-07T21:56:27.096Z · score: 4 (4 votes) · EA · GW

Under Toby's prior, what is the prior probability that the most influential century ever is in the past?

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T18:07:48.279Z · score: 14 (5 votes) · EA · GW

Another reason to think that MacAskill's method of determining the prior is flawed that I forgot to write down:

If one uses the same approach to come up with a prior that the second, third, fourth, ..., Xth century is the hingiest century of the future, and then adds these priors together, one ought to get 100%. This is true because exactly one of the set of all future centuries must be the hingiest century of the future. Yet with MacAskill's method of determining the priors, when one sums all the individual priors that the hingiest century is century X, one gets a number far greater than 100%. That is, MacAskill's estimate is that there are 1 million expected centuries ahead, so he uses a prior of 1 in 1 million that the first century is the hingiest (before the arbitrary 10x adjustment). However, his model assumes that civilization could possibly last as long as 10 billion centuries (1 trillion years). So what is his prior that e.g. the 2 billionth century is the hingiest? 1 in 1 million also? Surely this isn't reasonable, for if one uses a prior of 1 in 1 million for each of the 10 billion possible centuries, then one's summed prior that one of those 10 billion centuries is the hingiest comes out to 10,000 (i.e. 1,000,000%). One's credence in this ought to be 1 (100%) by definition.

My method of determining the prior doesn't have this problem. On the contrary, as Column J of my linked spreadsheet from the previous comment shows, the prior probability that the Hingiest Century is somewhere in the Century 1-1000 range (which I calculate by summing the individual priors for those thousand centuries) approaches 100% as the probability that civilization goes extinct in those first 1000 centuries approaches 100%.
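
A minimal sketch of this normalization check (my own illustration, not the linked spreadsheet; the helper function `hingiest_priors` is illustrative naming, and the method it implements is the one I propose in the comment below):

```python
# Flat MacAskill-style prior: 1 in 1 million per century, applied to all
# 10 billion centuries the model allows civilization to last.
n_possible_centuries = 10_000_000_000
flat_prior = 1 / 1_000_000
print(n_possible_centuries * flat_prior)  # 10000.0, i.e. 1,000,000%, not a valid distribution

# Proposed method: prior(century t is hingiest) =
#   sum over extinction centuries T >= t of P(extinct at T) / T.
# Summed over all t, this totals sum_T P(extinct at T) * T * (1/T) = 1,
# so these priors always form a proper probability distribution.
def hingiest_priors(ext_probs):
    """ext_probs[T-1] = P(civilization goes extinct in century T)."""
    n = len(ext_probs)
    return [sum(p / T for T, p in enumerate(ext_probs, start=1) if T >= t)
            for t in range(1, n + 1)]

print(sum(hingiest_priors([0.5, 0.3, 0.2])))  # 1.0 (up to float rounding)
```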

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T11:56:06.769Z · score: 20 (12 votes) · EA · GW

1. It’s a priori extremely unlikely that we’re at the hinge of history

Claim 1

I want to push back on the idea of setting the "ur-prior" at 1 in 100,000, which seems far too low to me. I will also critique the method that arrived at that number, and propose a method of determining the prior that seems superior to me.

(One note before that: I'm going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)

First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.

I'll come back to this idea when I propose my method of determining the prior, but first to critique yours:

The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future century was to estimate the expected number of centuries that civilization will survive (1,000,000) and then to try to "[restrict] ourselves to a uniform prior over the first 10%" of that expected number of centuries because "the number of future people is decreasing every century."

(Note that while I think the adjustment from 10^-6 to 10^-5 is made for a good reason and in the right direction, I think it can be left out of the prior: you can update on the fact that "the number of future people is decreasing every century" (and on other things) later, after determining the prior.)

Now to critique the method Will used to arrive at the 1 in 1,000,000 prior. It basically starts with an implicit probability distribution for when civilization is going to go extinct (good), but then compresses that into an average expected number of centuries that civilization is going to survive and (mistakenly) essentially assumes that civilization will last precisely that long. It then computes one over the average expected number of centuries to get the base rate that a given century is the hingiest (determining a base rate is good, but this isn't the right way to do it).

I propose that a better method is that one should start with the same implicit probability distribution for the expected lifespan of civilization, except make it explicit, and do the same base rate calculation but for each discrete possible length of civilization (1 century, 2 centuries, etc) instead of compressing the probability distribution for the expected lifespan of civilization into an average expected number of centuries.

That is, I'd argue that one's prior that the current century is the hingiest century of the future should be equal to one's credence that civilization will go extinct in the current century plus 1/2 times one's credence that civilization will go extinct in the second century (since there will then be two possible centuries and we are calculating a base rate), plus 1/3 times one's credence that civilization will go extinct in the third century (this is the third base rate we are summing), etc.
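
In symbols (my notation, not the post's): writing P(T) for one's credence that civilization goes extinct in century T, the proposed prior is

\[
P(\text{century 1 is hingiest}) = \sum_{T=1}^{\infty} \frac{P(T)}{T}
\]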

I've modeled an example of this here: https://docs.google.com/spreadsheets/d/1AqlfY47EmdcsE0D_uR4UlC3IuQbsCXLsq7YdtEqnyjg/edit?usp=sharing

From my "1000 Century Model", assuming a 1% per century risk of extinction per century for 1000 centuries, the prior that the first century is the hingiest is ~4.65%.

From my "90% Likely to Survive 999 Centuries Model", assuming a 10% chance of extinction in the first century, and a 0% chance of extinction every year thereafter until the 1000th century, and a 100% chance of extinction in the 1000th century, my method gives a prior of ~10.09% that the first century is the hingiest. On the other hand, since the expected number of centuries is ~900 years, MacAskill's method gives an initial prior of ~0.111% and a prior of ~1.111% after "[restricting] ourselves to a uniform prior over the first 10% [of expected centuries]". Both priors calculated using MacAskill's method are below the 10% rate of extinction in the first century, which (I claim again) obviously means they are too low.

Comment by williamkiely on Are we living at the most influential time in history? · 2019-09-04T04:59:05.144Z · score: 2 (2 votes) · EA · GW

Typo corrections:

Lots of things are a priori extremely [unlikely] yet we should have high credence in them

and

so I should update towards the cards having [not] been shuffled.

and

All other things being equal, this gives us reason to give resources to future people than to use rather than to use those resources now.

This doesn't show up on the sidebar Table of Contents:

#3: The simulation update argument against HoH

Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:24:14.653Z · score: 5 (3 votes) · EA · GW

You might find Nate Soares' series on Replacing Guilt helpful: http://mindingourway.com/guilt/

Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:21:40.273Z · score: 5 (3 votes) · EA · GW

I'll add that when I want to help people effectively I feel like Nate Soares' character Daniel in his post "On Caring" after he has undergone his mental shift:

Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.
Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?
In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.
In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

Comment by williamkiely on How do you, personally, experience "EA motivation"? · 2019-08-22T20:18:03.908Z · score: 8 (3 votes) · EA · GW

Wonderful post by Holly, thank you for sharing.

To answer Aaron's OP question, to me it just feels good in the same way that making good decisions in a game or winning a game feels good, except in a deeper, more rewarding sense (with games, the good feeling can quickly fade when I realize that winning has trivial real-world value), because I think that doing EA is essentially the life game that actually matters according to our values. It feels like I'm doing the right thing.

Note that I get my warm fuzzies from striving to do good in an EA sense. To the extent that I realize that an act of helping someone is not optimal for me to do in an EA sense, I feel less good about doing it.

Comment by williamkiely on EA-aligned podcast with Lewis Bollard · 2019-08-21T16:21:01.339Z · score: 6 (5 votes) · EA · GW

Title advice: since posts on the EA Forum are already assumed to be EA-related, something like the title of the episode would have been better than "EA-aligned podcast with Lewis Bollard", e.g. "Podcast: Lewis Bollard on Ending Factory Farming", or, if you want to distinguish it from the 80,000 Hours podcast episode on the same subject, perhaps add the date: "Podcast: Lewis Bollard on Ending Factory Farming (Aug 2019)".

Comment by williamkiely on EA-aligned podcast with Lewis Bollard · 2019-08-21T15:34:30.923Z · score: 5 (4 votes) · EA · GW

I enjoyed the podcast! I think that assuming your audience is familiar with EA like the 80,000 Hours podcast does is a good thing. Two questions I really like were (1) when you asked how Lewis feels when he sees people eating factory-farmed meat all the time and (2) when you asked Lewis to describe some of the horrible conditions that factory farmed animals live in. I really liked the thought experiment Lewis gave involving the neighbor barbecuing a piglet and your related comment about a Parks and Recreation scene.

Comment by williamkiely on Ask Me Anything! · 2019-08-20T02:34:14.787Z · score: 1 (1 votes) · EA · GW

If now isn’t the right time for longtermism (because there isn’t enough to do) and instead it would be better if there were a push around longtermism at some time in the future

Have you thought about whether there's a way you could write your book on longtermism to make it robustly beneficial even if it turns out that it's not yet a good time for a push around longtermism?

Comment by williamkiely on How likely is a nuclear exchange between the US and Russia? · 2019-06-27T03:37:05.502Z · score: 2 (2 votes) · EA · GW

Re footnote 15, did Luisa assume that the two events were independent and that's how she got the 0.02%? (In reality I would think that they are strongly correlated.)

Comment by williamkiely on Reasons to eat meat · 2019-05-12T18:25:21.157Z · score: 1 (1 votes) · EA · GW

I'm very skeptical of negative utilitarianism. There are other ways it makes sense if other non-utilitarian considerations matter, as I was saying above.

To try to point you in the direction I was thinking, I'll quote Michael Huemer below and clarify that I lean toward Huemer's view that the appropriate thing to do is "draw a line somewhere in the middle" rather than take the extreme view of strict consequentialism:

"“How large must the benefits be to justify a rights violation?” (For instance, for what number n is it permissible to kill one innocent person to save n innocent lives?) One extreme answer is “Rights violations are never justified,” but for various reasons, I think this answer [is] indefensible. Another extreme answer is consequentialism, “Rights violations are justified whenever the benefits exceed the harms” – which is really equivalent to saying there are no such things as rights. This is not indefensible, but it is very counter-intuitive. So we’re left with a seemingly arbitrary line somewhere in the middle."

When drawing the line somewhere in the middle, murdering one person to save two may not be permissible (even though under utilitarianism it is), but murdering one to save 1000 may be, say.

Similarly, under one of these "line somewhere in the middle" views, killing a sentient cow for beef may be permissible if one could be certain that the cow definitely had a net positive life; however, killing the cow may be impermissible given a certain amount of doubt (say 10%) about whether its life is net positive (even if one still thinks its life is net positive in expectation).

Comment by williamkiely on Reasons to eat meat · 2019-04-24T04:56:08.021Z · score: 3 (3 votes) · EA · GW

It seems to me that the fact that grass-fed beef cattle might not have net positive lives is a strong argument in favor of not eating grass-fed beef. My values are roughly utilitarian but I have a fair amount of moral uncertainty and it seems to me that avoiding eating meat seems like the cautious thing to do given this uncertainty.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T00:50:08.585Z · score: 4 (3 votes) · EA · GW

Great points. I agree re Double Up Drive that it was worthy of much deeper investigation to figure out how counterfactual it was. Perhaps there was even a way donors could have *made it* more counterfactual by using their donations as an opportunity to signal and influence others' behavior. I briefly considered donating to it just so that I could write a message to the donors who were offering the match, encouraging them to donate the full amount regardless of whether the match was reached, in the hope that my message would be heard.

More generally, one thing I have updated a lot on after this past giving season is that I now believe that for small donors the signaling value of their donations matters a lot more. For example, a GWWC member earning $50k/year and donating $5k/year has a certain amount of credibility that conceivably could be used to help influence much larger donors to donate more, and more effectively. How one's donation, and one's communication around it, will be perceived by much larger donors may in fact count for more than the value of the donation itself.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T01:15:43.181Z · score: 2 (2 votes) · EA · GW

I am currently more than 30% confident that I will get at least one donation matched by Facebook on Giving Tuesday 2019. This is not conditioned on Facebook doing another match this year or on anything else.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:57:50.556Z · score: 4 (3 votes) · EA · GW

Minor point: I don't think that regression to the mean is a reason to expect the probability of an EA's donation being matched in 2019 to be less than the probability of it getting matched in 2018.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:55:08.014Z · score: 6 (5 votes) · EA · GW

Thanks for writing this up. This is what I did in 2018 and I have no regrets. It clearly seemed to me to be the right thing to do after I saw my Giving Tuesday 2017 donations get matched. I considered writing a similar post to this one after Giving Tuesday 2017 to encourage others to hold off on donating until Giving Tuesday 2018, but I don't think I ever did due to (if I recall correctly):

(1) a worry that I would dissuade people from donating, some of whom would then suffer from value drift and no longer want to donate at all come Giving Tuesday, and

(2) uncertainty about whether I had sufficient evidence to give others confidence that come Giving Tuesday 2018 they'd be able to get their donations matched.

This year, for a US donor who is not concerned that value drift is a significant risk, I agree with Cullen that this is probably a great bet again.