Posts

Make a $100 donation into $200 (or more) 2021-11-01T05:07:36.344Z
How large can the solar system's economy get? 2021-07-01T02:29:04.608Z
WilliamKiely's Shortform 2021-01-21T07:01:12.860Z
[Expired] 20,000 Free $50 Charity Gift Cards 2020-12-11T20:00:57.934Z
Make a $10 donation into $35 2020-12-01T19:52:36.749Z
EA Giving Tuesday, Dec 3, 2019: Instructions for Donors 2019-11-24T04:02:34.896Z
Will CEA have an open donor lottery by Giving Tuesday, December 3rd, 2019? 2019-10-07T21:38:47.073Z
#GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving 2017-11-25T20:40:12.834Z
What is the expected effect of poverty alleviation efforts on existential risk? 2015-10-02T20:43:30.808Z
Charity Redirect - A proposal for a new kind of Effective Altruist organization 2015-08-23T15:35:29.717Z

Comments

Comment by WilliamKiely on William MacAskill - The Daily Show · 2022-09-29T21:01:13.550Z · EA · GW

I have some difficulty understanding Will's accent (as an American who grew up in the northeast--New Hampshire), though notably less difficulty than I have with plenty of Americans who aren't as articulate. Specifically, I listened to WWOTF at 2x speed and I struggled to hear everything. I know I can easily comprehend other voices at the same wpm, but with Will I was missing occasional words, and it took until the second half of the book before I felt I was really hearing everything. I wished they had gone with a professional narrator for selfish reasons, though I understood the motivation behind Will narrating it himself. (At 1x speed I have the same issues with Will's voice, except my brain has more time to process and figure out what word he said based on context, so it's not an issue.)

Comment by WilliamKiely on Let's advertise infrastructure projects · 2022-09-27T22:58:24.672Z · EA · GW

Any information I have is multiple years outdated, but her about page says "Many thanks to EA Grants, the EA Meta Fund, and the Long-Term Future Fund for supporting EA Coaching" so I assume she at least still offers some heavily discounted coaching to at least some EAs. Probably worth it for whoever is taking on the job of trying to make the list somewhat complete to reach out.

Comment by WilliamKiely on Community Builder Writing Contest: $20,000 in prizes for reflections · 2022-09-26T20:54:54.765Z · EA · GW

I also just came across this. Will DM the author to reply here.

Comment by WilliamKiely on Let's advertise infrastructure projects · 2022-09-26T16:42:26.935Z · EA · GW

Under coaching there is also https://www.trainingforgood.com/coaching and https://lynettebye.com/services.

The list in the article seems like it's probably very incomplete. I'm not aware of other similar lists others have made, but they may exist.

Comment by WilliamKiely on Comments for shorter Cold Takes pieces · 2022-09-23T23:32:21.418Z · EA · GW

[Holden won the bet](https://thezvi.substack.com/i/66658630/i-lose-a-bet). In retrospect, I think I was justified in having high confidence, and I was right that Zvi's bet was foreseeably bad. If anything, when I lowered my forecast from 95% to 68% for a couple weeks in April, I was meta-updating too much on the community median and assigning too much weight to the possibility of an extremely large "cases to true infections" adjustment.

Note that I disagree with what Zvi wrote yesterday: "In hindsight, the question of ‘what counts as Omicron’ does have a strong bearing on who had the right side of this wager, and also is a key insight into the mistake that I made here."

I disagree that that was his mistake. Even if subvariants counted as different variants, that would only increase the chance that the bet resolved ambiguously. There was (IMO) never a >70% (or even >50%) chance that Zvi would win (conditional on someone winning), even if the language of the bet considered subvariants to be different variants.

Comment by WilliamKiely on Rational Animations' Script Writing Contest · 2022-09-16T18:09:16.486Z · EA · GW

Terrific, I'm excited to see how things turn out!

Comment by WilliamKiely on Rational Animations' Script Writing Contest · 2022-09-16T18:08:22.754Z · EA · GW

And FWIW I think a lot of the essay would work well paired with an animation, such as the discussion of scope insensitivity, the story of Daniel the college student with the birds, and the mountains of problems everywhere later on.

Comment by WilliamKiely on Rational Animations' Script Writing Contest · 2022-09-15T22:03:13.352Z · EA · GW

I suggest On Caring by Nate Soares. It is ~2880 words, so slightly long, but many people have strongly recommended it over the years (myself included), such as jackva:

For me, and I have heard this from many other people in EA, this has been a deeply touching essay and is among the best short statements of the core of EA.

Comment by WilliamKiely on Rational Animations' Script Writing Contest · 2022-09-15T21:52:43.324Z · EA · GW

I'm really happy to see this contest and hope it will produce high quality scripts!

I've watched all the longtermism-relevant videos on your channel and thought they were very well done overall. To be more specific, I thought the video you released promoting WWOTF was significantly better than Kurzgesagt's video promoting WWOTF and I was disappointed Kurzgesagt hadn't used a script like yours (given their very large audience).

While I'm sure you've already thought of this, I want to highlight one concern I have about the contest, namely that your $5,000 prize may provide a much smaller incentive than a prize 2-3 times as large:

Given you're hiring a team of 9 animators to work on the next video, I'd guess that $5,000 is not a large fraction of the budget (though I could be mistaken). And in my opinion, the script matters more than the animation (e.g. see my claim that your WWOTF video was better than Kurzgesagt's despite them presumably having a much larger / more expensive animation team). So I'd question the decision to spend a lot more on animators than the script (if you are in fact doing that).

Additionally, contest participants know they are not guaranteed to win the top prize. To assess the expected hourly earnings from entering the contest, they need to discount the prize by the probability that they win. All things considered, I'm not sure that many people who could write great scripts for you would be justified in believing they'd earn a reasonable wage in expectation by participating in the contest.

Anyway, I'm sure you picked the $5,000 amount carefully and that you've already thought of the relative value of higher prize amounts, but just wanted to provide this quick feedback in case it's helpful.

The second related point of feedback is that committing to "0-4" prizes means that someone might think "even if I write the best script, they still might not choose me and I might not win any money" leading people to discount their expected earnings even more. Perhaps commit to offering some prize for the best script regardless of whether you create a video out of it?

Comment by WilliamKiely on Samotsvety's AI risk forecasts · 2022-09-14T20:37:01.017Z · EA · GW

Do the ranges 3-91.5% and 45-99.5% include or exclude the highest and lowest forecasts?

Comment by WilliamKiely on Updating on the passage of time and conditional prediction curves · 2022-09-10T10:48:12.724Z · EA · GW

Perhaps the way to enable forecasters to have automatically-updating forecasts based on the passage of time is to ask questions in pairs:

(1) Will X happen by Date?

(2) Conditional on X happening by Date, when will X happen?

The probability density function a forecaster gives for 2 can then be used to auto-update their binary forecast for 1.
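To make the mechanism concrete, here's a minimal sketch of how that auto-update could work (my own illustrative example with made-up numbers, not taken from the post): given the forecaster's answer to (1) and their conditional "when" distribution from (2), Bayes' rule mechanically lowers the binary forecast as time passes without the event.

```python
import numpy as np

# Illustrative numbers (not from the post): the forecaster answers
# (1) P(X happens by Date) = 0.60, and (2) a conditional CDF over *when*
# X happens given that it happens by Date (here, uniform over 100 days).
p0 = 0.60
days = np.arange(1, 101)
cond_cdf = days / 100.0  # P(T <= t | X happens by Date)

def auto_updated_forecast(p0, cond_cdf, t):
    """P(X by Date | no event as of day t), via Bayes' rule."""
    F_t = cond_cdf[t - 1]
    return p0 * (1 - F_t) / (1 - p0 * F_t)

# With no news, the binary forecast drifts down automatically over time:
for t in (1, 25, 50, 75, 99):
    print(t, round(auto_updated_forecast(p0, cond_cdf, t), 3))
```

Of course the forecaster's conditional distribution itself might need revisiting as news arrives, but this handles the pure passage-of-time update.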

I only skimmed the post and didn't try to follow the math, so I'm not sure if you already made this point in the post.

Comment by WilliamKiely on Driving Education on EA Topics Through Khan Academy · 2022-09-09T14:47:43.796Z · EA · GW

The EA Austin group brainstormed some more ideas. In the order they came up:

  • How many people could exist in the future?
  • Top 10 Ways Humanity Fails To Reach A Star Trek Future
  • Anthropogenic (Existential/Extinction) Risks are Much Greater than Natural Risks
  • How Do We Know Bed Nets Work?
  • Human Challenge Trials Explained
  • What It's Like To Donate Your Kidney
  • What It's Like To Donate Your Bone Marrow
  • AI Risk Explained in One Minute
  • Hack Your Happiness Scientifically (approaching meditation from a trial-and-error perspective)
  • 80,000 Hours in your career -- worth spending 1% of that time choosing a career to do good
  • Impact matters more than Overhead (when it comes to choosing charities)
  • Cultured Meat Could Prevent A Lot of Animal Suffering
  • Why worry about "Suffering-Risks" -- It'd be very bad if in the future humanity spread some of the bad stuff that happens on Earth today (e.g. extreme animal suffering) across the universe.
  • Explainer videos on e.g. Vitamin A supplementation, deworming, bed nets as malaria prevention, other GiveWell-recommended charities
  • Steelman arguments you disagree with (Steelmanning explained)
  • 5 Things More Dangerous Than Donating A Kidney
  • We are in triage every second of every day
  • Opportunity cost - explained (one of the most important concepts in economics)

Comment by WilliamKiely on Driving Education on EA Topics Through Khan Academy · 2022-09-08T03:02:37.863Z · EA · GW

Three related ideas:

The long view -- looking at history from the perspective of someone who has lived for millions or billions of years rather than decades.

This Can't Go On / Limits to Growth -- The economy can't continue to grow at the rate it has for the last several decades for more than 10,000 years. Total compute can't continue to grow at the same rate it has the last several decades for more than 350 years, since the physical limit of the maximum size computer in the observable universe would be reached by then.

This is the Dream Time -- Billions of years from now, if civilization is still around, people will look back on this era of only a few centuries that we are in now as being special and unique.

Comment by WilliamKiely on Is Civilization on the Brink of Collapse? - Kurzgesagt · 2022-08-18T01:06:04.315Z · EA · GW

I'm still seeing "Is Civilization on the Brink of Collapse?" so looks like they may have changed it back.

Comment by WilliamKiely on Is Civilization on the Brink of Collapse? - Kurzgesagt · 2022-08-16T20:58:38.276Z · EA · GW

There are a lot of highly upvoted comments saying the video needs a title change and frankly I agree the current title is not accurate for what the video discusses. I'm a little surprised and disappointed Kurzgesagt hasn't changed the title in response to the feedback. Does anyone know why they haven't changed it?

Comment by WilliamKiely on WilliamKiely's Shortform · 2022-07-31T01:02:01.349Z · EA · GW

I just asked Will about this at EAG and he clarified that (1) he's talking about non-AI risk, (2) by "much" more he means something like 8x as likely, (3) most of the non-AI risk is biorisk, and his estimate of biorisk is lower than Toby's; Will said he puts bio x-risk at something like 0.5% by 2100.

Comment by WilliamKiely on Reasons I’ve been hesitant about high levels of near-ish AI risk · 2022-07-28T23:26:49.930Z · EA · GW

Understood, thanks!

Comment by WilliamKiely on Reasons I’ve been hesitant about high levels of near-ish AI risk · 2022-07-27T16:04:20.958Z · EA · GW

Could you elaborate on the expected value of the future point? Specifically, it's unclear to me how it should affect your credence of AI risk or AI timelines.

Comment by WilliamKiely on WilliamKiely's Shortform · 2022-07-26T22:15:23.814Z · EA · GW

Will MacAskill, 80,000 Hours Podcast May 2022:

Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.

I'm flagging this as something that I'm personally unsure about and tentatively disagree with.

It's unclear how much more MacAskill means by "much". My interpretation was that he probably meant something like 2-10x more likely.

My tentative view is that catastrophes that kill 99% of people are probably <2x as likely as catastrophes that kill 100% of people.

Full excerpt for those curious:

Will MacAskill: — most of the literature. I really wanted to just come in and be like, “Look, this is of huge importance” — because if it’s 50/50 when you lose 99% of the population whether you come back to modern levels of technology, that potentially radically changes how we should do longtermist prioritization. Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
Will MacAskill: And that’s just one of very many particular issues that just hadn’t had this sufficient investigation. I mean, the ideal for me is if people reading this book go away and take one little chunk of it — that might be a paragraph in the book or a chapter of it — and then really do 10 years of research perhaps on the question.

Comment by WilliamKiely on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-08T00:06:17.112Z · EA · GW

I made this same point a couple months ago--happy to hear more people pointing it out:

Spencer Greenberg:

A question for you: if you knew for a fact that you were going to die exactly 10 years from now, how would you change your current behavior?

My answer:

I'd tell everyone because presumably it would mean people should strongly update their credence of human extinction in 10 years, e.g. from something like 0.5%-2% to 10%-50%, since the current probability that I die as part of an extinction event in exactly 10 years time (assuming I die then in some event) is approximately 10-50%.

Comment by WilliamKiely on I interviewed Sam Bankman-Fried on my podcast! · 2022-07-06T18:06:05.392Z · EA · GW

I enjoyed this. You got Sam to say "that's a really good question" several questions in a row!

Comment by WilliamKiely on Kurzgesagt - The Last Human (Longtermist video) · 2022-07-03T17:56:26.955Z · EA · GW

Note that if you sort the comments by New rather than Top, a smaller fraction say very positive things.

Comment by WilliamKiely on Kurzgesagt - The Last Human (Longtermist video) · 2022-07-02T07:37:51.189Z · EA · GW

A thing I didn't like about this video was how it mentioned asteroids and climate change and nuclear weapons and how diseases can quickly spread around the world but not the elephant in the room--AI.

Perhaps they thought that provoking questions by mentioning AI would ruin the optimistic reaction that a lot of viewers had?

Comment by WilliamKiely on Proposal: Impact List -- like the Forbes List except for impact via donations · 2022-06-08T01:52:54.166Z · EA · GW

Misc thoughts:

Doing credible cost-effectiveness estimates of all the world's top (by $ amount) philanthropists (who may plausibly make the list) seems very time-intensive.

Supposing the list became popular, I imagine people would commonly ask "Why is so-and-so not on the list?" and there'd be a need for a list of the most-asked-about-people-who-are-unexpectedly-not-on-the-list with justifications for why they are not on the list. After a few minutes of thinking about it, I'm still not sure how to avoid this. Figuring out how to celebrate top philanthropists (by impact) without claiming to be exhaustive and having people disagree with the rankings seems hard.

Comment by WilliamKiely on Proposal: Impact List -- like the Forbes List except for impact via donations · 2022-06-08T01:48:32.424Z · EA · GW

Considerations in the opposite direction:

  • Value of information of initial investments in the project. If it's not looking good after a year, the project can be abandoned when <<$10M has been spent.
  • 80/20 rule: It could influence one person to become the new top EA funder, and this could represent a majority of the money moved to high-cost-effectiveness philanthropy.
  • It could positively influence the trajectory of EA giving, such that capping the influence at 10 years doesn't capture a lot of the value. E.g. Some person who is a child now becomes the next SBF in another 10-20 years, in part due to the impact the list has on the culture of giving.
Comment by WilliamKiely on Proposal: Impact List -- like the Forbes List except for impact via donations · 2022-06-08T01:48:16.788Z · EA · GW
A very simple expected value calculation

Your estimate seems optimistic to me because:

(a) It seems likely that even in a wildly successful case of EA going more mainstream Impact List could only take a fraction of the credit for that. E.g. If 10 years from now the total amount of money committed to EA (in 2022 dollars) increased from its current ~$40B to ~$400B, I'd probably only assign about 10% or so of the credit for that growth to a $1M/year (2022 dollars) Impact List project, even in the case where it seemed like Impact List played a large role. So that's maybe $36B or so of donations the $10M investment in Impact List can take credit for.

(b) When we're talking hundreds of billions of dollars, there's significant diminishing marginal value of the money being committed to EA. So turn the $36B into $10B or something (not sure the appropriate discount). Then we're talking a 0.1%-1% chance of that. So that's $10M-$100M of value.
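Spelling the arithmetic out (a rough back-of-envelope sketch; every figure here is one of the illustrative assumptions above, not a real model):

```python
# Back-of-envelope version of the estimate above (all figures illustrative).
ea_funds_now = 40e9            # ~$40B currently committed to EA
ea_funds_success = 400e9       # ~$400B in the wildly successful scenario
credit_share = 0.10            # ~10% of the growth credited to Impact List
raw_credit = credit_share * (ea_funds_success - ea_funds_now)       # ~$36B

value_after_discount = 10e9    # diminishing-marginal-value discount: $36B -> ~$10B
p_success = (0.001, 0.01)      # 0.1%-1% chance of the wildly successful case
expected_value = [p * value_after_discount for p in p_success]

print(raw_credit / 1e9, [v / 1e6 for v in expected_value])  # 36.0 ($B), [10.0, 100.0] ($M)
```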

If a good team can be assembled, it does seem worth funding to me, but it doesn't seem as clear-cut as your estimate suggests.

Comment by WilliamKiely on What important truth do very few people agree with you on? · 2022-06-01T17:38:57.541Z · EA · GW

Related: In October 2017, "What important truth do very few effective altruists agree with you on?" was asked in the main Effective Altruism Facebook group and got 389 comments. (This is Peter Thiel's contrarian question applied to EAs.)

Comment by WilliamKiely on IPTi for malaria: a promising intervention with likely room to scale · 2022-05-19T19:13:19.502Z · EA · GW

Thank you, Miranda, the context you provided is indeed very helpful and satisfies my curiosity.

I also want to add that all the communication I've seen from GiveWell with the public recently has frankly been outstanding (e.g. on rollover funding). I'm really impressed and appreciate the great work you all are doing, keep it up!

Comment by WilliamKiely on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-05T22:43:13.566Z · EA · GW

Thanks for sharing, Julia. I think this sort of post is valuable for helping individuals make better cause prioritization decisions. A related post is Claire Zabel's How we can make it easier to change your mind about cause areas.

Providing these insights can also help us understand why others might not be receptive to working on EA causes, which can be relevant for outreach work.

(Erin commented "people aren’t gonna like EA anyways – I’ve gotten more cynical", but I'm optimistic that an EA community that better understands stories like yours could do things differently to make people more receptive to caring about certain causes on the margin.)

Comment by WilliamKiely on Erin Braid's Shortform · 2022-05-05T22:03:12.692Z · EA · GW

Interesting suggestion. I'm not familiar with anyone doing a donation match like this.

It seems like having a default charity for matching money to go to could be counterproductive to the matcher's goals. E.g. Every.org wanted to get more people to use their platform to donate. But I think many people don't really find it more valuable for money to get directed to one charity over another. EAs are different in that regard. While we're certainly not unique in caring which charities money goes to, I think many people might think "Why should I donate when the money is already going to go to charity?" and decide not to participate.

While generally I wouldn't advise people to do donation matches, would it be good for organizations already running them to make cash transfers the default use of the money if matching donors don't direct it elsewhere? Maybe. One benefit might be that it just gets people to think more about the value of directing money to one organization versus another, instead of merely thinking that they're raising more money for a charity of their choice.

Comment by WilliamKiely on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-26T19:18:30.415Z · EA · GW

(Alternatively if it's not too long but just needs to be one paragraph, use this version:)

The British mathematician I. J. Good who worked with Alan Turing on Allied code-breaking during World War II is remembered for making this important insight in a 1966 paper: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously." Today far more people are taking this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked: "If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less."

Comment by WilliamKiely on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-26T19:14:43.479Z · EA · GW

(For policy makers and tech executives. If this is too long, shorten it by ending it after the I.J. Good quote.)

The British mathematician I. J. Good who worked with Alan Turing on Allied code-breaking during World War II is remembered for making this important insight in a 1966 paper:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

I.J. Good expressed concern that we might not be able to keep this superintelligent machine under our control, and he recognized that this concern was worth taking seriously even though it was usually only talked about in science fiction. History has proven him right--today far more people are taking this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked:

If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less.

Comment by WilliamKiely on Which Post Idea Is Most Effective? · 2022-04-25T06:07:03.888Z · EA · GW
9. Massively scalable project-based community building idea

If your idea for this is good this might be the highest value post you could write from this list.

20 and 21 (before you get too familiar with EA thinking and possibly forget your origin story) also seem high value.

If 17 is a novel practical idea it's probably also worth writing about.

8 and 16 interest me.

Comment by WilliamKiely on Which Post Idea Is Most Effective? · 2022-04-25T05:58:00.185Z · EA · GW
6. Why I’m more concerned about human alignment than AI alignment; why rapidly accelerating technology will make terrorism an insurmountable existential threat within a relatively short timeframe

I was thinking about the human alignment portion of this earlier today--how bad actors with future powerful (non-AGI) AI systems at their disposal could cause a tremendous amount of damage. I haven't thought through just how severe this damage might get and would be interested in reading your thoughts on this. What are the most significant risks from unaligned humans empowered by future technology?

Comment by WilliamKiely on IPTi for malaria: a promising intervention with likely room to scale · 2022-04-25T05:43:34.323Z · EA · GW

In this comment I list out some questions and curiosities related to coordination between GiveWell and BMGF and other funders. I don't actually need answers to them, so don't worry about addressing them if doing so isn't easy:

In September 2021, we recommended a small grant to Malaria Consortium and PATH to assess the feasibility and cost-effectiveness of implementing IPTi at national scale in two countries.

I notice that this Malaria Consortium document that GiveWell is hosting on its site says there's a study funded by the Bill & Melinda Gates Foundation that "will assess IPTi’s clinical effectiveness and operational feasibility in Nigeria," and that the project runs from November 2020 – October 2024.

It sounds like GiveWell and BMGF both decided to fund MC to do a study on IPTi effectiveness at a similar time.

Also from the MC doc:

However, a decade after WHO’s recommendation, only one country — Sierra Leone — has adopted the strategy as national policy.

It seems unlikely to me that the two studies are starting around the same time by coincidence given that they are both happening about a decade after WHO's recommendation.

Can GiveWell say what the explanation for this is? E.g. Did one of GiveWell or BMGF influence the other to start a study? Or is MC responsible for reaching out to both to incite a study? Or has BMGF actually been funding studies on this for years, and I came across this recent study just because it's their latest study on IPTi?

I also wonder whether there is likely a lot more low-hanging fruit like this--policy proposals from several years ago that seem this cost-effective and that can make use of $50-$200M (~16k-67k lives saved) or more but that haven't been implemented yet because no institution is systematically following up on these recommendations to confirm their cost-effectiveness and implement them as soon as possible if worthwhile.

If so, does GiveWell have a plan to change this so that these sorts of opportunities don't go unfunded for a decade anymore? Is the answer just to hire more researchers to look for these opportunities until it's no longer worthwhile to pay more researchers to search?

Counterfactual impact of our funding – It might be that another funder would step in to support IPTi implementation in the next few years if we don’t, thus reducing the value of our funding recommendation.

It seems like the most obvious other candidate funder for this is the BMGF.

Also given that WHO originally recommended this intervention, might it be possible to get WHO to help fund much or all of it (assuming GiveWell decides it is worth funding and finds organizations able to implement IPTi)?

I'm curious if GiveWell dedicates significant attention to coordinating with other funders like BMGF and WHO for the purpose of negotiating their help in funding these worthwhile interventions.

On the one hand it'll be great for GiveWell's public image / reputation if GiveWell can say that it funded $50M in donations to IPTi at 18x cash, but on the other hand if GiveWell can get another funder to fund the opportunity instead that seems even better (assuming that the other funder's spending is less cost-effective than GiveWell's on average, since it frees up more of GiveWell's money to spend on programs more cost-effective than the programs the other funder would have funded), even if that makes it harder for GiveWell to get social credit for the impact.

Comment by WilliamKiely on IPTi for malaria: a promising intervention with likely room to scale · 2022-04-25T05:10:45.490Z · EA · GW
Our preliminary estimate suggested that IPTi may be around 18 times as cost-effective as cash transfers, which is above the range of cost-effectiveness of programs we would consider funding.[8]

Wow, that's great. 18x cash as a preliminary estimate isn't surprising to me given that I had already seen GiveWell's communications that it expected to identify a lot of funding opportunities above 8x cash in 2022, but still I can't help but notice and reflect for a moment on the fact that 18x cash is way better than cash transfers. Imagine donating $52,000 and saving ~18 lives instead of ~1.

Comment by WilliamKiely on My GWWC donations: Switching from long- to near-termist opportunities? · 2022-04-24T00:17:39.679Z · EA · GW

Note that Toby Ord has long given 10% to global poverty. He doesn't explain why in the linked interview despite being asked "Has that made you want to donate to more charities dealing on “long-termist” issues? If not, why not?"

My guess is that he intentionally dodged the question because the true answer is that he continues to donate to global poverty charities because he thinks the signaling value of him donating to global poverty charities is greater than the signaling value of him donating to longtermist charities and yet saying this explicitly in the interview would likely have undermined some of that signaling value.

In any case, I think those two things are true, and think the signaling value represents the vast majority of the value of his donations, so his decision seems quite reasonable to me, even assuming there are longtermist giving opportunities available to him that offer more direct impact per dollar (as I believe).

For other small donors whose donations are not so visible, I still think the signaling value is often greater than the direct value of the donations. Unlike in Toby Ord's case though, for typical donors I think the donations with the highest signaling value are usually the donations with the highest direct impact.

There are probably exceptions though, such as if you often introduce effective giving to people by talking about how ridiculously inexpensive it is to save someone's life. In that case, I think it's reasonable for you to donate a nontrivial amount (even up to everything you donate, potentially) to e.g. GiveWell's MIF even if you think the direct cost-effectiveness of that donation is less, since the indirect effect of raising the probability of getting the people you talk to into effective giving and perhaps eventually into a higher impact career path can plausibly more than make up for the reduced direct impact.

An important consideration related to all of this that I haven't mentioned yet is that large donors (e.g. Open Phil and FTX) could funge your donations. I.e. You donate more to X, so they donate less to it and more to the other high-impact giving opportunities available to them, such that the ultimate effect of your donation to X is to only increase the amount of funding for X a little bit and to increase the funding for other, better things more. I don't know if this actually happens, though I often hope it does.

(For example, I hope it does whenever I seize opportunities to raise funds for EA nonprofits that are not the nonprofits that I believe will use marginal dollars most cost-effectively. E.g. During the last every.org donation match I directed matching funds to 60+ EA nonprofits due to a limit on the match amount per nonprofit, despite thinking many of those nonprofits would use marginal funds less than half as cost-effectively as the nonprofits that seemed best to me. My hope was that large EA funders would correct the allocation by giving less to the nonprofits I gave to and more to the highest-cost-effectiveness giving opportunities than they otherwise would have, thereby making my decision the right call.)

Comment by WilliamKiely on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:35:51.340Z · EA · GW
projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries

Michael Wiebe comments: "Can we please stop talking about GDP growth like this? There's no growth dial that you can turn up by 0.01, and then the economy grows at that rate forever. In practice, policy changes have one-off effects on the level of GDP, and at best can increase the growth rate for a short time before fading out. We don't have the ability to increase the growth rate for many centuries."

Comment by WilliamKiely on "Long-Termism" vs. "Existential Risk" · 2022-04-08T04:14:44.857Z · EA · GW
But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%.

Agreed. Linch's .01% Fund post proposes a research/funding entity that identifies projects that can reduce existential risk by 0.01% for $100M-$1B. That works out to 3x-30x the cost-effectiveness of the quoted figure, while targeting a reduction 100x the size.
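For clarity, the comparison works out roughly like this (a quick sketch of the arithmetic, nothing more):

```python
# Rough arithmetic behind the 3x-30x comparison above.
quoted_cost_per_0_0001pct = 30e6                  # $30M for a 0.0001% x-risk reduction
linch_cost_range = (100e6, 1e9)                   # $100M-$1B for a 0.01% reduction
linch_cost_per_0_0001pct = [c / 100 for c in linch_cost_range]   # 0.01% = 100 x 0.0001%

ratios = [quoted_cost_per_0_0001pct / c for c in linch_cost_per_0_0001pct]
print(linch_cost_per_0_0001pct, ratios)   # [$1M, $10M] per 0.0001% -> 30x and 3x as cost-effective
```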

Comment by WilliamKiely on [deleted post] 2022-03-29T00:05:51.838Z

As of now (March 2022) all of the most upvoted posts for this tag are past events from ~1-10 months ago, so people clearly haven't been downvoting per the instructions.

Comment by WilliamKiely on Predicting for Good: Charity Prediction Markets · 2022-03-24T22:29:06.850Z · EA · GW

Manifold feels like it has a lot of bad or mediocre questions, which are annoying to sort through. This is in part due to how anyone can submit any question. But even for the questions that try to be good, they often aren't as good as they would be if they were subject to the community feedback period that Metaculus has (allowing users to suggest wording improvements, better operationalizations, or improvements to the resolution criteria, etc.).

I don't like that even when I think the market price is wrong, it usually doesn't make sense to spend more than the M$ 20 loan on buying shares (due to how it can tie your money up for a long time). And it's hard to make money time-efficiently making only M$ 20 bets, so that usually means it's not worth spending time evaluating questions that you know it won't be worth betting more than M$ 20 on anyway. (Or if you don't care that much since it's not real money, on most questions you'll just be making quick, low-information M$ 20 bets.)

I also don't like that I can't easily tell with high confidence whether a given question will be reliably resolved accurately in a timely manner (due to resolution being done by the question creator without any other oversight).

The top trader leaderboard rank still doesn't seem like a very useful indicator of forecasting skill, which is what I think a leaderboard should try to signal. Metaculus's ranking over-weights participation in my view so it isn't great either, but it still seems a lot better than Manifold currently. (Manifold probably also over-weights participation.)

Comment by WilliamKiely on Predicting for Good: Charity Prediction Markets · 2022-03-24T19:03:38.736Z · EA · GW
Is there any way it could cause major harm?

Another concern is that subsidizing Manifold Markets with a lot of money could change the incentive landscape for forecasters in general, e.g. causing people who used to forecast on Metaculus to spend much less time there and more time on Manifold instead.

Comment by WilliamKiely on The BEAHR: Dust off your CVs for the Big EA Hiring Round! · 2022-03-24T18:56:05.505Z · EA · GW

Here's an example of an FTX Future Fund application requesting $1M that doesn't mention using the money to hire anyone. Instead, the plan is to use the money to subsidize charity prediction markets.

Comment by WilliamKiely on WilliamKiely's Shortform · 2022-03-15T18:48:12.484Z · EA · GW

A possible story on how the value of a longtermist's life might be higher in a post-London-gets-nuked world than in today's world (from my comment replying to Ben Todd's comment on this Google Doc):

--------

I think what we actually care about is value of a life if London gets nuked relative to if it doesn't rather than quality-adjusted life expectancy.

This might vary a lot depending on the person. E.g. For a typical person, life after London gets nuked is probably worth significantly less (as you say), but for a longtermist altruist it seems conceivable that life is actually worth more after a nuclear war. I'm not confident that's the case in expectation (more research is needed), but here's a possible story:

Perhaps after a Russia-US nuclear war that leaves London in ruins, existential risk this century is higher because China is more likely to create AGI than the West (relative to the world in which nuclear war didn't occur) and because it's true that China is less likely to solve AI alignment than the West. The marginal western longtermist might make more of a difference in expectation in the post-war world than in the world without war due to (1) the absolute existential risk being higher in the post-war world and (2) there being fewer qualified people alive in the post-war world who could meaningfully affect the development of AGI.

If the longtermist indeed makes more of a difference to raising the probability of a very long-lasting and positive future in the post-war world than in the normal-low-risk-of-nuclear-war world, then the value of their life is higher in the post-war world, and so it might make sense to use >50 years of life left for this highlighted estimate. Or alternatively, saving 7 hours of life expectancy in a post-war world might be more like saving 14 hours of life in a world with normal low nuclear risk (if the longtermist's life is twice as valuable in the post-war world).

Comment by WilliamKiely on WilliamKiely's Shortform · 2022-03-15T18:45:17.073Z · EA · GW

My response on Facebook to Rob Wiblin's list of triggers for leaving London:

--------

Some major uncertainties:
(a) Risk of London getting nuked within a month conditional on each of these triggers
(b) Value of a life today (i.e. willingness to pay to reduce risk of death in a world with normal levels of nuclear risk)
(c) Value of a life in a post-London-gets-nuked world (i.e. willingness to pay to increase chance that Rob Wiblin survives London getting nuked)

(Note: (c) might be higher than (b) if it's the case that one can make more of a difference in the post-nuclear-war world in expectation.)

Using the 16 micromorts per month risk of death by nuke of staying in London estimate from March 6th[1] and assuming you'd be willing to pay $10M-$100M[2] of your own money to avert your death (i.e. $10-$100 per micromort), that means on March 6th it would have made sense (while ignoring nuclear risk) to leave London for a month if you'd rather (taking into account altruistic impacts) leave London for a month than pay $160-$1,600 to stay for a month (or alternatively that you'd leave London for a month if someone paid you $160-$1,600 to do so).

I think that triggers 1-9 probably all increase the risk of London getting nuked to at least 2x what the risk was on March 6th, so assuming you'd be happy to leave for a month for $320-$3,200 (ignoring nuclear risk) (which seems reasonable to me if your productivity doesn't take a significant hit), then I think I agree with your assessment of whether to leave.

However, it seems worth noting that for a lot of EAs working in London whose work would take a significant hit by leaving London, it is probably the case that they shouldn't leave in some of the scenarios where you say they should (specifically the scenarios where the risk of London getting nuked would only be ~2 times higher (or perhaps ~2-10 times higher) than what the risk was on March 6th). This is because even using the $100 per micromort value of life estimate, it would only cost $3,200/166.7=$20 extra per hour for an EA org to hire their full-time employee at that significantly higher productivity, and that seems like it would be clearly worth doing (if necessary) for many employees at EA orgs.

It seems hard to imagine how an EA would be willing to pay $100 to reduce the risk of death of someone by one micromort (which increases the life expectancy of someone with a 50 year life expectancy by 0.438 hours and the expected direct work of someone with 60,000 hours of direct work left in their career by 0.06 hours) and not also be willing to pay $20 to increase the expected direct work they do by 1 hour. The only thing I'm coming up with that might make this somewhat reasonable is if one thinks one life is much more valuable in a post-nuclear-war world than in the present world.

It might also make more sense to just think of this in terms of expected valuable work hours saved and skip the step of assessing how much you should be willing to pay to reduce your risk of death by one micromort (since that's probably roughly a function of the value of one's work output anyway). Reducing one's risk of death by 16 micromorts saves ~1 hour of valuable work in expectation if that person has 60,000 hours of valuable work left in their career (16/(10^6)*60,000=0.96). If leaving would cost you one hour of work in expectation, then it wasn't worth leaving, assuming the value of your life comes entirely from the value of your work output. This also ignores the difference in value of your life in a post-nuclear-war world compared to today's world; you should perform an adjustment based on this.
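Putting the arithmetic from the last few paragraphs in one place (a sketch using the same assumed figures from this comment, which are themselves rough estimates):

```python
# Collecting the arithmetic above (all inputs are the assumed figures from the comment).
micromorts_per_month = 16              # risk of death by nuke from staying in London (Mar 6 estimate)
dollars_per_micromort = (10, 100)      # from a $10M-$100M willingness to pay to avert death
cost_to_stay = [micromorts_per_month * d for d in dollars_per_micromort]     # $160-$1,600 per month

work_hours_per_month = 166.7
extra_cost_per_hour = 3200 / work_hours_per_month        # ~$20/hour at the 2x-risk threshold

hours_per_micromort_50y = 50 * 365.25 * 24 / 1e6         # ~0.438 life-expectancy hours saved
career_hours_left = 60_000
work_hours_saved = micromorts_per_month / 1e6 * career_hours_left   # ~0.96 expected work hours

print(cost_to_stay, round(extra_cost_per_hour, 1),
      round(hours_per_micromort_50y, 3), round(work_hours_saved, 2))
```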

[1] https://docs.google.com/document/d/1xrLokMs6fjSdnCtI6u9P5IwaWlvUoniS-pF2ZDuWhCY/edit

Comment by WilliamKiely on What's your prior probability that "good things are good" (for the long-term future)? · 2022-03-15T18:01:05.329Z · EA · GW

I came here to say this--in particular that I think my prior probability for "good things are good for the long-term future" might be very different than my prior for "good things are good for the long-term future in expectation", so it matters a lot which is being asked.

I think the former is probably much closer to 50% than the latter. These aren't my actual estimates, but for illustrative purposes I think the numbers might be something like 55% and 90%.

I agree with Eli that my actual estimates would also depend on the other questions Eli raises.

Another factor that might affect my prior a lot is what the reference class of "good things" looks like. In particular, are we weighting good things based on how often these good things are done / how much money is spent on them, or weighting them once per unique thing as if someone were generating a list of good things? E.g. Does "donation to a GiveWell top charity" count a lot, or once? (Linch's wording at the end of the post makes it seem like he means the latter.)

Perhaps it would be helpful to Linch's question to generate a list of 10-20 "good things" and then actually think about each one carefully and estimate the probability that it is good for the future, and good for the future in expectation, and use these 10-20 data points to estimate what one's prior should be. (Any thoughts on whether this would be a worthwhile research activity, Linch or others reading this?)

Comment by WilliamKiely on The Future Fund’s Project Ideas Competition · 2022-03-08T00:12:12.108Z · EA · GW

Recruitment agencies for EA jobs

Empowering Exceptional People, Effective Altruism

There are hundreds of organizations in the effective altruism ecosystem and even more high-impact job openings. Additionally, there are new organizations and projects we’d like to fund that need to recruit talent in order to establish founding teams and grow. Many of these often lack adequate resources to do proper recruiting. As such, we’d be excited to fund EA-aligned recruitment agencies to help meet these hiring needs by matching talented job-seekers with high-impact roles based on their skills and personal fit.

----------------

(Also submitted via the Google Form.)

Other very similar ideas: Lauren Reid’s Headhunter Office idea and aviv’s Operations and Execution Support for Impact idea.

Comment by WilliamKiely on We need more nuance regarding funding gaps · 2022-02-18T01:57:54.105Z · EA · GW

(Wrote this earlier; just submitting it as a comment now before discussing this post with EA Austin.)

I didn't find the chart easy to understand. Like Jonas, I couldn't figure out what the numbers in brackets mean.

Additionally, I couldn't tell if the limited / middling / strong classifications were just intuitive judgment calls based on Joey's experience, or if they were supposed to represent something objective.

I couldn't tell whether limited / middling / strong was supposed to be a measure of availability of funding relative to demand, availability of funding in absolute terms, a measure of how easy it is for someone looking to get funding in this area for a project of a given expected cost-effectiveness (and if so whether the cost-effectiveness standard is different or the same for each cause area, or what the standard was at all), or something else.

The combination of the table seeming fairly vague and the fact that some people in the comments strongly disagreed with some of the classifications (e.g. for biorisk) makes me concerned that the table is not as informative as I initially assumed it would be (given that the post had 197 karma when I saw it), leading me to be concerned that some people might assume the classifications in the table mean more than they do. I.e. I'm afraid that this post won't do a good job adding the nuance to the funding gap discussion that it intended to.

Comment by WilliamKiely on A New Book to Introduce People to Ethical Vegetarianism · 2022-02-11T01:53:11.852Z · EA · GW
I think it should be the new standard text in effective altruism fellowships and discussion groups to introduce issues surrounding eating meat.

I want to flag that I don't think that a text on issues surrounding eating meat belongs in an introductory EA fellowship curriculum. While I liked Huemer's Dialogues on Ethical Vegetarianism and am an ethical vegan myself, I don't think going vegan is actually very relevant to the project of doing the most good possible.

I agree Huemer's book is tangentially relevant to EA in the sense that if a person doesn't think there's anything wrong with how typical animal agriculture is conducted then I think they're likely going to have a big blind spot preventing them from properly comprehending the scale of animal suffering that exists in the world, which could potentially prevent them from doing the most good if it turns out their comparative advantage is doing something to help animals.

But I know plenty of EAs who are aware factory farming is awful, yet find going vegetarian or vegan personally difficult and instead opt to be reducetarian or in some cases not change their diet at all. While I agree with Huemer that this is morally problematic, from an EA perspective I think it's more or less fine in the sense that I think personal dietary change represents a very small impact compared to other things that EAs often do, meaning such people can still do a very large amount of good despite not changing their diet. So I fear that bludgeoning them over the head with a text arguing that they're acting immorally would not be appropriate in the context of an EA curriculum meant to educate them on concepts related to doing the most good possible. Inclusion of the text would seem to suggest to newcomers that EAs believe that a person changing their diet is big part of doing as much good as they can, which I don't think is true, and I don't think most EAs think is true.

A more relevant text to include in an Intro to EA curriculum in my view would be one that focuses on describing the animal suffering that exists in the world or effective efforts to reduce the suffering, rather than one that focuses on how dietary change is morally necessary. Some suggestions in this vein:

Comment by WilliamKiely on A New Book to Introduce People to Ethical Vegetarianism · 2022-02-11T00:57:08.931Z · EA · GW
By the way, if you want a legal free copy of the book, a previous draft was published in Between the Species.  You can find it here.

Perhaps this should be added to the main post.