Posts

Notes on "A World Without Email", plus my practical implementation 2022-06-20T15:34:53.335Z
Human survival is a policy choice 2022-06-03T18:53:50.599Z
The chance of accidental nuclear war has been going down 2022-05-31T14:48:26.560Z
In current EA, scalability matters 2022-03-03T14:42:03.762Z
We’re Rethink Priorities. Ask us anything! 2021-11-15T16:25:05.734Z
Rethink Priorities - 2021 Impact and 2022 Strategy 2021-11-15T16:17:34.212Z
Peter Wildeford's Shortform 2021-10-10T19:21:50.088Z
Notes on "Managing to Change the World" 2021-10-09T02:17:14.462Z
Help Rethink Priorities Use Data for Animals, Longtermism, and EA 2021-07-05T17:20:59.662Z
Please Take the 2020 EA Survey 2020-11-11T16:05:51.462Z
US Non-Profit? Get Free* Money From the Gov on 3 Apr! 2020-04-01T18:07:54.351Z
Coronavirus Research Ideas for EAs 2020-03-27T21:01:48.181Z
We're Rethink Priorities. AMA. 2019-12-12T16:09:19.404Z
Rethink Priorities 2019 Impact and Strategy 2019-12-02T16:32:25.324Z
Please Take the 2019 EA Survey! 2019-09-23T17:36:35.084Z
GiveWell's Top Charities Are Increasingly Hard to Beat 2019-07-10T00:34:52.510Z
EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? 2019-06-16T23:04:46.626Z
Is EA Growing? EA Growth Metrics for 2018 2019-06-02T04:08:30.726Z
EA Survey 2018 Series: How Long Do EAs Stay in EA? 2019-05-31T00:32:20.989Z
Rethink Priorities Plans for 2019 2018-12-18T00:18:31.987Z
Open Thread #40 2018-07-08T17:51:47.777Z
Animal Equality showed that advocating for diet change works. But is it cost-effective? 2018-06-07T04:06:02.831Z
Cost-Effectiveness of Vaccines: Appendices and Endnotes 2018-05-08T07:43:43.262Z
Cost-Effectiveness of Vaccines: Exploring Model Uncertainty and Takeaways 2018-05-08T07:42:53.369Z
What is the cost-effectiveness of researching vaccines? 2018-05-08T07:41:10.595Z
How much does it cost to roll-out a vaccine? 2018-02-26T15:33:03.710Z
How much does it cost to research and develop a vaccine? 2018-02-24T01:23:33.601Z
What is Animal Farming in Rural Zambia Like? A Site Visit 2018-02-19T20:49:45.024Z
Four Organizations EAs Should Fully Fund for 2018 2017-12-12T07:17:14.418Z
Is EA Growing? Some EA Growth Metrics for 2017 2017-09-05T23:36:39.591Z
How long does it take to research and develop a new vaccine? 2017-06-28T23:20:04.289Z
Can we apply start-up investing principles to non-profits? 2017-06-27T03:16:49.074Z
The 2017 Effective Altruism Survey - Please Take! 2017-04-24T21:01:26.039Z
How do EA Orgs Account for Uncertainty in their Analysis? 2017-04-05T16:48:45.220Z
How Should I Spend My Time? 2017-01-08T03:22:46.745Z
Effective Altruism is Not a Competition 2017-01-05T02:11:23.505Z
Semi-regular Open Thread #35 2016-12-30T22:28:48.381Z
Why I Took the Giving What We Can Pledge 2016-12-28T00:02:57.065Z
The Value of Time Spent Fundraising: Four Examples 2016-12-23T04:35:25.797Z
What is the expected value of creating a GiveWell top charity? 2016-12-18T02:02:16.774Z
How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation) 2016-12-09T03:08:25.796Z
Thoughts on the Reducetarian Labs MTurk Study 2016-12-02T17:12:44.731Z
Using a Spreadsheet to Make Good Decisions: Five Examples 2016-11-26T02:21:29.740Z
Students for High Impact Charity: Review and $10K Grant 2016-09-27T21:05:44.340Z
A Method for Automatic Trustworthiness in Study Pre-Registration 2016-09-25T04:22:38.817Z
Using Amazon's Mechanical Turk for Animal Advocacy Studies: Opportunities and Challenges 2016-08-02T19:24:58.259Z
Five Ways to Handle Flow-Through Effects 2016-07-28T03:39:44.235Z
End-Relational Theory of Meta-ethics: A Dialogue 2016-06-28T20:11:52.534Z
How should we prioritize cause prioritization? 2016-06-13T17:03:45.558Z
A Case for Empirical Cause Prioritization 2016-06-06T17:32:43.818Z

Comments

Comment by Peter Wildeford (Peter_Hurford) on New US Senate Bill on Catastrophic Risk Mitigation [Linkpost] · 2022-07-04T03:15:09.937Z · EA · GW

Per https://www.hsgac.senate.gov/media/minority-media/portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security- Portman is also said to be on the bill, making it bipartisan, but we should note that Portman is retiring this term.

Comment by Peter Wildeford (Peter_Hurford) on Announcing Epoch: A research organization investigating the road to Transformative AI · 2022-06-28T00:19:38.760Z · EA · GW

Thanks! We’re very excited to be both an accelerant and a partner for Epoch’s work.

Comment by Peter Wildeford (Peter_Hurford) on Red-teaming Holden Karnofsky's AI timelines · 2022-06-25T17:44:53.854Z · EA · GW

Thanks for putting this together! I think more scrutiny on these ideas is incredibly important so I'm delighted to see you approach it.

So meta to red team a red team, but some things I want to comment on:

  • Your median estimates for the conservative and aggressive bioanchor reports in your table are accidentally flipped (2090 is the conservative median, not the aggressive one, and vice versa for 2040).

  • Looking literally at Cotra's sheet, the median year is 2053. Though in Cotra's report, you're right that she rounds this to 2050 and reports this as her official median year. So I think the only difference between your interpretation and Holden's interpretation is just different rounding.

  • I do agree more precise definitions would be helpful.

  • I don't think it makes sense to deviate from Cotra's best guess and create a mean by aggregating between the conservative and aggressive estimates. We shouldn't assume these estimates are symmetric such that the mean lies in the middle under some aggregation method. Instead, I think we should take Cotra's report literally: the mean of the distribution is where she says it is (it is her distribution to define how she wants), which would be the "best guess". In particular, her aggressive vs. conservative range does not represent any sort of formal confidence interval, so we can't interpret it that way. I have some unpublished work where I re-run a version of Cotra's model where the variables are defined by formal confidence intervals - I think that would be the next step for this analysis.

  • The "Representativeness" section is very interesting and I'd love to see more timelines analyzed concretely and included in aggregations. For more reviews and analysis that include AI timelines, you should also look to "Reviews of “Is power-seeking AI an existential risk?”". I also liked this LessWrong thread where multiple people stated their timelines.
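For anyone curious what that next step could look like, here is a minimal sketch of a Monte Carlo version of a bioanchors-style model where each input is defined by a formal 90% confidence interval. Every specific number below (the intervals, the budget, the 2025 anchor) is a placeholder I made up for illustration, not Cotra's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_ci(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles are (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # z for a 90% CI
    return rng.lognormal(mu, sigma, size)

# Hypothetical 90% CIs for three model inputs (illustrative values only):
flops_needed = lognormal_from_ci(1e30, 1e36, N)          # compute for TAI (FLOP)
flops_per_dollar_2025 = lognormal_from_ci(1e17, 1e19, N)  # hardware price today
halving_time_years = lognormal_from_ci(1.5, 4.0, N)       # cost halving time

# Year when a fixed hypothetical budget can buy enough compute.
budget = 1e9  # dollars
years_needed = halving_time_years * np.log2(
    flops_needed / (flops_per_dollar_2025 * budget)
)
arrival_year = 2025 + np.clip(years_needed, 0, None)

print(np.percentile(arrival_year, [5, 50, 95]))
```

The point of defining inputs this way is that the output percentiles are then a real (if model-dependent) credible interval, rather than an informal conservative-to-aggressive range.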

Comment by Peter Wildeford (Peter_Hurford) on Save the date: EAGxVirtual 2022 · 2022-06-25T01:21:10.600Z · EA · GW

I’m super excited for this! I love the in-person conferences, but I think virtual conferences also fill an important void for people who for a variety of reasons can’t easily travel to the US or UK.

Comment by Peter Wildeford (Peter_Hurford) on Preventing a US-China war as a policy priority · 2022-06-23T01:41:42.737Z · EA · GW

If you have time, could you (or someone else) explain strategic ambiguity with regard to the US against China? I never really understood it because my understanding is that deterrence relies on clear communication and a lot of wars arise from miscalculations around how likely an adversary is to engage.

Comment by Peter Wildeford (Peter_Hurford) on Notes on "A World Without Email", plus my practical implementation · 2022-06-21T23:00:02.760Z · EA · GW

Sounds exciting!

I’m curious how that will work for people who aren’t self-employed teams of one?

Comment by Peter Wildeford (Peter_Hurford) on Notes on "A World Without Email", plus my practical implementation · 2022-06-21T22:59:02.302Z · EA · GW

I didn’t mention it but I do use that actually

Comment by Peter Wildeford (Peter_Hurford) on Critiques of EA that I want to read · 2022-06-20T04:28:14.373Z · EA · GW

Right now the thing we are most interested in is finding a strong candidate to work on the Insect Welfare Project full-time: https://careers.rethinkpriorities.org/en/jobs/50511

Donations would also be helpful. This kind of stuff can be harder to find financial support for than other things in EA. https://rethinkpriorities.org/donate

Comment by Peter Wildeford (Peter_Hurford) on Emphasize Vegetarian Retention · 2022-06-12T18:19:31.299Z · EA · GW

Will do!

Comment by Peter Wildeford (Peter_Hurford) on Emphasize Vegetarian Retention · 2022-06-11T22:49:46.433Z · EA · GW
  • More time periods
  • Better question wording
  • 2.8x bigger sample size

Comment by Peter Wildeford (Peter_Hurford) on Emphasize Vegetarian Retention · 2022-06-11T18:46:49.049Z · EA · GW

However, polls suggest that the percentage of the population that’s vegetarian has stayed basically flat since 1999

I think there are three nitpicks I'd make here:

1.) The sample size of the poll you cite (margin of error of +/- 4%) is typically not large enough to detect subtle shifts in the percentage of vegetarians, especially since the initial population is so small; the veg rate could approximately double and still have a ~50% chance of not being detected by the poll.

2.) As you may know, asking people whether they are vegetarian/vegan in a poll is a fairly fraught concept, since we know that people frequently say "Yes" to this question while also saying "Yes" to eating meat.

3.) I think looking at a better collection of polls actually does find a positive upward trend, going from ~2.5% in 1999 to ~6% in 2022. These polls also solve (2) with better question wording.

None of this should be taken to undermine your point though - I do think veg retention is a large issue and more marginal work on it would be helpful.
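As a rough illustration of the power issue in (1), here is a quick simulation of a two-proportion z-test comparing two polls. The sample size, base rate, and doubled rate are all assumed values I picked for illustration, so the exact detection probability will vary with the real poll's design:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(n, p0, p1, sims=20_000):
    """Share of simulated poll pairs where a two-proportion z-test
    (two-sided, alpha = 0.05) detects the change from p0 to p1."""
    x0 = rng.binomial(n, p0, sims)  # veg respondents, earlier poll
    x1 = rng.binomial(n, p1, sims)  # veg respondents, later poll
    phat0, phat1 = x0 / n, x1 / n
    pooled = (x0 + x1) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n)
    z = (phat1 - phat0) / np.maximum(se, 1e-12)
    return np.mean(np.abs(z) > 1.96)  # 1.96 = two-sided 5% critical value

# A poll with a +/-4% margin of error has n of roughly 600.
power = detection_rate(n=600, p0=0.03, p1=0.06)
print(f"Chance of detecting a doubling from 3% to 6%: {power:.0%}")
```

Plugging in different assumed sample sizes and rates shows how quickly power collapses for rare traits like vegetarianism.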

Comment by Peter Wildeford (Peter_Hurford) on Emphasize Vegetarian Retention · 2022-06-11T18:36:43.181Z · EA · GW

The "*" is meant to be a glob/wildcard rather than a censor.

Comment by Peter Wildeford (Peter_Hurford) on The dangers of high salaries within EA organisations · 2022-06-10T22:03:00.643Z · EA · GW

Speaking for Rethink Priorities, I'd just like to add that benchmarking to market rates is just one part of how we set compensation, and benchmarking to academia is just one part of how we might benchmark to market rates.

In general, academic salaries are notoriously low, and I think this is harmful for building long-term relationships with talent, since we want people to be able to afford the life we want them to be able to live. Also, we want to be able to attract the top tier of research assistants, and a higher salary helps with that.

Comment by Peter Wildeford (Peter_Hurford) on Announcing the launch of Open Phil's new website · 2022-06-09T19:33:29.857Z · EA · GW

I found a few bugs:

Comment by Peter Wildeford (Peter_Hurford) on What does ‘one dollar of value’ mean? · 2022-06-07T06:04:16.892Z · EA · GW

I’ve wondered this a lot myself and find this lack of clarity to always be an issue. I personally think something in the realm of 9 makes the most sense, and I personally define “$X of value” as “as good as shifting $X from a morally neutral use to being donated to GiveDirectly”. It helps that I roughly expect GiveDirectly to have linear returns, even in the range of billions spent. But I do try to make this explicit in a footnote or something when I discuss value.

Another good idea in the realm of 9 is how GiveDirectly defines their ROI:

We measure philanthropic impact in units of the welfare gained by giving a dollar to someone with an annual income of $50,000, which was roughly US GDP per capita when we adopted this framework.

Comment by Peter Wildeford (Peter_Hurford) on The 2015 Survey of Effective Altruists: Results and Analysis · 2022-06-04T03:36:31.338Z · EA · GW

The original results are hosted on a site that no longer works, so the results have been moved here: https://rethinkpriorities.org/s/EASurvey2015.pdf

Comment by Peter Wildeford (Peter_Hurford) on The 2014 Survey of Effective Altruists: Results and Analysis · 2022-06-04T03:36:05.145Z · EA · GW

The previous link to the survey results died, so I edited to update the link.

Comment by Peter Wildeford (Peter_Hurford) on Introducing EAecon: Community-Building Project · 2022-05-29T00:18:25.243Z · EA · GW

I hope someday you organize a convention and call it EAEconCon

Comment by Peter Wildeford (Peter_Hurford) on Should we be hiring more “unqualified” people? · 2022-05-15T15:22:27.869Z · EA · GW

I can't think of any problem area where I'd be excited to actively hire a ton of people without vetting or supervision, but I agree that just because I can't think of one doesn't mean that one doesn't exist.

Also, as you and others mention, giving out prizes or bounties could work well if you have an area where you could easily evaluate the quality of a piece of work.

Comment by Peter Wildeford (Peter_Hurford) on Should we be hiring more “unqualified” people? · 2022-05-15T02:34:37.172Z · EA · GW

I think the core issue with your idea is that the problems we are interested in are all ones where progress is very difficult, where it's very difficult to evaluate the quality of someone's work, and where it's very hard for people to make progress without lots of guidance and feedback. So you cannot just throw a ton of people at the problem and expect it to work well.

I like the idea of giving more people opportunities though, and I like that Rethink Priorities plays a role in this by trying to hire a lot of people to do research. But we find it requires a lot of mentorship and management for people to do well.

Comment by Peter Wildeford (Peter_Hurford) on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-14T19:25:06.887Z · EA · GW

This matches my personal experience as well.

Comment by Peter Wildeford (Peter_Hurford) on Intro and practical ideas around Salesforce within EA · 2022-05-14T02:31:38.388Z · EA · GW

Can you elaborate more on what benefits an organization might get from Salesforce?

Comment by Peter Wildeford (Peter_Hurford) on What are some high-EV but failed EA projects? · 2022-05-14T01:14:49.104Z · EA · GW

I think three key differences:

  • By 2018, we had more of a track record before starting.

  • For the 2018 attempt, we self-funded for six months before seeking funding to build an even bigger track record, rather than trying to get funding right at the beginning.

  • EA funding was notably more plentiful in 2018 than 2016. (Though still notably less plentiful than in 2022.)

Comment by Peter Wildeford (Peter_Hurford) on What are some high-EV but failed EA projects? · 2022-05-13T19:55:43.943Z · EA · GW

Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and me, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction, so we gave up for >1 year before trying again. Given that RP-2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed project?

Comment by Peter Wildeford (Peter_Hurford) on Bad Omens in Current Community Building · 2022-05-13T17:10:47.999Z · EA · GW

I think it will be really important for EAs to engage in more empirical work to understand how people think about EA. Of course you don't want people to feel like they're being fed the results of a script tested by a focus group (that's the whole point of this post), but you do want to actually know in reliable ways how bad some of these problems are, how things are resonating, and how to do better in a genuine and authentic way. Empirical results should be a big part of this (though not all of it), but right now they aren't, and this seems bad. Instead, we frequently confuse "what my immediate friends in my immediate network think about EA" with "what everyone thinks about EA" and I think this is a mistake.

This is something Rethink Priorities is working on this year, though we invite others to do similar work. I think there's a lot we can learn!

Comment by Peter Wildeford (Peter_Hurford) on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-13T15:55:45.055Z · EA · GW

Do you think it was a mistake to put "FTX" in the "FTX Future Fund" so prominently? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run the risk that, if FTX has any big scandal or other issue, it could cause blowback on EA, whether merited or not.

I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I'm skeptical this has worked in practice.

To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-05-11T18:53:51.087Z · EA · GW

I will follow up tomorrow!

Comment by Peter Wildeford (Peter_Hurford) on Potatoes: A Critical Review · 2022-05-10T19:34:36.562Z · EA · GW

Good example of red teaming a paper!

Comment by Peter Wildeford (Peter_Hurford) on EA Tours of Service · 2022-05-10T19:17:14.375Z · EA · GW

I'm interested in this idea. I also really like and endorse the idea of making very clear, actionable, mostly objective goals for employment even if that employment is open-ended and not tied to a specific length.

Comment by Peter Wildeford (Peter_Hurford) on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-10T15:18:42.756Z · EA · GW

Thanks! Both of those approaches sound justifiable to me.

Comment by Peter Wildeford (Peter_Hurford) on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-10T04:32:58.440Z · EA · GW

Note that it may be hard to give criticism (even if anonymous) about FTX's grantmaking because a lot of FTX's grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX's goals and outputs) - I'm not sure how to navigate that trade-off, but it is important to acknowledge that it exists!

Comment by Peter Wildeford (Peter_Hurford) on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-10T02:44:42.138Z · EA · GW

I'm a big fan of your philosophical writing and your attempts to philosophically defend and refine utilitarianism and effective altruism. I also really like your more general idea here of pushing people to think less about avoiding wrongdoing and towards thinking more about rightdoing.

I think one thing I'd wonder is what it means to make something a "central life project" and what kind of demandingness this implies. Is GWWC membership sufficient? Is 30min of volunteering a week sufficient? This is the hard part, I think, about satisficing views (even though I personally am definitely a satisficer when it comes to ethics).

I'm also curious what you mean by "[y]ou could accept any number of views about partiality and/or priority", since I think this actually runs counter to one of the core tenets of what I think of as effective altruism, which is the radical empathy/impartiality of extending our care to strangers, nonhuman animals, future people, etc. In fact, I often think you gain a lot more by convincing people to adopt the radical empathy and "per dollar effectiveness maximization" views of effective altruism even if they then don't maximize their efforts / make EA a central life project. That is, I think someone devoting 1% of their income to The Humane League will create more benefit for general welfare than another person devoting 10% of their income to the charities laypeople typically think of when they think they are helping the general welfare.

I think the main way to rescue this is to insist strongly on the radical impartiality part but not insist on making it the sole thing a person does with their resources, or even their resources set aside to philanthropy.

Comment by Peter Wildeford (Peter_Hurford) on The case for becoming a black-box investigator of language models · 2022-05-06T22:56:28.368Z · EA · GW

I’ve started doing a bunch of this and posting results to my Twitter.

Comment by Peter Wildeford (Peter_Hurford) on EA is more than longtermism · 2022-05-04T23:57:51.320Z · EA · GW

I think it's a function of global health funding already being allocated to much more scalable opportunities than exist in longtermism, whereas longtermists have a much smaller pool of funding opportunities to compete for. EA individuals are the main source of longtermist opportunities, and thus we get showered in longtermist money but not other kinds of money.

Animals is a bit more of a mix of the two.

Comment by Peter Wildeford (Peter_Hurford) on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-04T21:54:15.314Z · EA · GW

Ah ok that makes sense

Comment by Peter Wildeford (Peter_Hurford) on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-04T20:33:01.448Z · EA · GW

What's an example of something that is a core competency yet operationally unimportant (the top-left grid)? I'm starting to think the entire operational importance axis isn't needed.

Comment by Peter Wildeford (Peter_Hurford) on EA is more than longtermism · 2022-05-04T20:10:48.666Z · EA · GW

On one hand, it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's easier for me to personally get my 20th best longtermism idea funded than to get my 3rd best animal idea or 3rd best global poverty idea funded, and this asymmetry seems important.

Comment by Peter Wildeford (Peter_Hurford) on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-04T16:35:59.832Z · EA · GW

I get that outsourcing doesn't work for core competencies but why does outsourcing not work for operationally unimportant activities? Basically I'm confused by the bottom-left quadrant.

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-05-04T16:31:44.250Z · EA · GW

The best example I have right now is that I funded a climate change research group but arguably research is still infrastructure... I'd like to fund some more direct stuff though

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-05-04T15:10:50.747Z · EA · GW

I agree it is confusing but I prefer to just fund impactful things and worry less about what fits in the scope of the fund. And even if the other fund managers deem it out of scope, I frequently can refer it to other interested funders.

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-05-03T16:14:13.661Z · EA · GW

I'd be willing to fund these sorts of things via the Effective Altruism Infrastructure Fund

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-05-03T15:21:46.738Z · EA · GW

EA Funds definitely accepts unsolicited proposals! That's the whole point of it!

Comment by Peter Wildeford (Peter_Hurford) on Snakebites kill 100,000 people every year, here's what you should know · 2022-04-28T21:02:02.173Z · EA · GW

Does anyone know if there are any ways to direct funding to this? I'd be potentially interested in exploring it.

Comment by Peter Wildeford (Peter_Hurford) on Consider Changing Your Forum Username to Your Real Name · 2022-04-27T16:11:28.501Z · EA · GW

no

Comment by Peter Wildeford (Peter_Hurford) on Mid-career people: strongly consider switching to EA work · 2022-04-27T03:18:12.925Z · EA · GW

I'm not sure why it couldn't be like any other startup that doesn't pan out - save money in the process and in the interviews talk about what you did and what you learned.

Comment by Peter Wildeford (Peter_Hurford) on Mid-career people: strongly consider switching to EA work · 2022-04-26T18:29:11.065Z · EA · GW

I agree. To point to one concrete source of funding for this call to action, I encourage relevant onlookers to look into the Effective Altruism Infrastructure Fund.

Comment by Peter Wildeford (Peter_Hurford) on Consider Changing Your Forum Username to Your Real Name · 2022-04-26T17:13:42.579Z · EA · GW

I'd add that as a person who has done recruiting for EA orgs, I like to try to hire from talented-seeming EA forum posters and it is a lot easier to try to recruit someone when their full name is accessible from their username or bio.

Comment by Peter Wildeford (Peter_Hurford) on Working in US policy as a foreign national: Immigration pathways and types of impact · 2022-04-26T00:05:26.735Z · EA · GW

Onlookers should also note that Rethink Priorities (and other organizations) can also hire non-US nationals in their home country to work on some matters relevant to US policy without needing to immigrate.

Comment by Peter Wildeford (Peter_Hurford) on FTX/CEA - show us your numbers! · 2022-04-20T19:31:15.686Z · EA · GW

This varies grantmaker-to-grantmaker but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF.

I'd really like to help programs build in more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I'll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we've been hiring a lot more).

Comment by Peter Wildeford (Peter_Hurford) on FTX/CEA - show us your numbers! · 2022-04-19T18:40:41.530Z · EA · GW

I agree with what you are saying: yes, we ideally should rank-order all the possible ways to market EA and only take those that get the best (quality-adjusted) EAs per dollar spent, regardless of how we value EAs - that is, we should maximize return on investment.

**However, in practice, as we do not currently have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold and revise that threshold over time as we learn more about our opportunities and budget.** We'd also ideally set ourselves up to learn by doing when engaging in this outreach work.