Posts

EA is three radical ideas I want to protect 2023-03-27T15:31:55.092Z
Rethink Priorities’ Leadership Statement on the FTX situation 2022-11-23T22:54:56.079Z
Announcing Squigglepy, a Python package for Squiggle 2022-10-19T18:34:20.431Z
21 criticisms of EA I'm thinking about 2022-09-01T19:28:13.107Z
Notes on how prizes may fail and how to reduce the risk of them failing 2022-08-30T18:57:40.284Z
Notes on "A World Without Email", plus my practical implementation 2022-06-20T15:34:53.335Z
Human survival is a policy choice 2022-06-03T18:53:50.599Z
The chance of accidental nuclear war has been going down 2022-05-31T14:48:26.560Z
In current EA, scalability matters 2022-03-03T14:42:03.762Z
We’re Rethink Priorities. Ask us anything! 2021-11-15T16:25:05.734Z
Rethink Priorities - 2021 Impact and 2022 Strategy 2021-11-15T16:17:34.212Z
Peter Wildeford's Shortform 2021-10-10T19:21:50.088Z
Notes on "Managing to Change the World" 2021-10-09T02:17:14.462Z
Help Rethink Priorities Use Data for Animals, Longtermism, and EA 2021-07-05T17:20:59.662Z
Please Take the 2020 EA Survey 2020-11-11T16:05:51.462Z
US Non-Profit? Get Free* Money From the Gov on 3 Apr! 2020-04-01T18:07:54.351Z
Coronavirus Research Ideas for EAs 2020-03-27T21:01:48.181Z
We're Rethink Priorities. AMA. 2019-12-12T16:09:19.404Z
Rethink Priorities 2019 Impact and Strategy 2019-12-02T16:32:25.324Z
Please Take the 2019 EA Survey! 2019-09-23T17:36:35.084Z
GiveWell's Top Charities Are Increasingly Hard to Beat 2019-07-10T00:34:52.510Z
EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? 2019-06-16T23:04:46.626Z
Is EA Growing? EA Growth Metrics for 2018 2019-06-02T04:08:30.726Z
EA Survey 2018 Series: How Long Do EAs Stay in EA? 2019-05-31T00:32:20.989Z
Rethink Priorities Plans for 2019 2018-12-18T00:18:31.987Z
Open Thread #40 2018-07-08T17:51:47.777Z
Animal Equality showed that advocating for diet change works. But is it cost-effective? 2018-06-07T04:06:02.831Z
Cost-Effectiveness of Vaccines: Appendices and Endnotes 2018-05-08T07:43:43.262Z
Cost-Effectiveness of Vaccines: Exploring Model Uncertainty and Takeaways 2018-05-08T07:42:53.369Z
What is the cost-effectiveness of researching vaccines? 2018-05-08T07:41:10.595Z
How much does it cost to roll-out a vaccine? 2018-02-26T15:33:03.710Z
How much does it cost to research and develop a vaccine? 2018-02-24T01:23:33.601Z
What is Animal Farming in Rural Zambia Like? A Site Visit 2018-02-19T20:49:45.024Z
Four Organizations EAs Should Fully Fund for 2018 2017-12-12T07:17:14.418Z
Is EA Growing? Some EA Growth Metrics for 2017 2017-09-05T23:36:39.591Z
How long does it take to research and develop a new vaccine? 2017-06-28T23:20:04.289Z
Can we apply start-up investing principles to non-profits? 2017-06-27T03:16:49.074Z
The 2017 Effective Altruism Survey - Please Take! 2017-04-24T21:01:26.039Z
How do EA Orgs Account for Uncertainty in their Analysis? 2017-04-05T16:48:45.220Z
How Should I Spend My Time? 2017-01-08T03:22:46.745Z
Effective Altruism is Not a Competition 2017-01-05T02:11:23.505Z
Semi-regular Open Thread #35 2016-12-30T22:28:48.381Z
Why I Took the Giving What We Can Pledge 2016-12-28T00:02:57.065Z
The Value of Time Spent Fundraising: Four Examples 2016-12-23T04:35:25.797Z
What is the expected value of creating a GiveWell top charity? 2016-12-18T02:02:16.774Z
How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation) 2016-12-09T03:08:25.796Z
Thoughts on the Reducetarian Labs MTurk Study 2016-12-02T17:12:44.731Z
Using a Spreadsheet to Make Good Decisions: Five Examples 2016-11-26T02:21:29.740Z
Students for High Impact Charity: Review and $10K Grant 2016-09-27T21:05:44.340Z
A Method for Automatic Trustworthiness in Study Pre-Registration 2016-09-25T04:22:38.817Z

Comments

Comment by Peter Wildeford (Peter_Hurford) on The supply gap of EA org service providers · 2023-03-27T22:48:31.018Z · EA · GW

  • Finance: everything from bookkeeping to outsourced CFOs
  • Legal: contracts, employment, compliance
  • Tech: software implementation, Salesforce, etc.

Why would it make sense for there to be EA-specific services for these? All of these services seem like things you can outsource to non-EA firms just fine and benefit little to none from EA knowledge/affiliation/alignment.

Comment by Peter Wildeford (Peter_Hurford) on Shutting Down the Lightcone Offices · 2023-03-15T20:12:13.608Z · EA · GW

Where did/does Lightcone get the money to run?

Comment by Peter Wildeford (Peter_Hurford) on Share the burden · 2023-03-13T17:12:41.700Z · EA · GW

I think your feelings are genuine, but I'm unfortunately not sure what to do about them besides what I'm already doing, which is to try to be empathetic and welcoming.

~

there is a discussion on twitter that suggests screenshots of the forum are fair game. I disagree - while public, this is a different kind of public than twitter. If screenshots are fair game then rephrasing or retracting is out the window.

I had a conversation with someone that went like this:

Them - "Man, the EA Forum is like if all of EA had a water cooler to chat by"

Me, sarcastic - "Great, yeah, real smart of us to have a water cooler that is surrounded by journalists"

I think this gets at an important point that is pretty stifling / chilling, since the norms we've cultivated may not be upheld in other venues. I think it's important to have these conversations in public so everyone can hear, but there are real large costs to that.

Another option: maybe have a moderated conversation in an offline space and then edit it before publishing?

Comment by Peter Wildeford (Peter_Hurford) on Share the burden · 2023-03-13T15:22:36.118Z · EA · GW

To be clear, I definitely do think you take women's sadness seriously.

Also I certainly hope nothing I've done has implied that you should agree or shut up - that's not my intention at all.

I really do think benefit of the doubt is important. If you misphrase an idea and then concede that you misphrased it, I will understand that and not change my respect for you. I misphrase ideas all the time.

Comment by Peter Wildeford (Peter_Hurford) on Share the burden · 2023-03-12T04:02:47.095Z · EA · GW

Yeah I'm just going to retract my comment entirely because it looks like I misunderstood the situation.

Comment by Peter Wildeford (Peter_Hurford) on A personal response to Nick Bostrom's "Apology for an Old Email" · 2023-03-12T03:27:12.839Z · EA · GW

I don't think this is a good way to think about it. I do actually think this is a pretty racist way of thinking about it. I guarantee you 100% that the reason wherever you are "lacks diversity" is not because minorities "lack the relevant level of aptitude". And I think disparate impact tests are pretty clearly a good thing.

Comment by Peter Wildeford (Peter_Hurford) on Share the burden · 2023-03-12T03:22:43.667Z · EA · GW

Yeah, we should probably do something about that. My guess is that Community Health is on this (EDIT: they are on this, sorry I missed that message!)

I imagine there are a few things CH could do if they learn the identity of the offender - my guess is an appropriate reaction would be a warning, or maybe just a ban from the next EAG, followed by a permanent ban from EAG for repeat offenses.

Comment by Peter Wildeford (Peter_Hurford) on Share the burden · 2023-03-11T20:21:20.462Z · EA · GW

Hey Nathan, thanks for sharing even when it's hard. I'd be curious to hear more about "I think that both parties in this current sexual norms discourse find this discussion exhausting." I think there are tremendously simple norms at play here, from Emma's accounts of EAG in this article:

  • Don't use Swapcard (or other clearly professional infrastructure) to try to get dates / flirt.

  • Don't immediately start touching people until there's a clearer context / consent for it. If you're in doubt, either ask or don't touch them.

  • If someone tells you to stop doing something, stop doing it.

There are definitely a few more norms that should be added to this list.

But I don't think these are too hard or exhausting to think about or follow. And of course, it goes without saying that I imagine it's way more exhausting for sexual harassment victims than for non-victims. Curious what I'm missing?

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-10T06:16:43.634Z · EA · GW

Questioning, doubt, and dissent are discouraged or even punished.

I think this is probably partial, given claims in this post, and positive-agreevote concerns here (though clearly all of the agree voters might be wrong).

I think you may have very high standards? By these standards, I don't think there are any communities at all that would score 0 here.

~

I think this is nonzero; I think subsets of the community do display "excessively zealous" commitment to a leader, given the "What would SBF do" stickers. Outside views of LW (or at least older versions of it) would probably worry that this was an EY cult.

I was not aware of "What would SBF do" stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-10T06:12:37.249Z · EA · GW

Ok updated to 0.5. I think "the leader is considered the Messiah or an avatar" being false is fairly important.

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-10T01:26:44.016Z · EA · GW

My call: EA gets 3.9 out of 14 possible cult points (tallied in the quick sketch after the list below).

The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.

No

The group is preoccupied with bringing in new members.

Yes (+1)

The group is preoccupied with making money.

Partial (+0.8)

Questioning, doubt, and dissent are discouraged or even punished.

No

Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).

No

The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).

No

The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).

Partial (+0.5)

The group has a polarized us-versus-them mentality, which causes conflict with the wider society.

Very weak (+0.1)

The group's leader is not accountable to any authorities (as are, for example, military commanders and ministers, priests, monks, and rabbis of mainstream denominations).

No

The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).

Partial (+0.5)

The leadership induces guilt feelings in members in order to control them.

No

Members' subservience to the group causes them to cut ties with family and friends, and to give up personal goals and activities that were of interest before joining the group.

No

Members are expected to devote inordinate amounts of time to the group.

Yes (+1)

Members are encouraged or required to live and/or socialize only with other group members.

No
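
For what it's worth, here's a quick tally - a sketch of my own, with my shorthand labels standing in for the checklist items - confirming the partial scores above sum to the stated 3.9:

    # Tally of the partial scores assigned above; the dict keys are my
    # paraphrases of the checklist items, not the checklist's own wording.
    scores = {
        "preoccupied with recruiting": 1.0,
        "preoccupied with making money": 0.8,
        "elitist / special mission to save humanity": 0.5,
        "us-versus-them mentality": 0.1,
        "exalted ends justify questionable means": 0.5,
        "inordinate time expected of members": 1.0,
    }
    print(round(sum(scores.values()), 1))  # 3.9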

Comment by Peter Wildeford (Peter_Hurford) on Should people get neuroscience phD to work in AI safety field? · 2023-03-07T21:01:48.410Z · EA · GW

I think your reply is pretty heavily based on deciding between a neuroscience PhD and a CS PhD, but my guess is that it's >80% likely the best move is to not get a PhD at all.

Comment by Peter Wildeford (Peter_Hurford) on More Centralisation? · 2023-03-07T17:30:43.361Z · EA · GW

It's totally plausible that one rogue clergyperson (employee) or congregation (project) could incur enough liability to overwhelm a single corporation's insurance and consume the assets of the other 11 congregations and the central office.

Yeah, I think this is a really plausible risk of centralization.

~

if RP Special Projects grew and/or took on riskier projects, I would at least consider spinning it off into a wholly owned subsidiary of RP.

This is definitely something we are considering doing.

Comment by Peter Wildeford (Peter_Hurford) on More Centralisation? · 2023-03-07T16:29:54.770Z · EA · GW

On the first point, if FTX had happened and there were more large EA organisations, it would have been easier to handle the fall out from that, with more places for smaller organisations and individuals to go to for support.

Right, I totally agree with that. I think the way FTX Future Fund pushed their risk onto individuals who were not well equipped to understand and/or handle that was a huge failing and is very sad.

~

On the last point it seems like that was a part of why DARPA had success, they had lots of projects and were focused on the best succeeding rather than maintaining failing ideas.

Agreed. That's part of what I'm trying to do with Rethink Priorities. But I think there are many ways in which we fail to live up to that, and my guess is that other big orgs maintain cost-ineffective projects due to inertia or political pressure because it's really hard to be ruthless about these sorts of things.

Comment by Peter Wildeford (Peter_Hurford) on More Centralisation? · 2023-03-07T16:08:21.227Z · EA · GW

It seems like the main point of the original post and the comments is that more centralization would be helpful. For balance, I want to argue against myself: while I think there are clear benefits to more centralization on net, there are also some reasons/ways it may be harmful:

  • You are consolidating legal risk into fewer entities, meaning that one high-level mistake can take a lot of things down (in the wake of FTX this seems extra important... I think this is the biggest drawback to more centralization)

  • Mainly due to the above but also other factors, larger organizations are much more risk averse and can just do fewer things

  • Smaller orgs/individuals who don't have to care about their reputation as much can take bolder risks (this is both good and bad)

  • Smaller organizations are quicker to act, require less stakeholder sign-off to get things done (this is both good and bad)

  • Smaller organizations are, I'd guess, more able on the margin to shut down with less politics if things don't work out, rather than continuing to do something ineffective (though on the other hand, shutting down a small org more clearly means you are fired and don't get money, whereas in a bigger org maybe you can be moved to a new project?)

Comment by Peter Wildeford (Peter_Hurford) on More Centralisation? · 2023-03-07T07:15:23.153Z · EA · GW

I think a lot of the stuff Deena touches on in "3 Basic Steps to Reduce Personal Liability as an Org Leader" is important here too. I think decentralization has led to a lot of people doing work independently while being really under-resourced to handle the pressures of that (and this got 100x more acute in response to FTX). I think grantees-as-individuals need to be very careful about not co-mingling funds, making sure taxes are in order, etc., and our current community plan of having a lot of individual grantees may mean people take on a lot of legal risk that bigger organizations are in a better position to handle.

Especially during the FTX era, but still now, I have been a bit surprised to see a bias towards wanting to fund many smaller orgs rather than one bigger org. FTX had an explicit preference for newer and less established orgs / individuals, and I think that clearly backfired. Some of this makes sense, as you want to avoid having "all of your eggs in one basket" / "hedge your bets", but I think big orgs have a lot of great advantages that are underrated by EA funders and others.

~

Disclaimer: Obviously I would say this, given that I run a "big org" (Rethink Priorities). I'm speaking just for myself personally here though, not for some RP position (there's a lot of diversity of perspective at RP). Also, I am complaining about "funders" here, but I am on the EA Infrastructure Fund, so maybe I'm part of the problem?

Comment by Peter Wildeford (Peter_Hurford) on On the First Anniversary of my Best Friend’s Death · 2023-03-07T07:03:46.992Z · EA · GW

Damn, Alexa sounds like an incredible person. I'm so sorry for your loss. Thank you for sharing more of her with us.

Comment by Peter Wildeford (Peter_Hurford) on More Centralisation? · 2023-03-07T06:11:26.328Z · EA · GW

I agree. I've explored some of this. I think it definitely could make sense in a lot of cases.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T18:16:17.513Z · EA · GW

Thanks - that's a good point.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T18:15:44.415Z · EA · GW

Yeah I was definitely using the word "proof" colloquially and not literally. My understanding from inside info though is that FHI's issues with Oxford have very little to do with their choice of research agenda. I think this is also clear from outside info (FHI had a similar research agenda for a long time and had university support).

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T17:37:16.616Z · EA · GW

Great points.

~

All of the people mentioned joined a long time ago and all but Sandberg have left GPI. Is there anyone of a comparable quality that joined in the last 5 years?

Just two quick nitpicks: I think you mean "FHI" not "GPI". And I think Drexler is still at FHI in addition to Sandberg. But you're right that ASB, Owain, and Stuart Armstrong have left FHI.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T17:35:41.066Z · EA · GW

GPI is a pretty clear existence proof that while collaborating with universities is difficult and costly, it can be done.

Comment by Peter Wildeford (Peter_Hurford) on Scoring forecasts from the 2016 “Expert Survey on Progress in AI” · 2023-03-06T16:43:31.194Z · EA · GW

No, it was me who got this wrong. Thanks!

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-06T08:37:35.793Z · EA · GW

We've had multiple big newspaper attacks now. How'd we do compared to your expectations?

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-06T08:36:48.645Z · EA · GW

Thanks!

"Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL, but I am putting together that what Chana said is that she'd like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the "normal" governance plan may be that way for a good reason even if we don't immediately know what that reason is (the Chesterton's fence)?

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T08:09:11.358Z · EA · GW

Given how the Oxford group has become the most internationally relevant group in academic X-risk research, it is hard to argue against his tenure.

Disagree. The relevance here is what FHI will accomplish in the future, not what it has accomplished in the past. And it seems clear that it is not hard to argue against his tenure as people are clearly doing just that.

Beyond that, my main claim is about the mail incident.

Disagree with you as well, but I am going to stand by my desire to not relitigate the apology here and instead defer that conversation to other threads.

Comment by Peter Wildeford (Peter_Hurford) on A personal response to Nick Bostrom's "Apology for an Old Email" · 2023-03-06T02:08:31.168Z · EA · GW

I just want to add that I can't think of anyone denying (1) - that there are actual observed differences in IQ tests between races. None of the people ragging on Bostrom are denying this. So the fact that Rutherford and Bostrom agree on (1) is entirely irrelevant and unsurprising. I think the main disagreement is on (2) and way more importantly (3).

I personally agree with titotal that taking a statement like "there are currently differences in average IQ test scores between races, for a variety of reasons, primarily racism and its legacy", and reducing it to "blacks are stupider than whites", is - in titotal's words - "stripping away all the context from a complex issue into a gross simplification with a negative spin that furthers a racist narrative". I don't really see what we gain from doing that or why it somehow is cool / should be protected / should be celebrated. I think that's the main crux.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T02:01:00.202Z · EA · GW

My understanding is that this is indeed unique to FHI, unfortunately. This is maybe why FHI and GPI make for a compelling comparison - both are EA-affiliated, both are in the University of Oxford. While working with a University is never easy, GPI seems totally fine and indeed does continue to hire, run events, etc. FHI does not.

I don't know about the alternatives to Bostrom or how likely they would be to change the situation. Nathan makes a good point that perhaps prediction markets could play a role here. I generally think that, given that I run an EA research org that could be construed as competing with FHI for funding/talent/influence/etc., I shouldn't really engage in explicitly calling for Bostrom to step down or help analyze the alternatives. But hopefully I can more generally help people think through the situation as a whole more clearly. I mainly wrote what I wrote because the comment made me angry enough that I felt like I had to.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:38:00.456Z · EA · GW

I hesitate to weigh in here but I really don't think this is a good way of thinking about it.

I'm certainly not trying to "bully" Bostrom and I don't view the author of this post as trying to "bully" Bostrom either. If Bostrom were to step down as Director, I don't see that as somehow a "win" for "bullying", whatever that means.

I do agree that being able to come up with important and useful ideas requires feelings of safety and for this reason and others I always want to give people the benefit of the doubt when they express themselves. Moreover, I understand that in a social movement made up of thousands of people, you are not going to be able to find common agreement on every issue and in order to make progress we need to find some way to deal with that. So I am pretty sympathetic to the view that Bostrom deserves some form of generalized protection even if he's said colossally stupid things.

But - to be clear - no one I know is trying to get Bostrom fired or expelled or cancelled or jailed or anything. He still could have a very cushy, high status, independent non-cancelled life as a "FHI senior researcher", even if he weren't Director. The question is - should he be Director?

My understanding of the view of the author of this post is that:

(1) FHI is probably useful and important and does good things for the world,

(2) FHI would probably be more useful and more important and do more good things for the world if it had a really great Director,

and (3) Bostrom is not a really great Director (at least going forward in expectation).

The alleged "significant mismanagement" seems like great evidence for (3). This is just basic consequentialist reasoning that I think all orgs - especially those that claim to be affiliated with effective altruism - engage in. I'd happily welcome people write "Peter Wildeford should step down as Co-CEO of Rethink Priorities" if there indeed were good reasons for me to do so.

So I certainly find it overdramatic at best to take "here are a few reasons why Bostrom would not be the ideal leader of FHI going forward" and convert it to "all original thinkers have a Sword of Damocles hanging over their head, knowing they might be denounced and fired if that became politically expedient". Being a good leader means things like communicating well, understanding when your actions will have predictably bad consequences, not making everyone really uncomfortable about working with you, and not getting your organization to the point where you can't hire anyone and your operations staff and other key leadership quit. To be frank, a lot of Bostrom's research is great and I'm very grateful to him for it, but this bar is one Bostrom just isn't clearing, and I think independent researcher life would suit him better and be a win-win for everyone.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:20:48.801Z · EA · GW

1 and 2 are very good points, thanks.

re 3: It's also not out of the question that they could just aim to have an open (or private) hiring round for a new Director, perhaps with Ord or Sandberg as Interim/Acting Director in the meantime.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:19:50.364Z · EA · GW

Does being the best philosopher of our time and the main founder of longtermism mean that he is the best person to run FHI as a Director? I don't really see the relevance.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:18:20.256Z · EA · GW

I strongly disagree with you but yeah I really don't think we need to re-litigate the apology. There are lots of other threads for that.

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:16:42.160Z · EA · GW

I think it was meant to link to my piece: "Why I'm personally upset with Nick Bostrom right now".

Comment by Peter Wildeford (Peter_Hurford) on Nick Bostrom should step down as Director of FHI · 2023-03-06T01:15:31.664Z · EA · GW

Mervin makes a great point that it is hard to compare GPI to FHI in general. But I also think comparing past FHI and past GPI is not the right way of thinking about it - instead we want to compare current/expected future FHI to current/expected future GPI. And the fact of the matter is quite clear that current/expected future GPI still can actually hire people, engage in productive research work, and maintain a relationship with the university whereas current/expected future FHI I think can best be described as "basically dead".

Comment by Peter Wildeford (Peter_Hurford) on Scoring forecasts from the 2016 “Expert Survey on Progress in AI” · 2023-03-06T01:09:19.076Z · EA · GW

My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.

Important missing context for this is that a Brier score of 0.25 is what you would get if you predicted randomly (i.e., put 50% on everything no matter what; assigning 0% or 100% by coin flip would actually average 0.5, even worse). So that means here that systematically predicting "later" or "sooner" would make you "worse than random" whereas the actual predictions are "better than random" (though not by much).

So I think the main takeaways are: (1) predicting this AI stuff is very hard and (2) we are at least not systematically biased as far as we can tell so far in predicting progress too slow or too fast. (1) is not great but (2) is at least reassuring.
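
To make the baseline concrete, here's a minimal sketch (my own illustration, not from the post) of the standard Brier calculation for binary forecasts, showing that always predicting 50% scores exactly 0.25 regardless of how the events resolve:

    # Brier score for binary forecasts: mean of (forecast - outcome)^2.
    def brier(forecasts, outcomes):
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    resolved = [1, 0, 0, 1, 1, 0, 1, 0]  # hypothetical resolved events
    print(brier([0.5] * len(resolved), resolved))  # 0.25, whatever the outcomes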

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-05T19:33:15.717Z · EA · GW

What's a TAP? I'm still not really sure what you're saying.

Comment by Peter Wildeford (Peter_Hurford) on Nathan Young's Shortform · 2023-03-05T06:10:59.543Z · EA · GW

What's a "Chesterton's TAP"?

Comment by Peter Wildeford (Peter_Hurford) on keller_scholl's Shortform · 2023-03-05T05:01:37.814Z · EA · GW

I appreciate you raising this despite not having an actual view on the topic and I appreciate you being clear that this is a complex topic that's hard to form a view on.

I had a lot of freewheeling conversations at EAG, and I don't think I thought enough about the fact that journalists I don't trust by default might be able to overhear and comment on those conversations. Thinking through this may have a somewhat chilling effect on how I interact at future EAGs, which I find unfortunate.

That being said, I totally agree with you that excluding these journalists may also be unfair or otherwise based on bad norms, and it's a pretty thorny trade-off. Like you, I don't really fully understand this.

Comment by Peter Wildeford (Peter_Hurford) on keller_scholl's Shortform · 2023-03-05T04:59:26.426Z · EA · GW

I do agree there's a wide spectrum of what "disclosing this" looks like, and I think it's entirely possible that you disclosed it enough or maybe even more than enough (for example, if we conclude it didn't need to be disclosed at all, then you did more than necessary). Like Keller, I don't really have a view on this. But it's also entirely possible that the level of disclosure you did was pretty inadequate (again, I'm genuinely not sure), given that it's on page 9 of a guide I imagine most people don't read (I didn't). But I imagine you agree with this.

Comment by Peter Wildeford (Peter_Hurford) on Conference on EA hubs and offices, expression of interest · 2023-02-28T18:43:35.449Z · EA · GW

SOL didn't close - it just failed to open and my understanding was this was entirely due to financing falling through.

Lightcone on the other hand does not have any financial issues to the best of my limited knowledge, but chose to close due to a change in strategy.

These two situations seem very different.

I'm not aware of any other office situations changing but it definitely makes sense that office strategy in general would be affected by a decline in available assets for offices. I expect this to continue case-by-case.

Comment by Peter Wildeford (Peter_Hurford) on 80,000 Hours has been putting much more resources into growing our audience · 2023-02-27T19:07:18.389Z · EA · GW

I really appreciate how thoughtful you've been about this, including sensitivity to downside risks. Do you have any plans to monitor the downside risks? A lot of them seem quite verifiable/testable.

Comment by Peter Wildeford (Peter_Hurford) on EA Global in 2022 and plans for 2023 · 2023-02-23T22:19:00.286Z · EA · GW

I definitely agree that NYC is a very compelling location too. Best of luck with EAGxNYC and I'll see if I can attend.

Comment by Peter Wildeford (Peter_Hurford) on EA Global in 2022 and plans for 2023 · 2023-02-23T21:29:37.120Z · EA · GW

I could definitely see the EAG East Coast alternating between Boston and DC every other year. I have nothing against Boston and I think it is also a great place for an EAG and I realize it is a very difficult choice if you can only pick one.

The idea of a professional suit-and-tie EAGxDC with significant policy engagement (perhaps not even branded as "EAG" at all but something else) is pretty appealing to me.

Comment by Peter Wildeford (Peter_Hurford) on EA Global in 2022 and plans for 2023 · 2023-02-23T21:11:03.787Z · EA · GW

Great to see this announcement! Curious to hear if there are any plans for a Washington DC-based event?

Comment by Peter Wildeford (Peter_Hurford) on EA, Sexual Harassment, and Abuse · 2023-02-20T18:52:59.247Z · EA · GW

Thank you. I am still considerably unhappy with how this situation was handled but I accept Julia's apology and I am glad to see this did come to some sort of resolution. I'm especially glad to see an independent investigation into how this was handled.

Comment by Peter Wildeford (Peter_Hurford) on Aaron Bergman's shortform · 2023-02-20T03:52:07.562Z · EA · GW

Wow, blast from the past!

Comment by Peter Wildeford (Peter_Hurford) on Aaron Bergman's shortform · 2023-02-20T03:51:15.528Z · EA · GW

The uniform prior case just generalizes to Laplace's Law of Succession, right?
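
For anyone unfamiliar, here's a minimal sketch (my illustration, not Aaron's) of Laplace's Law of Succession: under a uniform prior on the success rate, after observing s successes in n trials, the probability that the next trial succeeds is (s + 1) / (n + 2):

    from fractions import Fraction

    # Laplace's rule of succession: posterior predictive probability of the
    # next success under a uniform (Beta(1, 1)) prior, given s successes in n trials.
    def laplace_rule(s, n):
        return Fraction(s + 1, n + 2)

    print(laplace_rule(0, 0))  # 1/2 with no data: just the uniform prior
    print(laplace_rule(3, 4))  # 2/3 after 3 successes in 4 trials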

Comment by Peter Wildeford (Peter_Hurford) on In (mild) defence of the social/professional overlap in EA · 2023-02-17T16:40:29.292Z · EA · GW

Yeah this is basically what I was trying to say in my comment.

Comment by Peter Wildeford (Peter_Hurford) on Transitioning to an advisory role · 2023-02-16T20:42:05.839Z · EA · GW

Hey Max, I really want to thank you again for everything you've done for CEA, RP, myself, and the broader EA movement. I'm also really proud of you for recognizing when you're not in the right space for the role. I think you're a very positive role model, demonstrating to others that it's best to prioritize your own health, even when it seems like you're really hard to replace. I look forward to working with the next ED to make EA a great space, and I'm glad you'll still be around in an advisory capacity.

Comment by Peter Wildeford (Peter_Hurford) on EA, Sexual Harassment, and Abuse · 2023-02-15T21:18:20.275Z · EA · GW

Thanks Chana. I'm glad we can both see each other's perspectives. I look forward to hearing more next week. Committing to a response and a rough timeline is already very helpful.