How did our historical moral heroes deal with severe adversity and/or moral compromise? 2023-01-09T18:12:51.253Z
[Draft Amnesty] Unfinished draft on the case for a first principles, systematic scoping of meat alternatives 2022-12-19T08:29:03.411Z
What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation 2022-12-14T13:37:04.252Z
[Linkpost] Dan Luu: Futurist prediction methods and accuracy 2022-09-15T21:20:11.401Z
Notes on Apollo report on biodefense 2022-07-23T21:38:37.404Z
Some unfun lessons I learned as a junior grantmaker 2022-05-23T16:31:40.912Z
Help us make civilizational refuges happen 2022-04-13T12:57:13.751Z
What would you do with a Facebook meme page with 250k followers? 2022-04-12T21:56:44.750Z
Announcing Impact Island: A New EA Reality TV Show 2022-04-01T05:37:29.406Z
Forecasts estimate limited cultured meat production through 2050 2022-03-21T23:13:20.048Z
Potentially great ways forecasting can improve the longterm future 2022-03-14T19:21:10.362Z
Early-warning Forecasting Center: What it is, and why it'd be cool 2022-03-14T19:20:00.618Z
.01% Fund - Ideation and Proposal 2022-03-01T18:25:40.468Z
As an independent researcher, what are the biggest bottlenecks (if any) to your motivation, productivity, or impact? 2022-02-17T19:32:31.113Z
As an independent researcher, how do you stay or become motivated, productive, and impactful? 2022-02-17T19:00:07.719Z
What's your prior probability that "good things are good" (for the long-term future)? 2022-02-05T18:44:33.071Z
What's the Theory of Change/Theory of Victory for Farmed Animal Welfare? 2021-12-01T00:52:32.246Z
How would you define "existential risk?" 2021-11-29T05:17:33.359Z
How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? 2021-11-27T23:46:00.740Z
[Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th) 2021-11-18T21:40:55.260Z
[Job ad] Research important longtermist topics at Rethink Priorities! 2021-10-06T19:09:08.967Z
Cultured meat: A comparison of techno-economic analyses 2021-09-24T22:20:40.077Z
The motivated reasoning critique of effective altruism 2021-09-14T20:43:14.571Z
How valuable is ladder-climbing outside of EA for people who aren't unusually good at ladder-climbing or unusually entrepreneurial? 2021-09-01T00:47:31.983Z
What are examples of technologies which would be a big deal if they scaled but never ended up scaling? 2021-08-27T08:47:16.911Z
What are some key numbers that (almost) every EA should know? 2021-06-18T00:37:17.794Z
Epistemic Trade: A quick proof sketch with one example 2021-05-11T09:05:25.181Z
[Linkpost] New Oxford Malaria Vaccine Shows ~75% Efficacy in Initial Trial with Infants 2021-04-23T23:50:20.545Z
Some EA Forum Posts I'd like to write 2021-02-23T05:27:26.992Z
RP Work Trial Output: How to Prioritize Anti-Aging Prioritization - A Light Investigation 2021-01-12T22:51:31.802Z
Some learnings I had from forecasting in 2020 2020-10-03T19:21:40.176Z
How can good generalist judgment be differentiated from skill at forecasting? 2020-08-21T23:13:12.132Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:38:07.384Z
David Manheim: A Personal (Interim) COVID-19 Postmortem 2020-07-01T06:05:59.945Z
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA 2020-06-30T19:35:13.376Z
Are there historical examples of excess panic during pandemics killing a lot of people? 2020-05-27T17:00:29.943Z
[Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public? 2020-04-07T01:49:05.770Z
Should recent events make us more or less concerned about biorisk? 2020-03-19T00:00:57.476Z
Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? 2020-03-12T21:19:19.565Z
All Bay Area EA events will be postponed until further notice 2020-03-06T03:19:24.587Z
Are there good EA projects for helping with COVID-19? 2020-03-03T23:55:59.259Z
How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? 2020-02-26T16:16:49.234Z
What types of content creation would be useful for local/university groups, if anything? 2020-02-15T21:52:00.803Z
How much will local/university groups benefit from targeted EA content creation? 2020-02-15T21:46:49.090Z
Should EAs be more welcoming to thoughtful and aligned Republicans? 2020-01-20T02:28:12.943Z
Is learning about EA concepts in detail useful to the typical EA? 2020-01-16T07:37:30.348Z
8 things I believe about climate change 2019-12-28T03:02:33.035Z
Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z
Linch's Shortform 2019-09-19T00:28:40.280Z
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z


Comment by Linch on Open Thread: January — March 2023 · 2023-03-31T06:29:46.387Z · EA · GW

I feel confused about how dangerous/costly it is to use LLMs for private documents or thoughts to assist longtermist research, in a way that may wind up in the training data for future iterations of LLMs. Some sample use cases that I'd be worried about:

  • Summarizing private AI evals docs about plans to evaluate future models
  • Rewriting emails on high-stakes AI gov conversations
  • Generating lists of ideas for biosecurity interventions that can be helped/harmed by AI
  • Scrubbing potentially risky/infohazard-y information from planned public forecasting questions
  • Summarizing/rewriting speculations about potential near-future AI capabilities gains.

I'm worried about using LLMs for the following reasons:

  1. Standard privacy concerns/leakage to dangerous (human) actors
    1. If it's possible to back out your biosecurity plans from the models, this might give terrorists/gov'ts ideas.
    2. your infohazards might leak
    3. People might (probabilistically) back out private sensitive communication, which could be embarrassing
      1. I wouldn't be surprised if care for consumer privacy for chatbot users at AGI labs is much lower than, say, for emails hosted by large tech companies
        1. I've heard rumors to this effect, also see
    4. (unlikely) your capabilities insights might actually be useful for near-future AI developers.
  2. Training models in an undesirable direction:
    1. Give pre-superintelligent AIs more-realistic-than-usual ideas/plans for takeover
    2. Subtly bias the motivations of future AIs in dangerous ways.
    3. Perhaps leak capabilities gains ideas that allow for greater potential for self-improvement.

I'm confused about whether these are actually significant concerns, vs. pretty minor in the grand scheme of things. Advice/guidance/more considerations highly appreciated!

Comment by Linch on Some Comments on the Recent FTX TIME Article · 2023-03-25T15:18:37.178Z · EA · GW

FWIW the former CEO of FTX US also claimed this:

In early April 2022, my eleventh month, I made one last try. I made a written formal complaint about what I saw to be the largest organizational problems inhibiting FTX’s future success. I wrote that I would resign if the problems weren’t addressed.

29/49 In response, I was threatened on Sam’s behalf that I would be fired and that Sam would destroy my professional reputation. I was instructed to formally retract what I’d written and to deliver an apology to Sam that had been drafted for me.

The threat model is still unclear, but this is at least somewhat corroborating evidence that Sam is not above using threats in such situations.

Comment by Linch on SVB collapse- cheap startup equity? · 2023-03-18T01:48:03.874Z · EA · GW

Thanks, appreciate the explanation!

Comment by Linch on Does EA get the "best" people? Hypotheses + call for discussion · 2023-03-16T19:09:19.731Z · EA · GW

Can you say which norms the current comment breaks? It was not clear to me upon reading both the comment and the forum norms again.

Comment by Linch on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T19:00:15.390Z · EA · GW

If almost all current leaders would be better than any plausible replacement, even after a significant hit to long-term effectiveness, then I think that says something about the leadership development pipeline that is worth observing.

I think it's relatively obvious that there's a dearth of competent leadership/management in EA. I think this is even more extreme for EA qua EA, since the personal costs : altruistic rewards tradeoff for EA qua EA work is arguably worse than e.g. setting up an AI governance initiative or leading a biosecurity project.

Comment by Linch on Paper summary: Are we living at the hinge of history? (William MacAskill) · 2023-03-14T00:17:06.254Z · EA · GW

MacAskill thinks that (2) provides evidence that our time may be the most influential, but this evidence isn't strong enough to overcome the stronger arguments against this hypothesis. 

Comment by Linch on SVB collapse- cheap startup equity? · 2023-03-13T07:32:03.023Z · EA · GW

I'm confused. Isn't the US Treasury covering this? Or are you suggesting that there might be a liquidity problem while things are getting sorted?

Comment by Linch on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-08T03:01:46.784Z · EA · GW

Minor compared to much more important points other people can be making, but highlighting this line:

At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.

Wow, this is an interesting framing on Yudkowsky writing him in as literal Voldemort.

Maybe there's a lesson about trustworthiness and interpersonal dynamics here somewhere.

Comment by Linch on Please don't criticize EAs who "sell out" to OpenAI and Anthropic · 2023-03-07T02:20:30.773Z · EA · GW

I downvoted this post originally because it originally appeared to be about not criticizing people who are working on AI capabilities at large labs. Now that it's edited to be about not offering unsolicited criticism for people working on AI safety at large labs (with arguments about why we should avoid unsolicited criticism in general), I still disagree, but I've removed my downvote.

Comment by Linch on Call to demand answers from Anthropic about joining the AI race · 2023-03-05T01:11:34.121Z · EA · GW

tbc I don't know any more than you here, and I only have the text of the comment to go off of. I just interpreted "You really don't need a [blip] from Russia to lead you into a discussion about some next shit that's about to blow in Silicon Valley . I'm pretty sure you can do it :)

Don't ask me, I'm an immigrant here." as referring to themselves. I found the rest of the comment kind of hard to understand so it's definitely possible I also misunderstood things here.

Comment by Linch on Call to demand answers from Anthropic about joining the AI race · 2023-03-05T00:36:31.655Z · EA · GW

Feels kinda mean to tell a non-native speaker off for using a slur about their own group. 

Comment by Linch on Who is Uncomfortable Critiquing Who, Around EA? · 2023-03-03T03:31:16.903Z · EA · GW

I dunno, a fairly central example in my mind is if an employee or ex-employee says mean and (from your perspective) wrong things about you online. Seems like if it weren't for discomfort or awkwardness, replying to said employee would otherwise be a pretty obvious tool in the arsenal. Whereas you can't fire ex-employees, and firing current employees is a) just generally a bad move and b) will make you look worse.

Comment by Linch on Call to demand answers from Anthropic about joining the AI race · 2023-03-03T00:29:33.710Z · EA · GW

I mean, at least in global health and animal welfare, most of the time we don't evaluate charities for being net-negative; we only look at "other people's charities" that are already above a certain bar. I would be opposed to spending considerable resources looking at net-negative charities in normal domains; most of your time is much better spent trying to triage resources toward great projects and away from mediocre ones.

In longtermism or x-risk or meta, everything is really confusing so looking at net-positive vs net-negative becomes more compelling.

For what it's worth, it's very common at LTFF and other grantmakers to consider whether grants are net negative.

Also to be clear, you don't consider OpenAI to be EA-adjacent right? Because I feel like there are many discussions about OpenAI's sign over the years.

Comment by Linch on Who is Uncomfortable Critiquing Who, Around EA? · 2023-03-03T00:19:42.610Z · EA · GW

Standard management advice is that managers fire employees too slowly rather than too quickly. 

Comment by Linch on Call to demand answers from Anthropic about joining the AI race · 2023-03-03T00:13:13.972Z · EA · GW

I don't know if demanding answers makes sense, but I do think it's a pretty hard call whether Anthropic is net positive or net negative for AI safety; I'm surprised at the degree to which some people seem to think this question is obvious; I'm annoyed at the EA memeplex for not making this uncertainty more transparent to newcomers/outsiders; I hope not too many people join Anthropic for bad reasons.

Comment by Linch on FLI FAQ on the rejected grant proposal controversy · 2023-03-01T08:57:15.346Z · EA · GW

A month has since passed, and tbh while my emotions have cooled down a bunch, I see no updates on a quick skim either here or on Google. I'm sorry that the FLI team was put under such crossfire and extended EAF vitriol. However, I find myself confused about the grantmaking process that led to such a grant being almost approved. I think I still am more than a bit worried about the generative process that led to this situation, and can only hope that either a) there are exculpating circumstances at FLI that can't be shared or that FLI deprioritized sharing, or b) FLI has quietly made changes to increase their grantmaking quality in the future, or decreased their willingness to give out grants until such changes have been made.

Comment by Linch on "EA is very open to some kinds of critique and very not open to others" and "Why do critical EAs have to use pseudonyms?" · 2023-02-26T04:31:37.870Z · EA · GW

I think we're maybe talking past each other. E.g. I would not classify Thiel's political views as libertarian (I think he might have been at one point, but certainly not in the last 10+ years), and I'll be surprised if the median American or libertarian would. Some specific points:

Example, there are many EAs working in cryptocurrency and they tend to be libertarian

To be clear, the problem with SBF is that he stole billions of dollars. Theft is no less of a problem if it was in the traditional financial system.[1]

I do believe SBF donated large sums to Republicans.

Notably, not to the Libertarian Party!

Cryptocurrency, Race-IQ differences, and polyamory tend to be libertarian dominated areas of fascination.

Seems pretty unfalsifiable to me. Also kinda irrelevant. 

But I don't really want to be speculating on these specific individuals' political views, but make the broader point that those areas of interest are associated with libertarians.

Seems like an unusual framing of "to-date, all the major EA scandals have been caused by libertarians." Usually when I think (paraphrased)"X group of people caused Y" I don't think "X group of people have areas of interests in the vicinity of Y."

  1. ^

     If anything, non-consensual redistribution is much more of a leftist thing than of any other modern political strand?

Comment by Linch on Cause area: Short-sleeper genes · 2023-02-26T02:00:53.717Z · EA · GW

I quite appreciate this comment, thank you!

Comment by Linch on "EA is very open to some kinds of critique and very not open to others" and "Why do critical EAs have to use pseudonyms?" · 2023-02-25T13:07:57.009Z · EA · GW

I will point out that to-date, all the major EA scandals have been caused by libertarians (cryptocurrency, race science, sexual abuse in polyamorous community).

Hmm, this seems patently false to me?[1] Am I misunderstanding something? If not, I'd appreciate it if people didn't assert false things on the forum.

  1. ^

    SBF was a major Democratic donor with parents who are Democratic donors. I doubt he ever identified as libertarian. Among the biggest critiques of Bostrom's academic views is that he seems too open to authoritarian surveillance (cf. Vulnerable World Hypothesis), hardly a libertarian position. I don't know which incidents of "sexual abuse in polyamorous community" you're referring to, but I suspect you're wrong there too.

Comment by Linch on Announcing the Launch of the Insect Institute · 2023-02-25T00:41:52.204Z · EA · GW

Not eating bugs is a win! People who already aren't going to do this are not a group we need to reach.

Comment by Linch on Manifold Markets Charity program ending March 1st · 2023-02-23T22:18:31.734Z · EA · GW

The charity program will no longer be cancelled, according to Manifold's Twitter.

Comment by Linch on A statement and an apology · 2023-02-22T22:25:50.359Z · EA · GW

Thanks, appreciate the update! <3

Comment by Linch on A statement and an apology · 2023-02-22T21:30:17.131Z · EA · GW

Thanks, appreciate the feedback. I didn't mean my comment as sarcastic and have retracted the comment. I had an even less charitable comment prepared but realized that "non-native speaker misunderstood what I said" is also a pretty plausible explanation given the international nature of this forum.

I might've been overly sensitive here, because the degree of misunderstanding and the sensitive nature of the topic feels reminiscent of patterns I've observed before on other platforms. This is one of the reasons why I no longer have a public Twitter.

Comment by Linch on A statement and an apology · 2023-02-22T21:10:56.261Z · EA · GW

I definitely agree that there might be other incidents that come to light. I still disagree that the presence of at least 5 incidents is much of an update that Time is underselling things. 

Comment by Linch on A statement and an apology · 2023-02-22T21:02:12.526Z · EA · GW

Is English your native language? If not, I sometimes have trouble reading Mandarin texts and I found Google Translate to be okay. There might be better AI translation in the coming years as well.

Comment by Linch on A statement and an apology · 2023-02-22T20:42:20.243Z · EA · GW

Let me be more explicit: 

Upon reading the Time article, I immediately assumed that whoever the article was talking about did other creepy things. Assuming the Time article did not misrepresent things hugely, the idea that the person (who we now know is Owen) has not done any other creepy things did not even cross my mind. I feel like this is an extremely normal, even boring, within-context reading.

On the other hand, when you said that "Context that makes Owen look worse" includes "Owen self-admittedly went on to make other inappropriate comments to people on 4 other occasions" this implies to me that your prior belief before reading Owen's statement was that whoever the Time article was referring to did not do other bad things, or at least did less bad things than say 4 other inappropriate comments of similar magnitude. 

Because your reading appears to have differed so much from my own, I'm remarking on how this seems like a pretty odd prior to have, from my perspective.

Comment by Linch on A statement and an apology · 2023-02-22T20:30:05.110Z · EA · GW

???? I don't understand what your comment is trying to imply. 

Comment by Linch on A statement and an apology · 2023-02-22T20:18:07.368Z · EA · GW

Owen himself claims that the culture of EA contributed to his sexual misconduct.  

Regardless of my own views about which are the largest cultural problems in EA, what's your prior that people who do wrongdoing are accurate in their public assessment of factors that diminish their moral responsibility and/or make themselves look better? Your italicized bolding implies that you think this is an unusually reliable source of truth, whereas I pretty straightforwardly think it's unusually bad evidence.

Comment by Linch on People Will Sometimes Just Lie About You · 2023-02-22T20:07:01.818Z · EA · GW

Yeah I've heard elsewhere that NYT is pretty unusual here, would trust them less than other media.

Comment by Linch on A statement and an apology · 2023-02-22T19:42:43.639Z · EA · GW

I agree with that. But also, I don't think you necessarily need a model of bias or malfeasance by anybody else. If I was reading a statement/apology by someone who has zero power remaining in this community, I still would have significant doubts about its accuracy. 

Comment by Linch on A statement and an apology · 2023-02-22T19:21:56.452Z · EA · GW

Owen self-admittedly went on to make other inappropriate comments to people on 4 other occasions (although they were self-judged to be less egregious). 

Sorry, what was your prior belief here? Upon reading that section in the Time article, I definitely did not interpret (paraphrased) "telling a job interviewee staying at your house about your masturbation habits" as a one-off incident by someone who never otherwise does creepy things, and I doubt the average Time reader did.

EDIT: I'm confused about the disagree-votes. Did other people reading the Time article assume that it was a one-off incident before Owen's apology?

EDIT2: Fwiw I thought the rest of the comment that I replied to was a good contribution to the discourse, and I upvoted it before my comment.

Comment by Linch on A statement and an apology · 2023-02-22T19:04:00.471Z · EA · GW

There may yet be further events that haven't yet been reported to, or disclosed by Owen, and indeed, on the outside view, most events would not be suchly reported.

I want to highlight this. The more general thing to flag is that this is only Cotton-Barratt's side of the story, albeit apparently checked by several people. The prior is that at least some of this presentation is slanted in his favor, subconsciously or otherwise.

I don't think it's reasonable to take either the facts or (especially) the framing of this story at face value without entertaining at least significant doubts, and I'm surprised at the number of commentators who appear to be doing this. 

Comment by Linch on EV UK board statement on Owen's resignation · 2023-02-22T05:07:30.512Z · EA · GW

This seems good, both as reparation and as a reward for speaking up.

Comment by Linch on People Will Sometimes Just Lie About You · 2023-02-19T01:59:42.437Z · EA · GW

One example is how the New York Times decided that they wouldn't cover tech positively:

My understanding from those links is that NYT's actions here are a significant outlier in journalistic/editorial ethics, enough that both Kelsey and Matt thought it was relevant to comment on them in those terms.


I'd never heard anything like it[...]

For the record, Vox has never told me that my coverage of something must be 'hard-hitting' or must be critical or must be positive, and if they did, I would quit. Internal culture can happen in more subtle ways but the thing the NYT did is not normal.


But what happened is that a few years ago the New York Times made a weird editorial decision with its tech coverage.

Comment by Linch on EigenKarma: trust at scale · 2023-02-09T02:06:25.710Z · EA · GW

The literature on differential privacy might be helpful here. I think I may know a few people in the field, although none of them are close.

Comment by Linch on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-08T05:49:17.614Z · EA · GW

I think they're currently not planning to, see here.

Comment by Linch on EA, Sexual Harassment, and Abuse · 2023-02-08T03:25:54.922Z · EA · GW

For what it's worth, my current vote is for immediate suspension in situations where there are credible allegations that anyone in a grantmaking etc. capacity used such powers in retaliation for rejected romantic or sexual advances. In addition to being illegal, such actions are just so obviously evidence of bad judgment and/or poor self-control that I'd hesitate to consider anyone who acted in such ways a good fit for any significant position of power. I have not thought about the specific question much, but it's very hard for me to imagine any realistic situation where someone with such traits is a good fit for grantmaking.

Comment by Linch on The number of burner accounts is too damn high · 2023-02-07T23:16:05.071Z · EA · GW

I think posting under pseudonyms makes sense for EAs who are young[1], who are unsure what they want to do with their lives, and/or people who have a decent chance of wanting jobs that require discretion in the future, e.g. jobs in politics or government. 

I know at least some governance people who likely regret being openly tied with Future Fund and adjacent entities after the recent debacle. 

Also in general I'm confused about how the tides of what's "permissible internet dirt to dig up on people" will change in the future. Things might get either better or much worse, and in the worse worlds there's some option value left in making sure our movement doesn't unintentionally taint the futures of some extremely smart, well-meaning, and agentic young people.

That said, I personally prefer pseudonymous account names with a continued history like Larks or AppliedDivinityStudies[2], rather than anonymous accounts that draw attention to their anonymity, like whistleblower1984. 

  1. ^

     <22? Maybe <25, I'm not sure. One important factor to keep track of is how likely you are to dramatically change your mind in politically relevant ways. E.g., I think if you're currently a firm Communist it's bad not to tell your voters about it, but plenty of open-minded young people quickly go through phases of Communism, then anarcho-capitalism, then anarcho-socialism, etc., and depending on how the tides change, maybe you don't want the blog musings you wrote at 19 to become too tied to your public identity.

  2. ^

    A consideration against, and maybe a strong enough consideration to make my whole point moot, is the live possibility of much much better AI stylometry in the next decade.

Comment by Linch on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-07T22:56:17.123Z · EA · GW

I find myself pretty confused here, and can easily imagine that I screwed up in this assessment. The two main things I find confusing are a) what standards I should have for critics who're probably younger than 20[1], where I do in fact consider these norm violations to be quite bad if they came from people who are, say, older than 22, and b) how to relate to the very real possibility that I or people I know are doing bad things. Humans have all sorts of biases to protect their in-group etc., and I can easily imagine both undercorrecting and overcorrecting here.

  1. ^

     Which I'm not very calibrated about. You're much more calibrated than I am here, though for this specific question there are obvious reasons I shouldn't defer to you.

Comment by Linch on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-07T03:29:57.207Z · EA · GW

Hi, I think on balance I appreciate this post. This is a hard thing for me to say, as the post has likely caused nontrivial costs to some people rather close to me, and has broken some norms that I view as both subtle and important. But on balance I think our movement will do better with more critical thinkers, and more people with critical pushback when there is apparent divergence between stated memes and revealed goals. 

I think this is better both culturally, and also is directly necessary to combat actual harm if there is also actual large-scale wrongdoing that agreeable people have been acculturated to not point out. I think it will be bad for the composition and future of our movement if we push away young people who are idealistic and disagreeable, which I think is the default outcome if posts like this only receive critical pushback.

So thank you for this post. I hope you stay and continue being critical.

Comment by Linch on [Link] How effective altruists ignored risk · 2023-02-07T03:12:55.941Z · EA · GW

I previously addressed this here.

Comment by Linch on EA, Sexual Harassment, and Abuse · 2023-02-06T21:05:15.822Z · EA · GW

I directionally agree with you. However, they do have a few other levers. For example, local EA groups can ban people based on information from CH. Grantmakers can also ask CH for consultation about people they hear concerning grapevine rumors about and outsource this side of investigations to them.

Some of this refers to what I refer to as "mandate" in my earlier shortform that I linked.

I agree that they can't make many decisions about private events, take legal action, or fire people they do not directly employ.

Comment by Linch on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-06T08:16:49.980Z · EA · GW

To give a concrete example, my (non-EA) ex was from Europe, and she had a relative who both didn't like that she had two partners, and that I was non-white. My understanding was that the "poly" dimension was seen as substantially worse than the racial dimension. The relative's attitude didn't particularly affect our relationship much (we both thought it was kind of funny). But at least in Western countries, I think your bar on outing poly people who don't want to be outed should be at least as high as your bar for outing interracial couples who don't want to be outed, given the relative levels of antipathy people in Western countries have between the two. 

(I may want to delete this comment later).

Comment by Linch on I No Longer Feel Comfortable in EA · 2023-02-06T04:36:32.520Z · EA · GW

Morality is hard in the best of times, and now is not the best of times. The movement may or may not be a good fit for you. I'm glad you're still invested in doing good regardless of perceived or actual wrongdoing of other members of the movement to date, and I hope I and others will do the same.

Comment by Linch on Should EVF consider appointing new board members? · 2023-02-06T01:34:35.416Z · EA · GW

I guess I'm imagining from either Open Phil's perspective, or that of other large funders, the risk of value misalignment or incompetence of Open Phil staff is already priced in, and they've already paid the cost of evaluating Claire.

It's hard to imagine (purely from the perspective of reducing costs of auditing) Holden or Cari or Dustin preferring an unknown quantity to Claire. There might be other good reasons to prefer having a more decentralized board[1], but this particular reason seems wrong.

Likewise, from the perspective of future employees or donors to EVF, the risk of value misalignment or incompetence of EVF's largest donor is already a cost they necessarily have to pay if they want to work for or fund EVF. So adding a board member (and another source of COI) that's not associated with Open Phil can only increase the number of COIs, not decrease it.

  1. ^

    for example, a) you want a diversity of perspectives, b) you want to reduce the risks of being beholden to specific entities c) you want to increase the number of potential whistleblowers

Comment by Linch on Should EVF consider appointing new board members? · 2023-02-06T01:10:33.596Z · EA · GW

Your argument here cuts against your prior comment.

Comment by Linch on EA, Sexual Harassment, and Abuse · 2023-02-04T12:31:21.754Z · EA · GW

Why was this comment downvoted?

Comment by Linch on Doing EA Better · 2023-02-04T11:12:43.674Z · EA · GW

Funnily enough, the "pigeon flu" example may cease to be a hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.

Comment by Linch on What advice would you give someone who wants to avoid doxing themselves here? · 2023-02-03T07:30:31.585Z · EA · GW

I removed my upvote for the same reason.

Comment by Linch on Doing EA Better · 2023-02-03T06:08:02.988Z · EA · GW

Imagine a forecaster that you haven't previously heard of told you that there's a high probability of a new novel pandemic ("pigeon flu") next month, and their technical arguments are too complicated for you to follow.[1]

Suppose you want to figure out how much you want to defer to them, and you dug through to find out the following facts:

a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.

b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics

c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.

I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b) or especially c), as you might expect domain-specific ability on predicting pandemics to be much stronger evidence for whether the prediction of pigeon flu is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.

With a quote like 

Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.

The natural interpretation to me is that Cowen (and by quoting him, by extension the authors of the post) is trying to say that FF not predicting the FTX fraud and thus "existential risk to FF"  is akin to a). That is, a dispositive domain-specific bad forecast that should be indicative of their abilities to predict existential risk more generally. This is akin to how much you should trust someone predicting pigeon flu when they've been wrong on past pandemics and pandemic scares. 

To me, however, this failure, while significant as evidence of general competency, is more similar to b). It's embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it's embarrassing and evidence of poor competence to not successfully consider all the risks to your organization. But using the phrase "existential risk" is just a semantics game tying them together (in the same way that "why would I trust the Bayesian updates in your pigeon flu forecasting when you've made elementary math errors in a Bayesian statistics paper" is a bit of a semantics game). 

EAs do not to my knowledge claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.

[1] Or, alternatively, you think their arguments are inside-view correct but you don't have a good sense of the selection biases involved.