Posts

What are some key numbers that (almost) every EA should know? 2021-06-18T00:37:17.794Z
Epistemic Trade: A quick proof sketch with one example 2021-05-11T09:05:25.181Z
[Linkpost] New Oxford Malaria Vaccine Shows ~75% Efficacy in Initial Trial with Infants 2021-04-23T23:50:20.545Z
Some EA Forum Posts I'd like to write 2021-02-23T05:27:26.992Z
RP Work Trial Output: How to Prioritize Anti-Aging Prioritization - A Light Investigation 2021-01-12T22:51:31.802Z
Some learnings I had from forecasting in 2020 2020-10-03T19:21:40.176Z
How can good generalist judgment be differentiated from skill at forecasting? 2020-08-21T23:13:12.132Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:38:07.384Z
David Manheim: A Personal (Interim) COVID-19 Postmortem 2020-07-01T06:05:59.945Z
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA 2020-06-30T19:35:13.376Z
Are there historical examples of excess panic during pandemics killing a lot of people? 2020-05-27T17:00:29.943Z
[Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public? 2020-04-07T01:49:05.770Z
Should recent events make us more or less concerned about biorisk? 2020-03-19T00:00:57.476Z
Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? 2020-03-12T21:19:19.565Z
All Bay Area EA events will be postponed until further notice 2020-03-06T03:19:24.587Z
Are there good EA projects for helping with COVID-19? 2020-03-03T23:55:59.259Z
How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? 2020-02-26T16:16:49.234Z
What types of content creation would be useful for local/university groups, if anything? 2020-02-15T21:52:00.803Z
How much will local/university groups benefit from targeted EA content creation? 2020-02-15T21:46:49.090Z
Should EAs be more welcoming to thoughtful and aligned Republicans? 2020-01-20T02:28:12.943Z
Is learning about EA concepts in detail useful to the typical EA? 2020-01-16T07:37:30.348Z
8 things I believe about climate change 2019-12-28T03:02:33.035Z
Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z
Linch's Shortform 2019-09-19T00:28:40.280Z
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z
Outcome of GWWC Outreach Experiment 2017-02-09T02:44:42.224Z
Proposal for an Pre-registered Experiment in EA Outreach 2017-01-08T10:19:09.644Z
Tentative Summary of the Giving What We Can Pledge Event 2015/2016 2016-01-19T00:50:58.305Z
The Bystander 2016-01-10T20:16:47.673Z

Comments

Comment by Linch on Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY) · 2021-07-23T21:28:49.469Z · EA · GW

Got it, I agree with you that this can be what's going on! When the intuition is spelled out, we can clearly see that the "trick" is comparing individual incomes as if they were comparable to household incomes.

Living in the Bay Area, I think some of my friends do forget that in addition to being extremely rich by international standards, they are also somewhere between fairly and extremely rich by American standards.

Comment by Linch on Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY) · 2021-07-23T21:25:48.185Z · EA · GW

Speaking of the second video, I have my own fan theory that "Blank Space" is based on the popular manga and anime series Death Note.

Comment by Linch on Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY) · 2021-07-23T19:59:45.609Z · EA · GW

I dunno, I feel like these are two fairly different claims. I also expect the average non-American household to be larger than the average American household, not smaller (so there will be <6 billion households worldwide).

Comment by Linch on Buck's Shortform · 2021-07-23T19:55:35.336Z · EA · GW

I thought you were making an empirical claim with the quoted sentence, not a normative claim. 

Comment by Linch on Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY) · 2021-07-23T18:11:07.525Z · EA · GW

Not your fault, but

the median American household is comfortably in the top richest 1% globally 

does not seem plausible to me, because the US has ~4% of the world population.

Comment by Linch on Buck's Shortform · 2021-07-23T18:07:18.775Z · EA · GW

Below this level of consumption, they’ll prefer consuming dollars to donating them, and so they will always consume them. And above it, they’ll prefer donating dollars to consuming them, and so will always donate them. And this is why the GWWC pledge asks you to input the C such that dF(C)/d(C) is 1, and you pledge to donate everything above it and nothing below it.


Wait, the standard GWWC pledge is 10% of your income, presumably based on cultural norms like tithing, which in themselves might reflect an implicit understanding that (if we assume log utility) giving up a constant fraction of consumption is equally costly to any individual, so the pledge was made for coordination rather than single-player reasons.
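
To spell out the log-utility point (a quick derivation of my own, not from the original post): with u(c) = log c, the utility cost of donating a fraction f of consumption c is

```latex
u(c) - u\bigl((1-f)\,c\bigr) = \log c - \log\bigl((1-f)\,c\bigr) = -\log(1-f),
```

which is independent of c, so a fixed 10% pledge is "equally costly" to every donor regardless of income.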

Comment by Linch on A Sequence Against Strong Longtermism · 2021-07-23T02:54:30.538Z · EA · GW

The set of all possible futures is infinite, regardless of whether we consider the life of the universe to be infinite. Why is this? Add to any finite set of possible futures a future where someone spontaneously shouts “1”!, and a future where someone spontaneously shouts “2”!, and a future where someone spontaneously shouts

Wait, are you assuming that physics is continuous? If so, isn't this a rejection of modern physics? If not, how do you respond to the objection that there is a limited number of possible configurations for atoms in our controllable universe to be in?

I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.

I'm worried that this will come across as a bravery debate, but are you familiar with the phrase "anything is possible when you lie"?

I don't find it particularly problematic that sufficient numerical fiddling (or verbal fiddling for that matter) can produce arbitrary conclusions. 

Your critique reminds me of people who argue that consequentialism can be used to justify horrific conclusions, as if consequentialism had an unusually bad track record, or as if other common ethical systems had never justified any terrible actions.

Comment by Linch on Writing about my job: pharmaceutical chemist · 2021-07-22T23:48:37.189Z · EA · GW

This seems unlikely from your description, but do you do or know of any work on biologics by any chance? I ask because I'm writing a report on cultured meat and would like a slightly larger pool of reviewers from adjacent industries (eg people who have experience scaling use of CHO cells). 

Comment by Linch on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T10:34:03.202Z · EA · GW

For the first question, I was one of the forecasters who gave close to the current Metaculus median answer (~30%). I can't remember my exact reasoning, but roughly:

1. Outside view on how frequently things have changed + some estimates on how likely things are to change in the future. 

2. Decent probability that the current top charities will go down in effectiveness as the problems become less neglected/we have stronger partial solutions for them/we discover new evidence about them. Concretely:

  • Malaria: CRISPR or vaccines. But I also place decent probability on bednet production and distribution being fully solved by states/international actors/large NGOs.
  • Deworming: possibilities include a) we uncover new evidence that suggests deworming is less effective than we previously thought, especially at 2021/2030 worm loads,* b) mass deworming decreases the total amount of worms, decreasing marginal value, or c) both.
  • Cash: at GiveWell scale, I don't think direct cash transfers are ever competitive with your current best guess for top health/development interventions under naive cost-effectiveness, unless you apply a strong robustness penalty to everything else.
  • Vitamin A: I haven't thought much about it tbqh, so I retreat to priors.

*The evidentiary base for deworming was always shaky to begin with; I think (within the randomista paradigm) it's reasonable to model deworming as a relatively high-risk, high-reward economic intervention.

Comment by Linch on EA Picnic: San Francisco | Sunday, July 11 · 2021-07-22T05:01:50.580Z · EA · GW

EDIT: I'm less certain this is true because I think I didn't fully update on how much the vaccines reduce the risks of covid for young people. I think maybe not getting tested is fine if you aren't likely to be exposed to non-vaccinated people and you aren't in a position to interact heavily with many people.

I was informed 3 days ago that someone at the event now has covid, likely from the event itself.

Dear Linchuan,  

One of the attendees of the EA Picnic let us know they developed COVID symptoms on Friday July 16th and tested positive on Sunday July 18th with a rapid test. They traveled by plane to the Bay Area on July 7 and flew home on July 13. They had little exposure to others except for this trip, so they very likely caught it during the trip, either at the Picnic or while traveling home. They were fully vaccinated with Moderna at the time.


We’re notifying all attendees in case the information is helpful (for example, if you have symptoms and were considering getting tested).

With best wishes for your health,
The EA Picnic team

Speaking for myself and not for any organization, I would strongly recommend that my fellow picnic attendees get tested if at all reasonably possible, especially if they have any symptoms. A simple sanity check: if one vaccinated person at a 200(?)-person event got covid from the event, odds are decent that at least one other person got it, which already translates to a >0.5% chance of covid, or 5,000 microcovids.
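
To make that sanity check explicit, here is the back-of-the-envelope arithmetic as a quick script (the event size is my rough guess from above, not an official count):

```python
# Back-of-the-envelope picnic risk (rough assumed numbers, not a careful model):
attendees = 200          # approximate event size, as guessed above
known_event_cases = 1    # the one reported case

p_infected = known_event_cases / attendees   # naive risk that any other given attendee caught it
microcovids = p_infected * 1_000_000         # 1 microcovid = a one-in-a-million chance of covid

print(f"{p_infected:.1%} ~= {microcovids:,.0f} microcovids")  # 0.5% ~= 5,000 microcovids
```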

While I did not have the forethought to advocate this immediately, in the future, if this happens, people should probably act on this information right away (eg get tested, maybe/probably quarantine until test results are back regardless of symptoms, and definitely quarantine if there are symptoms).

I want to commend the EA Picnic team for their transparency and for notifying us immediately upon receiving the test results. That said, every day counts for reducing further transmission, and in retrospect I think it was a mistake for us to think individually in terms of risks etc, and not to act more coherently as a community (eg by advocating stronger measures for several days immediately upon receipt of this information).

Comment by Linch on Further thoughts on charter cities and effective altruism · 2021-07-21T23:42:46.073Z · EA · GW

(I work for Rethink Priorities on a different team. I had no input into the charter cities intervention report other than feedback on a very early version of the draft. All comments here are truly my own. Due to time constraints, I did not run this by anybody else at the org before commenting.)

The Rethink Priorities report used a 2017 World Bank article on special economic zones as the reference point for potential growth rates for charter cities. The World Bank report concludes, “rather than catalyzing economic development, in the aggregate, most zones’ performance has resembled their national average.” The Rethink Priorities report ends up taking a similarly pessimistic conclusion on charter cities, in part driven by the findings of the World Bank report.

IIRC, the model used a prior distribution informed by the SEZs.

I find the intuition of charter cities more useful than models which have arbitrary assumptions that have substantial implications on the outcome. However, even though David and Jason acknowledged the limitations of their model, it seems as though lots of folks are taking it as the primary implication of the paper! I think this is misguided.

I'm confused what "it" is referring to here. 

On the pessimistic side, a 0% increase in the growth rate is certainly possible. Some charter cities will fail. David and Jason argue that it’s possible for charter cities to have a negative growth effect, though it’s hard to imagine how that’s possible under[...]

Are you saying that a negative growth rate relative to the host country is impossible? (Or so vanishingly unlikely that it's close to impossible?) If so, can you specify some betting odds? I get that you're implicitly long charter cities because of your job, but I'm still happy to make some bets here.

Even Esther Duflo and Abhijit Banerjee, economists known for their support of RCTs, positively reference charter cities in their recent book.

Pretty minor, but I don't think it's reasonable to expect people to buy a book and read through the entire book just to check a reference. At least include a page number!

Imagine a charter city is created on 200+ sq km, with good ocean access, and can attract initial infrastructure investment over $1b in a country that averages under 2% growth. Is it really outside the realm of possibility that the charter city could grow over 8% annually? This impact doesn’t even include the indirect effects of a charter city. [emphasis mine]

I don't get why analyses from all sides keep skipping over detailed analysis of indirect effects.* To me, by far the strongest argument for charter cities is the experimentation value/"laboratories of governance" angle, such that even if individual charter cities are negative in expectation, we'd still see outsized returns from studying and partially generalizing from the outsized successful charter cities that can be replicated elsewhere, host country or otherwise (I mean, that's the whole selling point of the Shenzhen stylized example after all!).

At least, I think this is the best/strongest argument. Informally, I feel like this argument is practically received wisdom among EAs who think about growth. Yet it's pretty suspicious that nobody (to the best of my knowledge) has made this argument concrete and formal in a numeric way and thus exposed it to stress-testing. I've heard that some people worry that making this argument publicly is bad PR or something, which, fair. These days, I try not to think about PR if I can get away with it and leave it to others. But to the best of my knowledge there are no private models of this either, which seems like a large deficiency.

EA money is fairly expensive; the prima facie case for investing $millions or more in a charter town/city because of the putative benefits to several thousand or tens of thousands of potential residents ought to be fairly weak,** but the case is much more plausibly good if the tradeoff is knowledge that can help many more people.

*I also flagged this complaint for the RP report. IIRC (and my memory can be quite faulty) the reasoning for not investigating this further was a) time constraints and b) CCI didn't look into this and RP was trying to engage with CCI's arguments directly. 

**when the counterfactual money can be used to prevent children from dying at $1,000-$10,000/child, so each million you invest in charter cities = 100-1,000 more dead children. Such costs make sense when you aggregate benefits across millions or hundreds of millions of people (even in the US, government agencies have a value of statistical life between $5 and $10 million), but the economic gains for several thousand people have to be truly massive if people are trading off between uncertain economic gain and a percentage-point probability of dying. ^

^ I find it helpful to use a veil-of-ignorance thought experiment for these interventions: what X% chance of dying would you trade off against an income increase of Y?
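
The footnoted arithmetic, spelled out as a quick script (the dollar figures are the rough ranges stated above, not outputs of any model):

```python
# Rough opportunity-cost arithmetic from the footnote above (assumed figures):
cost_per_death_averted = (1_000, 10_000)  # $/child death averted, low and high bounds
investment = 1_000_000                    # each $1M diverted to a charter city

lives_low = investment / cost_per_death_averted[1]   # 1,000,000 / 10,000 = 100
lives_high = investment / cost_per_death_averted[0]  # 1,000,000 / 1,000 = 1,000

print(f"Each $1M forgoes roughly {lives_low:.0f}-{lives_high:,.0f} averted deaths")
```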

Comment by Linch on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-21T17:50:08.911Z · EA · GW

The belief that micro-credit has good investment ROIs for the typical recipient.

Comment by Linch on What would you do if you had half a million dollars? · 2021-07-20T20:50:07.220Z · EA · GW

I lend some credence to the trendlines argument, but mostly I think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than extreme suffering (or other negatives/moral bads), and any additive account of moral goods will, in expectation, shake out to have a lot more positive moral goods than moral bads, unless you have really extreme inside views such that optimizing for extreme moral bads is as likely (or more likely) than optimizing for extreme moral goods.

I do think P(S-risk | singularity) is nontrivial, eg because a) our descendants could be badly mistaken or b) other agents could follow through with credible pre-commitments to torture, but I think it ought to be surprising for classical utilitarians to believe that the EV of the far future is negative.

Comment by Linch on saulius's Shortform · 2021-07-20T20:48:33.304Z · EA · GW

I think this is an interesting point but I'm not convinced that it's true with high enough probability that the alternative isn't worth considering. 

In particular, I can imagine luck/happenstance shaking out such that agents that are arbitrarily powerful on one dimension are less powerful/rational on other dimensions.

Another issue is the nature of precommitments[1]. It seems that under most games/simple decision theories for playing those games (eg "Chicken" in CDT), being the first to credibly precommit gives you a strategic edge under most circumstances (see the toy example after the footnote). But if you're second in those situations, it's not clear whether "I don't negotiate with terrorists" is a better or worse stance than swerving.

(And in the former case, with both sides precommitting, a lot of torture will still happen). 

[1] using what I assume is the technical definition of precommitment
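
To illustrate the first-mover point with toy numbers (the payoffs are my own illustrative choices, not from any source):

```python
# Toy "Chicken" payoff matrix: (player 1 payoff, player 2 payoff).
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),  # crash: worst outcome for both
}

# If player 1 credibly precommits to "straight", player 2's best response
# is to swerve (-1 beats -10), so the first credible precommitter wins.
best_response = max(("swerve", "straight"), key=lambda a: payoffs[("straight", a)][1])
print(best_response)  # swerve
```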

Comment by Linch on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-20T08:48:14.858Z · EA · GW

I think this is a much more plausible view of much of the drop-out phenomena than is “credit constraints.” First, strictly speaking, “credit constraints” is not a very good description of the problem. Let us take the author’s numbers seriously that the return to schooling is, say, 8-10 percent. Let us suppose that families in developing countries could borrow at the prime interest rate. The real interest rate in many countries in the world is around 8 to 10 percent. So given the opportunity to borrow at prime to finance schooling many households would rationally decline the offer. Second, imagine one relaxes the pure credit constraint—would that money flow into education? Returns on investments from micro-credit programs (which typically have lending rates between 12 and 20 percent per annum) are profitable at those lending rates. One would need more research but the range of investments for which households usually borrow have much higher returns (and quicker) than 8-10 percent.

(Copenhagen Consensus 2008 Perspective Paper, Pg8) 

Huh, this take did not survive the test of time well, given the last 13 years of research on microfinance.

Comment by Linch on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-20T06:38:51.640Z · EA · GW

Now, there are many other ways that spending on primary education can be justified—that education is a universal human right, education is a merit good, the demands of political socialization demand universal education. I suspect that the actual positive theory of education has more to do with those than with the economic returns. But for the purposes of the present exercise of comparing alternative uses of public funds across sectors one cannot invoke “human rights” as a reason to spend on schooling without a counter of “intrinsic values” of an unchanged natural environment, which returns the debate to non-quantifiable values.

I find this (Copenhagen Consensus 2008 Perspective Paper, Pg7) the weakest argument so far, since of course rights have non-infinite moral value, and we can extract things like willingness to pay, hedonic utility, etc from how much rights are worth.

Comment by Linch on You should write about your job · 2021-07-19T04:31:58.077Z · EA · GW

I'm a generalist researcher at Rethink Priorities. I'm on the longtermism team and that's what I try to spend most of my time doing, but some of my projects touch on global health and some of my projects are relevant to animal welfare as well (I think doing work across cause areas is fairly common at RP, though this will likely decrease with time as the org gets larger and individual researchers become more specialized).

I'm happy to talk about my job, but I'm unsure how valuable this would be, given that a) "generalist researcher" is probably one of the best-known EA jobs and b) Rethink is probably one of the more public EA orgs, at least among orgs that aren't primarily doing community building, and people interested can look at things like our AMAs (2019, 2020).

Comment by Linch on What foundational science would help produce clean meat? · 2021-07-18T07:00:11.711Z · EA · GW

In addition to what avacyn said about hydrolysates (very important! Amino acids are really expensive!), off the top of my head:

  • Figuring out ways to do extreme sanitation/fully aseptic operations cheaply at scale
    • Mammalian stem cells double every 21-48 hours; E. coli doubles every ~25 minutes. If you have a giant bioreactor full of yummy meat cells + growth media at pH ~7.0 and temp ~37°C, one stray bacterium or virus can ruin your day (see the toy calculation after this list).
    • Maybe more of an engineering problem than a foundational science problem, but solving this would also be fairly helpful for a number of medical and other bioengineering scaleup questions
  • Novel(?) materials science that lets you build high-quality reusable bioreactors that are neither as expensive as stainless steel nor "use and discard," as has become common(?) in biologics
  • Genetic engineering to create cells that have much longer lifespans (or are immortal), breaking the Hayflick limit
  • Other cell line genetic engineering, including but not limited to:
    • faster growth rates
    • higher metabolic efficiency
    • countering catabolite and CO2 inhibition
  • Tissue engineering/scaffolding, useful both for adding structure to clean meat (eg in steaks) and (in the limit) for creating replacement body parts in surgery for humans
    • though I've been advised that scaffolding is unlikely to be the most significant bottleneck for cultured meat.
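
To make the doubling-time asymmetry vivid, here's a toy calculation (the doubling times are the rough figures from the bullet above; the numbers are illustrative only):

```python
# Toy contamination arithmetic, using the rough doubling times cited above.
hours = 24
cell_doubling_h = 24         # mammalian stem cells: ~21-48 h; take ~24 h
ecoli_doubling_min = 25      # E. coli: ~25 min

cells = 2 ** (hours / cell_doubling_h)              # meat cells: ~2x per day
bacteria = 2 ** (hours * 60 / ecoli_doubling_min)   # one stray E. coli: 2^57.6 per day

print(f"After {hours}h: meat cells x{cells:.0f}, stray E. coli x{bacteria:.1e}")
# After 24h: meat cells x2, stray E. coli x2.2e+17
```
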
Comment by Linch on All Possible Views About Humanity's Future Are Wild · 2021-07-18T06:05:23.541Z · EA · GW

Not all actions humans are capable of doing are good. 

Comment by Linch on EA Superpower?! 😋 · 2021-07-17T00:06:39.694Z · EA · GW

I'd want the orange pill, I think. 

Comment by Linch on Linch's Shortform · 2021-07-15T23:42:17.177Z · EA · GW

Can you be less abstract and point, quantitatively, to which of the numbers I gave seem vastly off to you, and insert your own numbers? I definitely think my numbers are pretty fuzzy, but I'd like to see different ones rather than just arguing verbally.

(Also, I think my actual original argument was a conditional claim, so it feels a little bit weird to be challenged on the premises of it! :))

Comment by Linch on How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts? · 2021-07-15T21:45:15.756Z · EA · GW

Can you give a concrete and detailed (anonymized) example of this? As presented, it feels like the people you're talking to aren't saying something very useful, but I only have your side of the conversation so it might be helpful for us to understand in a bit more detail what was actually going on.

Comment by Linch on New blog: Cold Takes · 2021-07-15T18:33:52.340Z · EA · GW

It was a serious question, maybe presented in a slightly glib way.

Comment by Linch on New blog: Cold Takes · 2021-07-13T21:48:58.797Z · EA · GW

I too am excited about this! In the "about" page, you say:

Most of the posts on this blog are written at least a month before they're posted, sometimes much longer. I try to post things that are worth posting even so, hence the name "Cold Takes."

So my question here is, what's your preferred feedback policy/commenting norms? Should we bias towards more normal "EA Forum commenting norms" or closer to "write out our comments at least a month before they're posted, sometimes much longer, and only comment if upon >1 month of reflection we still think they're worth your time/attention to read?"

Comment by Linch on How to explain AI risk/EA concepts to family and friends? · 2021-07-12T09:57:32.396Z · EA · GW

This is not exactly the answer you're looking for, and I'm not confident about this, but I think it's maybe good to first refine your reasons for working on AI risk and be clear about what you mean, and after you get a good sense of what you mean (at least enough to convince a much more skeptical version of yourself), a more easily explainable version of the arguments may come naturally to you.

(Take everything I say here with a huge lump of salt...FWIW I don't know how to explain EA or longtermism or forecasting stuff to my own parents, partially due to the language barrier). 

Comment by Linch on Jumping in Front of Bullets vs. Organ Donation · 2021-07-11T08:27:42.336Z · EA · GW

This seems surprisingly low to me. Do you have some notes or a writeup of the analysis somewhere?

Comment by Linch on Open Thread: July 2021 · 2021-07-08T02:57:56.809Z · EA · GW

On a semi-related note, Peter Singer appeared on the podcast of a Canadian MP, which I thought was pretty cool.

Comment by Linch on Linch's Shortform · 2021-07-07T07:38:47.548Z · EA · GW

One additional risk: if done poorly, harsh criticism of someone else's blog post from several years ago could be pretty unpleasant and make the EA community seem less friendly.

I think I agree this is a concern. But just so we're on the same page here, what's your threat model? Are you more worried about

  1. The EA community feeling less pleasant and friendly to existing established EAs, so we'll have more retention issues with people disengaging?
  2. The EA community feeling less pleasant and friendly to newcomers, so we have more issues with recruitment and people getting excited to join novel projects?
  3. Criticism makes being open about your work less pleasant, and open Red Teaming about EA projects makes EA move even further in the direction of being less open than we used to be. See also Responsible Transparency Consumption
  4. Something else?

Comment by Linch on Linch's Shortform · 2021-07-07T07:34:26.790Z · EA · GW

I'm actually super excited about this idea though - let's set some courtesy norms around contacting the author privately before red-teaming their paper and then get going!

Thanks for the excitement! I agree that contacting someone ahead of time might be good (so at least someone doesn't first learn about their project being red teamed when social media blows up), but I feel like it might not mitigate most of the potential unpleasantness/harshness. Like I don't see a good cultural way to both incentivize Red Teaming and allow a face-saving way to refuse to let your papers be Red Teamed.

Like if Red Teaming is opt-in by default, I'd worry a lot about it not getting off the ground, while if it's opt-out by default, I'd just find it very suss for anybody to refuse (speaking for myself, I can't imagine ever refusing Red Teaming even if I would rather it not happen).

Comment by Linch on microCOVID.org: A tool to estimate COVID risk from common activities · 2021-07-06T19:57:06.319Z · EA · GW

Are the microCOVID.org team, or other people in EA, tracking Delta and the possibility of scarier variants? Personally, I continue to follow some epidemiologists, virologists, and data people on Twitter, but other than that I've stopped following covid almost completely. I'm wondering whether it's sane to assume that "the community" (or broader society) has enough of a grip on things and can give us forewarning in case the correct choice later is for (even fully vaccinated) people to go into partial or full lockdowns again.

Comment by Linch on Linch's Shortform · 2021-07-06T01:46:00.794Z · EA · GW

No, a weaker claim than that; just saying that P(we spread to the stars | we don't all die or are otherwise curtailed by AI in the next 100 years) > 1%.

(I should figure out my actual probabilities on AI and existential risk with at least moderate rigor at some point, but I've never actually done this so far).

Comment by Linch on EA Infrastructure Fund: Ask us anything! · 2021-07-06T01:01:40.710Z · EA · GW

That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point? 

Comment by Linch on Linch's Shortform · 2021-07-05T23:31:46.730Z · EA · GW

Upon (brief) reflection, I agree that relying on the epistemic savviness of the mentors might be too much, and that the best version of the training program will train a sort of keen internal sense of scientific skepticism that's not particularly reliant on social approval.

If we have enough time I would float a version of a course that slowly goes from very obvious crap (marketing tripe, bad graphs) into things that are subtler crap (Why We Sleep, Bem ESP stuff) into weasely/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?) into things that are genuinely hard judgment calls (papers/blog posts/claims accepted by current elite EA consensus). 

But maybe I'm just remaking the Calling Bullshit course but with a higher endpoint.

___

(I also think it's plausible/likely that my original program of just giving somebody an EA-approved paper + say 2 weeks to try their best to Red Team it will produce interesting results, even without all these training wheels). 

Comment by Linch on Help Rethink Priorities Use Data for Animals, Longtermism, and EA · 2021-07-05T23:20:48.718Z · EA · GW

We'll likely have at least one more internship round before you graduate, so stay tuned! 

Comment by Linch on Linch's Shortform · 2021-07-05T23:07:40.211Z · EA · GW

Hmm I feel more uneasy about the truthiness grounds of considering some of these examples as "ground truth" (except maybe the Clauset et al example, not sure). I'd rather either a) train people to Red Team existing EA orthodoxy stuff and let their own internal senses + mentor guidance decide whether the red teaming is credible or b) for basic scientific literacy stuff where you do want clear ground truths, let them challenge stuff that's closer to obvious junk (Why We Sleep, some climate science stuff, maybe some covid papers, maybe pull up examples from Calling Bullshit, which I have not read).

Comment by Linch on Linch's Shortform · 2021-07-05T22:22:57.900Z · EA · GW

Hmm, I think the most likely way downside stuff will happen is by flipping the sign rather than reducing the magnitude; curious why your model is different.

I wrote a bit more in the linked shortform.

Comment by Linch on Linch's Shortform · 2021-07-05T17:59:40.347Z · EA · GW

FWIW I'm also skeptical of naive ex ante differences of >~2 orders of magnitude between causes, after accounting for meta-EA effects. That said, I also think maybe our culture will be better if we celebrate doing naively good things over doing things that are externally high status.* 

But I don't feel too strongly, main point of the shortform was just that I talk to some people who are disillusioned because they feel like EA tells them that their jobs are less important than other jobs, and I'm just like, whoa, that's just such a weird impression on an absolute scale (like knowing that you won a million dollars in a lottery but being sad that your friend won a billion). I'll think about how to reframe the post so it's less likely to invite such relative comparisons, but I also think denying the importance of the relative comparisons is the point. 

*I also do somewhat buy arguments by you and Holden Karnofsky and others that it's more important for skill/career capital etc building to try to do really hard things even if they're naively useless. The phrase "mixed strategy" comes to mind.

Comment by Linch on Linch's Shortform · 2021-07-05T17:46:47.807Z · EA · GW

I agree with this, and also I did try emphasizing that I was only using MIRI as an example. Do you think the post would be better if I replaced MIRI with a hypothetical example? The problem with that is that then the differences would be less visceral. 

Comment by Linch on Linch's Shortform · 2021-07-05T17:41:38.525Z · EA · GW

I think the world either ends (or suffers some other form of implied-permanent x-risk) in the next 100 years, or it doesn't. And if the world doesn't end in the next 100 years, we will eventually either a) settle the stars or b) end or be drastically curtailed at some point >100 years out.

I guess I assume b) is pretty low probability with AI, like much less than a 99% chance. And 2 orders of magnitude isn't much when all the other numbers are pretty fuzzy and span that many orders of magnitude.

(A lot of this is pretty fuzzy).

Comment by Linch on EA needs consultancies · 2021-07-04T01:59:41.858Z · EA · GW

I do agree with you that client quality and incentives are a serious potential problem here, especially when we consider potential funders other than Open Phil. A potential solution here is for the rest of the EA movement to make it clear that "you are more likely to get future work if you write truthful things, even if they are critical of your direct client/more negative than your client wants or is incentivizing you to write/believe," but maybe this message/nuance is hard to convey and/or may not initially seem believable to people more used to other field norms.

Comment by Linch on EA needs consultancies · 2021-07-04T00:16:09.181Z · EA · GW

Thanks for the detailed response! 

The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client... but to me that's resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned. 

Hmm, on reflection maybe the issue isn't as particular to consulting. I think the issue here isn't that people by default have overwhelming incentives against truth, but just that actually seeking truth is such an unusual preference in the vast majority of contexts that the whole idea is almost alien to most people. Like they hear the same words but don't know what they mean/internalize this at all.

I'm probably not phrasing this well, but to give a sense of my priors: I guess my impression is that approximately every entity that perceives itself as directly doing good outside of EA* is not seeking truth, and this systematically corrupts them in important ways. A non-random smattering of examples that come to mind: {public health (on covid, vaping, nutrition), bio-ethics, social psychology, developmental econ, climate change, vegan advocacy, religion, US Democratic party, diversity/inclusion} as instantiated in {academia, activist groups, media, regulatory groups, "mission-oriented" companies}. My limited experience with "mission-oriented" consultancies is that they're not an exception.

I think the situation is plausibly  better outside of do-gooders. For example, I sort of believe that theoretical CS has much better publication norms than the listed academic fields, and that finance or poker people are too focused on making money to be doing much grandstanding.** 

Similarly, I would be surprised, but not overwhelmingly so, if mission alignment is the issue here, such that if we took random McKinsey associates who are used to working in profit-seeking industries with higher standards, things would be okay/great.

I wonder the extent to which employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat, say controversial things that could upset donors) - I think you could make a case that consultants are actually better poised to do some of that truth-seeking e.g., if it's a true one-off contract

This seems plausible, yeah, though if it's a one-off contract I also don't see a positive incentive to seek truth (to the extent my hypothesis is correct, what you want is consultants who are motivated only by profit + high professional standards).

* The natural Gricean implicature of that claim is that I'm saying that EA orgs are an exception. I want to disavow that implication. For context, I think this is plausibly the second or third biggest limitation for my own work.

** Even that's not necessarily true to be clear. 

Comment by Linch on COVID: How did we do? How can we know? · 2021-07-03T21:01:57.905Z · EA · GW

I place >60% on the herding belief fwiw, especially if we limit to countries that have enough power to actually shake things up (eg China, US, UK, Russia, etc.).


An additional piece of evidence for this is the degree of correlated beliefs about things like HCTs and genetic enhancement. 

Comment by Linch on Shallow evaluations of longtermist organizations · 2021-07-02T21:57:39.470Z · EA · GW

I'm providing numerical context for RP's longtermism team here because a) it's easier to evaluate costs than research(er) quality when you have the data, and b) the costs are by default more invisible when you don't have the data.

Rethink Priorities has recently been expanding into the longtermist sphere, and it did so by hiring Linch Zhang and Michael Aird, the latter part-time, as well as some volunteers/interns.

Just for some numerical context: I'm full-time at RP right now and Michael is half-time at RP and half-time at FHI RSP (so we have 1.5 FTEs). At the beginning of 2021, DaveRhysBenard was half-time and Michael was full-time (2.5 FTEs). That said, David was poached internally for some neartermist work (so I'm not sure how to count that, FTE-wise). I also spent ~1.5 months on neartermist work (not public). In total, this will be about 1.9 FTE-years by EOY (unless you count neartermist work as not part of the longtermist team), not including new hires.

We currently have 4 longtermist summer interns (all paid), 2 of whom are ~half-time (3 FTEs total). They're here for approximately 3 months and are paid for by the EAIF + an additional donation which we earmarked for longtermist interns. Our internal theory of change for the internship focuses more on (a) helping the interns test and improve their fit for EA-style research and (b) helping their managers build management experience to facilitate further scaling, rather than on producing longtermist research outputs. 3 FTEs x 3 months ~= .75 FTE-years.

Finally, we have one volunteer, Charles Dillon (special circumstances). He's approximately half-time for us and started in May. We may have had other volunteers but if so I think they dropped off quickly without time investment on either our part or theirs, so low cost on both ends.* If we assume Charles will be with us until EOY (I sure hope so!), then we have .5 x 2/3 ~= .33 FTE-years in volunteers by EOY 2021. We do not expect to take on additional volunteers.

In total, by EOY 2021 on the RP longtermist team we'll have ~1.9 FTE-years if you only count employees, ~2.65 if you count interns, and ~3 FTE-years if you count both interns and unpaid volunteers. This is not including additional hires, which we may want to make in late Fall 2021.

*(If I'm missing someone, sincere apologies! Let me know and I'll add you)
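
For legibility, the FTE-year arithmetic above as a quick script (all figures are the approximations stated in this comment):

```python
# Tallying RP longtermist team FTE-years for 2021, using the figures above.
employees  = 1.9               # Linch + Michael (+ David and neartermist adjustments), as stated
interns    = 3.0 * (3 / 12)    # 3 FTEs of interns for ~3 months -> 0.75 FTE-years
volunteers = 0.5 * (8 / 12)    # Charles, ~half-time from May through EOY -> ~0.33 FTE-years

print(f"employees only:         ~{employees:.2f} FTE-years")
print(f"+ interns:              ~{employees + interns:.2f} FTE-years")
print(f"+ interns + volunteers: ~{employees + interns + volunteers:.2f} FTE-years")  # ~3
```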

Comment by Linch on EA needs consultancies · 2021-07-02T03:55:14.898Z · EA · GW

Thanks so much for the comment, and congrats on staying on the 80k-suggested job train for 8 years! In your experience as a consultant, how much do people in the field care about truth, as opposed to satisfying what customers think they want, solving principal-agent problems within a company, etc.?

Put another way, what percentage of the time did consultants in your firm provide results that >70% of senior management in a client company initially disagreed with? 

I've heard a (perhaps flippant) claim that analysts at even top consulting companies believe that their job is more about justifying client beliefs than about uncovering the correct all-things-considered belief (and I have recently observed evidence that is more consistent with this explanation than with other nearby ones). So I would like to calibrate expectations here.

Comment by Linch on What are some skills useful for EA that one could learn in ~6 months? · 2021-07-02T00:37:16.033Z · EA · GW

I'm interested in answering a specific question so my answer's at least useful for a specific real (or hypothetical) person, rather than providing generically worded advice that ends up being useful to nobody. :P

Comment by Linch on EA needs consultancies · 2021-06-30T21:45:49.682Z · EA · GW

In this case, do you think RP should focus more on quality and less on quantity as we scale, by satisficing on quantity and focusing/optimizing on research quality? (Concretely, this may mean being very slow to add additional researchers and primarily using them as additional quality checks on existing work, rather than trying to have more output in novel work.) This is very much not the way we currently plan to scale, which is closer to focusing on maintaining research quality while trying to increase quantity/output.

(reiterating that all impressions here are my own).

Comment by Linch on EA needs consultancies · 2021-06-30T18:16:36.919Z · EA · GW

Hmm I think the main reason to start a consultancy is for scalability, since for whatever reasons existing orgs can't hire fast while maintaining quality.

I do think value of time is unusually high at Open Phil compared to the majority of other EA orgs I'm aware of, which points against people leaving Open Phil specifically.

Comment by Linch on What are some skills useful for EA that one could learn in ~6 months? · 2021-06-30T07:30:18.589Z · EA · GW

How many hours/week?

Comment by Linch on My current impressions on career choice for longtermists · 2021-06-30T04:21:59.575Z · EA · GW

Related to this discussion, Paul Graham has a recent article called "How to Work Hard," which readers here might find valuable.

Comment by Linch on [Meta] Is it legitimate to ask people to upvote posts on this forum? · 2021-06-30T02:53:29.438Z · EA · GW

Should the EA community be ok with people publishing a post here and then asking others—in small EA-related groups—to upvote the post

Generally no.

If not, are there exceptions?

The following would be a situation where I'd consider it appropriate to beg for upvotes:

My uncle Pierce, inheritor of the Hawthorne Wipes family fortune, died recently and left 100 million dollars to be given to a charity picked by the first family member who gets 100 upvotes on an internet forum. Please upvote this fast! I would love a chance to get this money and allocate it to the EA Infrastructure Fund, before my cousin wins and donates it all to the Center for Applied Eschatology.

Feel free to undo your upvote after the donation has been sent! Thanks fellow EAs. 

Or something like that.