Posts

What's the Theory of Change/Theory of Victory for Farmed Animal Welfare? 2021-12-01T00:52:32.246Z
How would you define "existential risk?" 2021-11-29T05:17:33.359Z
How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? 2021-11-27T23:46:00.740Z
[Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th) 2021-11-18T21:40:55.260Z
[Job ad] Research important longtermist topics at Rethink Priorities! 2021-10-06T19:09:08.967Z
Cultured meat: A comparison of techno-economic analyses 2021-09-24T22:20:40.077Z
The motivated reasoning critique of effective altruism 2021-09-14T20:43:14.571Z
How valuable is ladder-climbing outside of EA for people who aren't unusually good at ladder-climbing or unusually entrepreneurial? 2021-09-01T00:47:31.983Z
What are examples of technologies which would be a big deal if they scaled but never ended up scaling? 2021-08-27T08:47:16.911Z
What are some key numbers that (almost) every EA should know? 2021-06-18T00:37:17.794Z
Epistemic Trade: A quick proof sketch with one example 2021-05-11T09:05:25.181Z
[Linkpost] New Oxford Malaria Vaccine Shows ~75% Efficacy in Initial Trial with Infants 2021-04-23T23:50:20.545Z
Some EA Forum Posts I'd like to write 2021-02-23T05:27:26.992Z
RP Work Trial Output: How to Prioritize Anti-Aging Prioritization - A Light Investigation 2021-01-12T22:51:31.802Z
Some learnings I had from forecasting in 2020 2020-10-03T19:21:40.176Z
How can good generalist judgment be differentiated from skill at forecasting? 2020-08-21T23:13:12.132Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:38:07.384Z
David Manheim: A Personal (Interim) COVID-19 Postmortem 2020-07-01T06:05:59.945Z
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA 2020-06-30T19:35:13.376Z
Are there historical examples of excess panic during pandemics killing a lot of people? 2020-05-27T17:00:29.943Z
[Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public? 2020-04-07T01:49:05.770Z
Should recent events make us more or less concerned about biorisk? 2020-03-19T00:00:57.476Z
Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? 2020-03-12T21:19:19.565Z
All Bay Area EA events will be postponed until further notice 2020-03-06T03:19:24.587Z
Are there good EA projects for helping with COVID-19? 2020-03-03T23:55:59.259Z
How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? 2020-02-26T16:16:49.234Z
What types of content creation would be useful for local/university groups, if anything? 2020-02-15T21:52:00.803Z
How much will local/university groups benefit from targeted EA content creation? 2020-02-15T21:46:49.090Z
Should EAs be more welcoming to thoughtful and aligned Republicans? 2020-01-20T02:28:12.943Z
Is learning about EA concepts in detail useful to the typical EA? 2020-01-16T07:37:30.348Z
8 things I believe about climate change 2019-12-28T03:02:33.035Z
Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z
Linch's Shortform 2019-09-19T00:28:40.280Z
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z
Outcome of GWWC Outreach Experiment 2017-02-09T02:44:42.224Z
Proposal for an Pre-registered Experiment in EA Outreach 2017-01-08T10:19:09.644Z
Tentative Summary of the Giving What We Can Pledge Event 2015/2016 2016-01-19T00:50:58.305Z
The Bystander 2016-01-10T20:16:47.673Z

Comments

Comment by Linch on Most research/advocacy charities are not scalable · 2022-01-25T17:56:52.373Z · EA · GW

To be clear, this is just a jumble of random thoughts I have, not a clear plan or a deep research topic or anything. I'm just imagining something vaguely in the direction of being an activist shareholder, except your pet cause is alignment rather than eg environmental concerns or boardroom diversity. 

Comment by Linch on Most research/advocacy charities are not scalable · 2022-01-25T11:34:21.712Z · EA · GW

I don't have well-formed views here, but some quick notes:

Investors and researchers who don't believe in your stances or leadership can probably exit and form new companies, and if they do believe, you don't necessarily need to buy shares to get them to listen.

  1. There are transition costs. Forming a new company is nontrivial.
  2. People aren't going to just change companies immediately because they disagree with your strategic direction a little, so there's soft stuff you can do.

Even within the EA community there's disagreement on safety/capabilities tradeoffs, or what safety work actually works. I wonder how you'll pick good leadership for this that all of the EA community is comfortable with.

The bar isn't "an amazing thing with consensus opinion that it's amazing"; the bar is that most decisionmakers think it's better than the status quo, or more precisely, better than "status quo + benefits of offsetting CO2."

Comment by Linch on MichaelA's Shortform · 2022-01-23T09:40:37.326Z · EA · GW

I think the general thrust of your argument is clearly right, and it's weird/frustrating that this is not the default assumption when people talk about megaprojects (though maybe I'm not reading the existing discussions of megaprojects sufficiently charitably). 

2 moderately-sized caveats:

  1. Re 2) "Projects with great EV are really the focus and always have been", I think in the early days of EA, and to a lesser degree still today, a lot of EA's focus wasn't on great EV so much as on high cost-effectiveness. To some degree the megaprojects discourse was set up to push back against this.
  2. Re: 5, "It's probably also partly because a lot of people aren't naturally sufficiently ambitious or lack sufficient self-confidence" I think this is definitely true, but maybe I'd like to push back a bit on the individual framing of this lack of ambition, as I think it's partially cultural/institutional. That is, until very recently, we (EA broadly, or the largest funders, etc.) haven't made it as clear that EA supports and encourages extreme ambition in outputs, in a way that means we (collectively) are potentially willing to pay large per-project costs in inputs.
     
Comment by Linch on Pathways to impact for forecasting and evaluation · 2022-01-18T22:09:36.566Z · EA · GW

But there are also ways in which evaluations can have zero or negative impact. The one that worries me the most at the moment is people taking noisy evaluations too seriously, i.e., outsourcing too much of their thinking to imperfect evaluators. Lack of stakeholder buy-in doesn't seem like that much of a problem for the EA community: Reception for some of my evaluations posts was fairly warm, and funders seem keen to pay for evaluations [emphasis mine]

This doesn't seem like much evidence to me, for what it's worth. It seems very plausible to me that there's enough stakeholder buy-in that people are willing to pay for evaluations in the off-chance they're useful (or worse, willing to get the brand advantages of being someone who is willing to pay for evaluations), but this is very consistent with people not paying as much attention to imperfect evaluators, or not being as willing to change direction based on them, as they ought to be. 

Comment by Linch on Where is a good place to start learning about Forecasting? · 2022-01-16T05:55:37.089Z · EA · GW

Also, Superforecasting is great but longer than it needs to be; I've heard that there are good summaries out there but don't personally know where they are. 

I like this summary from AI Impacts.

Comment by Linch on Comments for shorter Cold Takes pieces · 2022-01-15T07:14:29.934Z · EA · GW

Speaking of "I think that's a great point about the value of seeing people change their opinions in real time," if you don't mind me asking, could you share a sentence or two on why you no longer endorse the above paragraphs?

Comment by Linch on Concrete Biosecurity Projects (some of which could be big) · 2022-01-14T20:29:53.181Z · EA · GW

Oh this is really interesting, thanks! 

Comment by Linch on Concrete Biosecurity Projects (some of which could be big) · 2022-01-14T05:49:33.450Z · EA · GW

Here's the study FYI.

Comment by Linch on You are probably underestimating how good self-love can be · 2022-01-12T19:35:31.560Z · EA · GW

I'm curious what the path to impact here is. Have people tried this and found themselves doing more directly impactful work, producing more creative + useful research, etc.? 

Comment by Linch on Bryan Caplan on EA groups · 2022-01-12T11:33:32.038Z · EA · GW

Yeah, fair.

Comment by Linch on Concrete Biosecurity Projects (some of which could be big) · 2022-01-12T08:49:15.036Z · EA · GW

I can't find the source anymore but I remember being fairly convinced (70%?) that rhinovirus is probably spread primarily via fomites, fwiw. 

The main thing is that snot can carry a lot more viruses than aerosols. It's also suggestive to me that covid restrictions often had major effects on influenza and RSV, but probably much less so on rhinoviruses.

I also don't think we should necessarily overindex on viral respiratory diseases/pandemics, even though I agree they're the scariest. 

Comment by Linch on Bryan Caplan on EA groups · 2022-01-11T23:19:03.096Z · EA · GW

I don't have a good model of what this topping out will look like. My intuition is that there's quite a bit of variance in the top 0.1%, though I agree that the case is weaker for a normal distribution. My reasoning for why "student goodness" is probably not normally distributed is partly that if you care about multiple relatively independent factors (say smarts, conscientiousness, and general niceness) in a multiplicative way, and the individual factors are normally or log-normally distributed, the resulting distribution is approximately log-normal (exactly so for log-normal factors), which is heavy-tailed. 
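A minimal simulation sketch of that multiplicative-factors intuition (not from the original comment; the factor names, parameters, and percentile cutoffs are arbitrary illustrative assumptions):

```python
# Illustrative only: the product of several independent positive factors has a
# heavy right tail (roughly log-normal), because the log of a product is a sum
# of logs, and sums of independent terms tend toward a normal distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three hypothetical, independent "student goodness" factors on arbitrary scales.
smarts = rng.lognormal(mean=0.0, sigma=0.5, size=n)
conscientiousness = rng.lognormal(mean=0.0, sigma=0.5, size=n)
niceness = rng.lognormal(mean=0.0, sigma=0.5, size=n)

goodness = smarts * conscientiousness * niceness

# The top 0.1% sits far above the median -- much more spread than a normal
# distribution centered on the same median would suggest.
median = np.median(goodness)
top = np.quantile(goodness, 0.999)
print("median:", median)
print("99.9th percentile:", top)
print("ratio of 99.9th percentile to median:", top / median)
```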

One funny hypothesis that someone like Bryan could give is something like "oh, the top EA students are all libertarian" (ie, it's the same picture). 

I could maybe buy this, based on the (US-biased, Bay Area-biased) student groups I interact with, and especially if I factor in probably some pro-libertarian bias in Caplan's judgements of students. 

Comment by Linch on Linch's Shortform · 2022-01-11T14:50:11.363Z · EA · GW

What is the empirical discount rate in EA? 

Ie, what is the empirical historical discount rate for donations...

  • overall?
  • in global health and development?
  • in farmed animal advocacy?
  • in EA movement building?
  • in longtermism?
  • in everything else?

What have past attempts to look at this uncovered, as broad numbers? 

And what should this tell us about the discount rate going forwards?

Comment by Linch on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T14:48:07.041Z · EA · GW

This is interesting and I'm glad you're bringing the discussion up. I think your footnote 2 demonstrates a lot of my disagreements with your overall post:

I'm using resources in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute number of resources dedicated towards short-termist projects.

Consider this section: 

Secondly, many of the short-term projects that EA has pursued have been highly effective and I would see it as a great loss if such projects were to suddenly have the rug pulled out from underneath them. Large shifts and sudden shifts have all kinds of negative consequences from demoralizing staff, to wasting previous investments in staff and infrastructure, to potentially bankrupting what would otherwise have been sustainable.

As a practical matter, Alexander Berger (with a neartermism focus) was promoted to a co-ED position at Open Phil, and my general impression is that Open Phil very likely intends to spend many more $s on neartermism efforts in the foreseeable future. So I think it's likely that EA efforts with cost-effectiveness comparable to or higher than GiveWell top charities will continue to be funded (and likely with larger sums) going forwards, rather than "have the rug pulled out from underneath them."

Also:

You may be wondering, is such a thing even possible? I think it is, although it would involve shifting some resources dedicated towards short-termism[7] from supporting short-termist projects to directly supporting short-termists[8]. I think that if the amount of resources available is reduced, it is natural to adopt a strategy that could be effective with smaller amounts of money[9].

You mention in footnote 2 that you're using the phrase "resources" very broadly, but now you're referring to money as the primary resource. I think this is wrong because (especially in LT and meta) we're bottlenecked more by human capital and vetting capacity. 

This confusion seems importantly wrong to me (and not just nitpicking), as longtermism efforts are relatively more bottlenecked by human capital and vetting capacity, while neartermism efforts are more bottlenecked by money. So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (with relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc. capacity on longtermism projects. Getting more vetting capacity from LT people in return for $s from NT people seems like a bad trade on both fronts.

Comment by Linch on Bryan Caplan on EA groups · 2022-01-11T14:15:43.645Z · EA · GW

Why do I prefer EA to, say, libertarian student clubs?  First and foremost, libertarian student clubs don’t attract enough members.  Since their numbers are small, it’s simply hard to get a vibrant discussion going.  EA has much broader appeal.  

It's pretty cool that EA has a broader appeal among student clubs than the third most popular political party in America!

Furthermore, while the best libertarian students hold their own against the best EA students, medians tell a different story.  The median EA student, like the median libertarian student, like almost any young intellectual, needs more curiosity and less dogmatism.  But the median EA’s curiosity deficit and dogmatism surplus is less severe.

I'm surprised that Bryan thinks the best libertarian students are on par with the best EA students, given a) there are more EA students (in his telling) and b) he thinks the median EA student is better. Naively it should be surprising that a group with both more members and a more impressive median wouldn't have a more impressive top... why does Bryan think EA student groups have lower variance? And if he's right, how can we improve this?

Comment by Linch on Making large donation decisions as a person focused on direct work · 2022-01-11T13:55:52.450Z · EA · GW

I'm curious if anyone else has experienced the same dilemma (maybe unlikely since I think Wave pays unusually high salaries for a role focused on direct work)

FWIW I'm in a similar position, largely because I did well in tech and have had some significant asset growth after I started doing direct work.

In addition to things you've mentioned, I've considered just funding a large fraction of Rethink's/my team's funding gap, though I'm confused about the epistemics/community virtues of donating to an employer.[1] I've also considered just funding individuals who I feel really positive about giving money to and whom the existing sources aren't (yet) funding,[2] but a) if I don't lower the bar, I only come across a few of these opportunities a year, and b) if I do lower the bar, I worry about unpleasantness in friendships.[3]

  1. ^

    If you have decently strong inside views about Wave's future valuation/moral impact, can you do the same thing by re-investing in Wave?

  2. ^

    I think I almost certainly should have moved faster in every case where I seriously considered it.

  3. ^

    "once you become known as a philanthropist, you can never tell a bad joke."

Comment by Linch on Rhetorical Abusability is a Poor Counterargument · 2022-01-10T09:21:07.322Z · EA · GW

Consider proposition P:

P: consequentialism leads people to believe predictably wrong things or undertake predictably harmful actions

I think if it were the case that we received evidence for P, it would be reasonable to conclude that consequentialism is more likely to be wrong as a decision procedure[1] than if we received evidence for not-P.

Do you disagree? If not, we should examine the distinction between "(heightened) rhetorical abusability" and P. My best guess is something that I often tritely summarize as "anything is possible when you lie":

Anybody could make up arguments about whether X decision procedure or ethical framework justifies or permits Y. What matters isn't the sophistication (read: sophistry) of the arguments, but what adherents actually believe. As I have seen little evidence that consequentialists in history have taken predictably worse actions than non-consequentialists, I'm not particularly bothered by the hypothetical claimed harms of consequentialism. 

Notably, unlike your post, my argument is contingent upon a specific scaffolding of empirical facts. Strong and representative evidence that consequentialist beliefs are predictably harmful in history, or conceptual, empirically informed arguments that consequentialist beliefs will lead people to predictably do harm in the future, will cause me to update against consequentialism as a decision procedure.

 

  1. ^

    Though it was unclear to me from this post whether you were considering consequentialism as a criterion of rightness vs a decision procedure. If the former, I think the question is less interesting under moral antirealism or nihilism (which I suspect most critics of consequentialism subscribe to).

Comment by Linch on Rowing and Steering the Effective Altruism Movement · 2022-01-09T22:53:40.632Z · EA · GW

Thanks for this post! I think it's interesting, I'm very glad you wrote it, and I'm inclined to agree with a narrow version of it. I imagine many senior EAs will agree as well with a statement like "often to do much more good we're on the margin more bottlenecked by conceptual and empirical clarity than we are by lack of resources" (eg I'd usually be happier for a researcher-year to be spent helping the movement become less confused than for them to acquire resources via eg writing white papers that policymakers will agree with or via public comms or via making money), though of course I imagine degree of agreement varies a lot depending on the operationalizations in question.

One thread of thought I had that's somewhere between an extension and a critique:

In a recent blog post, Open Philanthropy Co-CEO Holden Karnofsky introduces the analogy of “the-world-as-a-ship” as a framework for thinking about efforts to make the world better.

Under this analogy, if you are ‘rowing’ the ship, you are trying to “help the ship reach its current destination faster.” By contrast, if you are ‘steering’ the ship, you aim to “navigate to a better destination than the current one.” Alternative options include anchoring, equity, and mutiny [emphasis mine]

I'd be interested in seeing someone develop further what anchoring, equity, and mutiny look like for the EA and LT movements.

This is particularly relevant here because, in practice, attempts at steering cannot easily be differentiated from attempts at anchoring, equity, or mutiny.

For example, "maybe we should stop what we're doing and critically re-evaluate whether longtermism is good" might be an attempt at steering, but it might also be an attempt at anchoring.

Steering options that look like taking into account more voices within effective altruism, or considering a broader range of perspectives and causes, with an (implicit) inference that equity and diversity lead to better decisions, might likewise be hard to differentiate in practice from an intrinsic preference for within-effective altruism equity, whether among individual EA people or EA cause areas. Hadyn's post on alternatives to donor lotteries comes to mind here. 

Finally, some critiques of the EA orthodoxy by heterodox folks, particularly ones that a) are highly prominent, b) seek external audiences as their primary readers, and c) read as personal attacks, verge on mutiny (albeit poorly executed). Dylan Matthews's 2015 article on EA Global arguably reads this way to me, as do attempts by Torres et al. to paint longtermism as an instantiation of white supremacy.

I'm interested in seeing people developing and clarifying this ontology further. 

If I were to be critical of your post (and to be clear, this is stronger than what I actually believe), I'd argue that steering has overly positive associations in people's heads (not least because Holden identifies EA as trying to do steering!). If people thought through the implications of how steering might happen in practice, when it may be hard to differentiate from the more negatively-connoted (and probably more negative in practice) anchoring, equity, or mutiny, then their emotional associations and conclusions w/r/t the rowing vs. steering debate might be somewhat more measured. 

Comment by Linch on The Bioethicists are (Mostly) Alright · 2022-01-08T21:01:57.179Z · EA · GW

In addition to what AllAmericanBreakfast said, the issue with "all Muslims are complicit in terrorism unless they loudly and publicly condemn terrorism" is that 

a. not all terrorism is committed by Muslims. 

b. The shared notion between an Islamic terrorist and a normal guy who happens to be Muslim is that they both believe in "the will of God", but they have differing notions about what the will of God tells them to do.

c. In contrast (at least in AllAmericanBreakfast's telling), "the establishment" specifically appeals to the bioethics illusion in choosing to be conservative and allowing people to die by omission. The appropriate comparison might instead be a Muslim imam[1] whose teachings were specifically cited by terrorists as a justification for terror and who chose not to condemn terror (or alternatively, blaming God himself, assuming God is real).

I think this is reasonable. I think it's also reasonable to assign partial blame to Karl Marx (and contemporary Marxist scholars and firebrands) for the failures of the Soviet Union, and it's reasonable to assign a small amount of blame to Nietzsche and Kant (as well as contemporary scholars who did not disavow such actions) for the harms of Nazi Germany. Or closer to home, if animal rights terrorism is conducted in the name of Peter Singer, it's reasonable to assign partial blame to Singer for his speech acts, especially if he does not disavow such terrorism.

  1. ^

    Though that comparison is not exact either, since the illusion is not propagated by individual bioethicists so much as the field overall. So perhaps it's closer to whether Muslim imams overall have a duty to disavow terror, and I think this is also reasonable (I'd say the same thing about contemporary Marxist scholars re: Stalin and Nietzsche scholars re: Hitler).

Comment by Linch on The Bioethicists are (Mostly) Alright · 2022-01-08T19:31:06.107Z · EA · GW

Taking what you said at face value, what's going on here, institutionally? Philosophy is a nontrivially competitive field, and Stanford professorships aren't easy to get.

Comment by Linch on The Bioethicists are (Mostly) Alright · 2022-01-08T19:28:03.142Z · EA · GW

Fwiw when I see criticisms of a field, especially in a technical/semi-academic setting, I rarely assume the criticisms are about individuals and generally assume it's about institutions. 

This is possibly epistemically unwise/to our detriment (see Dan Luu's article), and I do think maybe EA currently pays too much attention to ideas and institutions and not enough to people, at least publicly. But I think at the very least, the broad trend in public conversations is for e.g. a criticism about CEA to be more about the institution than specific individuals in it, or a criticism of the US CDC to be more about the inputs and outputs of their decision-making and less about the personal foibles of the director of the US CDC, or specific bureaucrats within it.

Perhaps EA critiques of the bioethics profession shed more heat than light, but that's a different claim than whether individual bioethicists have good opinions or not.

Comment by Linch on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2022-01-05T22:04:48.322Z · EA · GW

“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”

From The Unweaving of a Beautiful Thing, the winner of the recent creative writing contest.

Comment by Linch on The Unweaving of a Beautiful Thing · 2022-01-05T21:48:58.153Z · EA · GW

This is a beautiful story. Just yesterday I remarked that I thought I'd lost my ability to cry. Reading this story has restored that ability.

Comment by Linch on Linch's Shortform · 2022-01-05T05:37:29.450Z · EA · GW

I think I have a preference for typing "xrisk" over "x-risk," as it is easier to type out, communicates the same information, and like other transitions (e-mail to email, long-termism to longtermism), the time has come for the unhyphenated version.

Curious to see if people disagree.

Comment by Linch on Linch's Shortform · 2022-01-03T22:24:44.942Z · EA · GW

I've now responded though I still don't see the connection clearly. 

Comment by Linch on [Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th) · 2022-01-03T10:08:57.112Z · EA · GW

My impression was that in early 2020, there were a lot of serious-sounding articles in the news about how worries about covid were covering up the much bigger problem of the flu.

Comment by Linch on Convergence thesis between longtermism and neartermism · 2022-01-03T05:37:48.614Z · EA · GW

This lens is in contrast to the approach that Effective Institutions Project is taking to the issue, which considers institutions on a case-by-case basis and tries to understand what interventions would cause those specific institutions to contribute more to the net good of humanity.

I'm excited about this! Do people on the Effective Institutions Project consider these institutions from a LT lens? If so, do they mostly have a "broad tent" approach to LT impacts, or more of a "targeted/narrow theory of change" approach?

Comment by Linch on Convergence thesis between longtermism and neartermism · 2022-01-03T05:29:22.610Z · EA · GW

I appreciate the (politer than me) engagement!

These are the key diagrams from Lizka's post:

The key simplifying assumption is that decision quality is orthogonal to value alignment. I don't believe this is literally true, but it's a good start. MichaelA et al.'s BIP (Benevolence, Intelligence, Power) ontology* is also helpful here.

If we think Lizka's B in the first diagram ("a well-run government") is only weakly positive or neutral on the value alignment axis from an LT perspective, and most other dots are negative, we get the simplified result that what Lizka calls "un-targeted, value-neutral IIDM" (that is, improving the decision quality of unaligned actors, which is roughly what much of EA work/grantmaking in IIDM looks like in practice, eg in forecasting or alternative voting) broadly has the same effect as improving technological progress or economic growth. 

I'm more optimistic about IIDM that's either more targeted (e.g. specialized in improving the decision quality of EA institutions, or perhaps via picking a side in great power stuff) or value-aligned (e.g. having predictive setups where we predict certain types of IIDM work differentially benefit the LT future over other goals an institution can have; I think your(?) work on "institutions for future generations" plausibly falls here). 

One way to salvage these efforts' LT impact is claiming that in practice work that apparently looks like "un-targeted, value-neutral IIDM" (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic, e.g. because EAs are the only ones who care about forecasting. 

A secondary reason (not covered by Lizka's post) I'm leery is that influence goes both ways, and I worry that LT people who get stuck on IIDM may (eventually) get corrupted by the epistemics or values of the institutions they're trying to influence, or those of other allies. I don't think this is a dominant consideration however, and ultimately I'd reluctantly lean towards thinking that EA is too small to save the world by ourselves without at least risking this form of corruption**.

*MichaelA wrote this while he was at Convergence Analysis. He now works at RP. As an aside, I do think there's a salient bias I have where I'm more likely to read/seriously consider work by coworkers than other work of equivalent merit; unfortunately I do not currently have active plans to fix this bias.

**Aside 2: I'm worried that my word choice in this post is too strong, with phrases like "corruption" etc. I'd be interested in more neutral phrasing that conveys the same concepts.

Comment by Linch on Convergence thesis between longtermism and neartermism · 2022-01-02T19:39:36.580Z · EA · GW

Yeah, since almost all x-risk is anthropogenic, our prior for economic growth and scientific progress is very close to 50-50, and I have specific empirical (though still not very detailed) reasons to update in the negative direction (at least on the margin, as of 2022).

With regards to IIDM I don't see why that wouldn't be net positive.

I think this disentanglement by Lizka might be helpful*, especially if (like me) your empirical views about external institutions are a bit more negative than Lizka's.

*Disclaimer: I supervised her when she was writing this

Comment by Linch on Convergence thesis between longtermism and neartermism · 2021-12-31T20:09:32.651Z · EA · GW

I'm personally pretty skeptical that much of IIDM, economic growth, and meta-science is net positive, am confused about moral circle expansion (though intuitively feel the most positive about it on this list), and while I agree that "(safe) research and development of biotech is good," I suspect the word "safe" is doing a helluva lot of work here.

Also, the (implicit) moral trade perspective here assumes that there exist large gains from trade from doing a bunch of investment into new cause/intervention areas that naively look decent on both NT and LT grounds; it's not clear to me (and I'd tentatively bet against) that this is a better deal than people working on the best cause/intervention areas for each. 

Comment by Linch on Convergence thesis between longtermism and neartermism · 2021-12-31T20:04:40.342Z · EA · GW

The RCTs or other high-rigor evidence that would be most exciting for long-term impact probably aren't going to be looking at the same evidence base or metrics that would be the best for short-term impact. 

Comment by Linch on Convergence thesis between longtermism and neartermism · 2021-12-31T20:04:00.299Z · EA · GW

Here's my rushed, high-level take. I'll try to engage with specific subpoints later.

My views

Despite making the case for convergence being plausible this does still feel a bit contrived. I am sure if you put effort into it you could make a many weak arguments approach to show that nearterm and longterm approaches to doing good will diverge.

I feel like this is by far the most important part of this post and I think it should be highlighted more/put more upfront. The entire rest of the article felt like a concerted exercise in motivated reasoning (including the introductory framing and the word "thesis" in the title), and I almost didn't read it or bother to comment as a result; I'm glad I did read to this section, however. 

In short I don’t think we yet know how to do the most good and there is a case for much more exploratory research

I agree with this. As a result I was surprised at the "possible implications" section, since it presumes that the main conclusion of the post is correct.

Comment by Linch on Have you considered switching countries to save money? · 2021-12-31T07:07:47.600Z · EA · GW

Yeah, my impression is that the Bahamas is more expensive than many parts of the US (and more expensive than the vast majority of the US if you aren't including housing), particularly if you're planning to live an "expat-y" lifestyle. 

Note that this impression is contested; see discussion here.

Comment by Linch on "Disappointing Futures" Might Be As Important As Existential Risks · 2021-12-30T22:11:01.870Z · EA · GW

So I thought about this post a bit more, particularly the "we never saturate the universe with maximally flourishing beings" and "impossibility of reflective equilibrium" sections. 

If we accept something like the total view of population ethics with linear aggregation, it follows that we should enrich the universe with as much goodness as possible. That means creating maximum pleasure, or eudaimonia, or whatever it is we consider valuable.

[...]

This vision does not depend on specific assumptions about what "flourishing" looks like. It could fit the hedonistic utilitarian idea of hedonium—matter that's organized to efficiently produce simple but maximally happy beings, who have no functions other to experience happiness—but it could also look like something else. 

The situation feels much bleaker to me than that, because for any very specific and plausible definition of "flourishing" X under a total view with linear aggregation, it seems to me that X would likely capture <<1% of the astronomical value under the vast majority of other plausible total/linear definitions of flourishing. 

So it seems to me that if we have a fairly high bar of "existential win", that naively doesn't look too unreasonable (like ">1% of the value of the best possible utopia by Linch's values under reflective equilibrium"), then there's an intuitively compelling worldview that there's a <1% probability of an existential win and thus x-risk is >99%, even if we get the other things right like AI alignment, large-scale coordination, long reflection, etc.

My impression is that this belief is pretty unusual, which leads me to think that I'm missing some important steps. 

Although perhaps the most important consideration here is that all these questions can be deferred until after the long reflection.

Comment by Linch on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-29T22:22:33.752Z · EA · GW

Some anecdata that might or might not be helpful:

As I mentioned on FB, I didn't have a lot of money in 2017, and I was trying to transition jobs (not even to do something directly in EA, just to work in tech so I had more earning and giving potential). I'm really grateful to the EAs who lent me money, including you. If I instead did the standard "work a minimum wage job while studying in my off hours" (or worse, "work a minimum wage job while applying to normal grad jobs, and then work a normal grad job while studying in my off hours") route, I think my career trajectory would've been delayed for at least a year, probably longer.

Delaying my career trajectory would've cost ~$100k in EV if I just stayed in tech and was donating, but I think my current work is significantly more valuable, so I think it would've cost more than that.

The main counterpoint I could think of is that minimum wage jobs are good for the soul or something, and I think it's plausible that if I worked for one long enough I would be more "in touch" with average Americans and/or have been more generally mature on specific axes. I currently do not believe the value of this type of maturity is very high, compared to my actual counterfactual (at Google etc) of the skills/career capital gained via having more experience interacting in "elite cultures," being around ambitious people, or thinking about EA stuff.

Comment by Linch on Linch's Shortform · 2021-12-29T21:50:27.632Z · EA · GW

I think many individual EAs should spend some time brainstorming and considering ways they can be really ambitious, eg come up with concrete plans to generate >$100M in moral value, reduce existential risk by more than a basis point, etc.

Likewise, I think we as a community should figure out better ways to help people ideate and incubate such projects and ambitious career directions, as well as aim to become a community that can really help people both celebrate successes and mitigate the individual costs/risks of having very ambitious plans fail.

Comment by Linch on Should I fly instead of taking trains? · 2021-12-28T06:06:53.324Z · EA · GW

you don't want to take a moral position where it's ok to harm some people in order to help others "more effectively".

This is not a full defense of my normative ethics, but I think it's reasonable to "pull" in the classical trolley problem, and I want to note that I think this is the most common position among EAs, philosophers, and laymen.

In addition, the harm from increasing CO2 emissions is fairly abstract, and to me should not invoke many of the same non-consequentialist moral intuitions as e.g. agent-relative harms like lying, breaking a promise, ignoring duties to a loved one, etc.

Second, some cause areas lots of people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you're trying to help. Which is not necessarily a reason to drop them, but is in my opinion certainly a reason not to treat them as the single cause you want to put all your eggs into. [emphasis mine]

I don't personally agree with this line of reasoning. There are a bunch of nuances here*, but at heart my view is that usually you either believe the cognitive-bias arguments are strong enough to drop your top cause area(s), or you don't. So I do think we should be somewhat wary of arguments that lead to us having more resources/influence/comfort (but not infinitely so). However, the most productive use of this wariness is to apply stronger scrutiny to arguments or analyses that oh-so-coincidentally benefit ourselves overall, rather than to hedge at less important levels.

Donation splitting is possibly a relevant prior discussion here.

*for example, there might be unusually tractable actions individuals can do for non-top cause areas that have amazing marginal utility (e.g. voting as a US citizen in a swing state)

Comment by Linch on Is EA compatible with technopessimism? · 2021-12-27T02:36:13.017Z · EA · GW

I recently convinced myself to be fairly technopessimistic in the short term (at least relative to some people I talk to in EA; unclear how this compares to e.g. online EAs or the population overall), though it's not a very exciting position and I don't know if I should prioritize writing up this argument over other things I can do that are productive. 

Comment by Linch on 2021 AI Alignment Literature Review and Charity Comparison · 2021-12-26T18:09:53.797Z · EA · GW

Great work as usual. Here's a minor comment before I dig in more substantively:

In the past I have had very demanding standards around Conflicts of Interest, including being critical of others for their lax treatment of the issue. Historically this was not an issue because I had very few conflicts. However this year I have accumulated a large number of such conflicts, and worse, conflicts that cannot all be individually publically disclosed due to another ethical constraint.

As such the reader should assume I could be conflicted on any and all reviewed organisations. [Emphasis mine]

I think the issue with the last line is that if everything is seen as a conflict of interest, then nothing is. I obviously don't know the details of your ethical constraints, but I think readers who care about COIs might still benefit from lower-granularity announcement tags of the following form:

  • I have mild conflicts of interest with this organization.
  • I have moderate or strong conflicts of interest with this organization.

If orgs are only split into 3 categories (no, mild, and moderate/strong), this may preserve your desired privacy/other ethical constraints while still leaking enough bits that donors who care a lot about COIs can productively use that information.

Comment by Linch on Should I fly instead of taking trains? · 2021-12-26T18:00:59.330Z · EA · GW

No, it could come from having a high-impact job (where nonzero marginal hours go into it) or from donating a fraction of the difference rather than all of the difference. 

I also think that if you believe that donations to other charities have higher marginal impact than donation to climate charities, it'd be less moral to donate to climate charities instead.

Comment by Linch on Movie review: Don't Look Up · 2021-12-26T17:52:02.032Z · EA · GW

As an anecdotal counterpoint, my girlfriend (not an EA, not an American) watched it with me and a friend on Christmas Eve and she said it was the best movie she saw this year, and enjoyed many parts of it (including parts I didn't like as much). 

Comment by Linch on Biosecurity needs engineers and materials scientists · 2021-12-17T14:56:23.525Z · EA · GW

Like Jackson mentioned, another biosecurity-relevant intervention where I think engineers would be useful would be in helping to design pandemic-safe refuges to help preserve civilization. My current belief as a non-expert is that this is quite high on I/N/T, though as usual there are nontrivial downside risks for a plan that's executed poorly. 

There are also cobenefits for shielding against risks other than bio, though my current best guess is that shielding against biorisk is the most important reason for refuges.

I'd be excited to talk to (civil) engineering types who are potentially interested in working on this, especially if they have prior experience running large projects and/or have at least some pre-existing network among biosecurity EAs.

Note that I'm very far from a biosecurity expert, and would not know many of the relevant crucial considerations.

Comment by Linch on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2021-12-15T11:37:14.869Z · EA · GW

I considered Evan Williams' paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.

If I were to write this summary again, I think I'd be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but which never made it into my notes) is on the importance of the speed of moral progress vs the sustainability of continued moral progress. In "implementation of improved values", the paper focuses a lot on the flexibility of setting up society to be able to make moral progress quickly, but naively I feel about as worried, or more worried, that society can make anti-progress and do horrifyingly dumb and new things in the name of good. So I'd be really worried about trajectory changes for the worse, especially longer-lasting ones ("lock-in" is a phrase that's in vogue these days).

I've also updated significantly on both the moral cost and the empirical probability of near-term extinction risks, and of course extinction is the archetypal existential risk that will dramatically curtail the value of the far future.

It feels weird getting my outline into the EA decade review, instead of the original paper, though I wouldn't be very surprised if at this point more EAs have read my outline than the paper itself.

I vaguely feel like Williams should get a lot more credit than he has received for this paper. Like EA should give him a prize or something, maybe help him identify more impactful research areas, etc.

Comment by Linch on Response to Recent Criticisms of Longtermism · 2021-12-14T02:00:16.820Z · EA · GW

Thanks for the response, from you and others! I think I had a large illusion of transparency about how obviously wrong Torres' critiques are to common-sense reason and morality. Naively I'd have thought that they'd come across as clearly dumb to target audiences the way (e.g.) the 2013 Charity Navigator critique of EA did. But if you and others think that many people who could potentially do useful work in EA (e.g., promising members of local groups, or academic collaborators at Cambridge) would otherwise have read Torres' article and been persuaded, then I agree that pointing out the obvious ways in which he misrepresents longtermism makes sense and is a good use of time!

I still vaguely have the gut feeling of "don't feed the energy creatures" where it's unwise to dedicate a lot of time to exhaustively try to engage with someone arguing in bad faith. So my first pass is that 1-2k words spent on quickly dissecting the biggest misrepresentations should be enough. But I think this feeling isn't very data- or reason- driven, and I don't have a principled policy of how applicable that feeling is in this case.

Comment by Linch on External Evaluation of the EA Wiki · 2021-12-13T21:55:54.714Z · EA · GW

The EA Wiki seems probably worth funding, but it is not the most ambitious project that the main person behind it could be doing.[emphasis mine]

This is a really minor point, but I think your phrasing here is overly euphemistic. "most ambitious project" taken very literally is a) a very high bar and b) usually not a bar we want people to go for [1]. To the extent I understand your all-things-considered views correctly, I would prefer phrasings like "While I think on the margin this project is worth spending EA dollars on, I do not believe that this project is higher EV than other likely candidate options for Pablo to work on" or stronger wordings like "I am reasonably confident that other likely career options for Pablo have significantly higher EV."

[1] A caricatured example of "most ambitious project" might look more like "become God-Emperor of the world" or "solve AI alignment in one week."
 

Comment by Linch on Response to Recent Criticisms of Longtermism · 2021-12-13T21:36:33.636Z · EA · GW

Hi! Thank you so much for this article. I have only skimmed it, but it appears substantive, interesting, and carefully done. 

Please don't take my question the wrong way, but may I ask what the motivation is for writing this article? Naively, this looks very detailed (43 minutes read according to the EAF, and you mention that you had to cut some sections) and possibly the most expansive public piece of research/communication you've done in effective altruism to date. While I applaud and actively encourage critiques of effective altruism and related concepts, as well as responses to them, my own independent impression is that the Torres pieces were somewhat sloppily argued. And while I have no direct representative evidence like survey results, my best guess based on public off-hand remarks and online private communications is that most other longtermist researchers broadly agree with me. So I'm interested in your reasoning for prioritizing this article over addressing other critiques, or generating your own critiques of longtermism, or other ways to summarize/introduce longtermism, or other ways to spend researcher time and effort. 

I want to re-iterate a general feeling of support and appreciation of someone a) taking critiques seriously and b) being willing to exhaustively analyze topics. I do think those are commendable attributes, and my brief skim of your article suggests that your responses are well-done.

Comment by Linch on An Emergency Fund for Effective Altruists · 2021-12-12T09:47:52.481Z · EA · GW

I wrote a prior version of this idea when I first got interested in EA: https://forum.effectivealtruism.org/posts/5jBa7chCZudMHWe39/donation-insurance

Like many semi-decent ideas, it was never actually implemented.

I like your conceptualization better, and I also think compared to 6 or so years ago, EA now has both more money and more operational capacity, so I feel pretty good about partially or mostly refunding the earlier donors, particularly the ones who have fallen on hard times.

Comment by Linch on An Emergency Fund for Effective Altruists · 2021-12-12T09:43:27.640Z · EA · GW

A lot of these choices seem unnecessarily punitive to me, not sure.

Comment by Linch on EA megaprojects continued · 2021-12-11T02:08:21.508Z · EA · GW

Some quick thoughts: 

  • Word on the grapevine is that many universities have really poor operations capacity, including R1 research universities in the US and equivalent ones in Europe. It's unclear to me if an EA university can do better (eg by paying for more ops staff, by thinking harder about incentives), but it's at least not implausible.
    • Rethink Priorities, Open Phil, and MIRI all naively appear to have better ops than my personal impression of what ops at EA-affiliated departments in research universities look like.
  • Promotion tracks in most (but not all) elite American universities are based on either a) (this is typical) paper publication record or b) (especially in liberal arts colleges) teaching. This can be bad if we (e.g.) want our researchers to study topics that may be unusually sensitive. So we might want to have more options like a more typical "research with management" track (like in thinktanks or non-academic EA research orgs), or prize funding like Thiel/All Souls (though maybe less extreme).
  • Having EAs work together seems potentially really good for wasting less time of both researchers and students.
  • Universities often just do a lot of things that I personally perceive as pretty immoral and dumb (eg in student admissions, possibly discriminating a lot against people of Asian descent or non-Americans, or having punitive mental health services). Maybe this is just youthful optimism, but I would hope that an EA university can do better on those fronts.

Comment by Linch on EA megaprojects continued · 2021-12-11T01:49:05.581Z · EA · GW

I wouldn't treat the upvotes there as much evidence; I think most EAs voting on these things don't have very good qualitative or quantitative models of xrisks and what it'd take to stop them. 

A reductio ad absurdum you might raise here is whether this is an indictment of the karma system in general. I don't think it is, because (to pick a sample of other posts on the frontpage): posts about burnout and productivity can simply invoke people's internal sense/vibe of what makes them worried, so just using affect isn't terrible; posts about internship/job opportunities can be voted on based on which jobs EAs are internally excited for themselves or their acquaintances/coworkers to work at; posts about detailed specific topics have enough details in them that people can try to evaluate the posts on their own merits; etc.