Posts

What types of content creation would be useful for local/university groups, if anything? 2020-02-15T21:52:00.803Z · score: 6 (1 votes)
How much will local/university groups benefit from targeted EA content creation? 2020-02-15T21:46:49.090Z · score: 21 (9 votes)
Should EAs be more welcoming to thoughtful and aligned Republicans? 2020-01-20T02:28:12.943Z · score: 31 (15 votes)
Is learning about EA concepts in detail useful to the typical EA? 2020-01-16T07:37:30.348Z · score: 41 (21 votes)
8 things I believe about climate change 2019-12-28T03:02:33.035Z · score: 58 (36 votes)
Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z · score: 11 (6 votes)
Linch's Shortform 2019-09-19T00:28:40.280Z · score: 5 (1 votes)
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z · score: 43 (22 votes)
Outcome of GWWC Outreach Experiment 2017-02-09T02:44:42.224Z · score: 14 (16 votes)
Proposal for a Pre-registered Experiment in EA Outreach 2017-01-08T10:19:09.644Z · score: 11 (11 votes)
Tentative Summary of the Giving What We Can Pledge Event 2015/2016 2016-01-19T00:50:58.305Z · score: 7 (7 votes)
The Bystander 2016-01-10T20:16:47.673Z · score: 5 (5 votes)

Comments

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-21T09:08:27.064Z · score: 2 (1 votes) · EA · GW

Thanks for the link and I agree that it's a valuable resource for a group starting out!

That said, I wonder if there is an illusion of transparency here and maybe we're talking past each other?

To be concrete, here are two problems I don't think the Hub's collection of resources currently solves.

1. My impression from looking through the content list on the EA Hub is that none of the sheets from the other groups can be directly adapted (even with significant modifications) for South Bay EA's audience, since the questions are either a) too broad and intro-level (like the CEA sheets) or b) built around a lot of mandatory reading that's arguably not realistic for a heterogeneous group with many working professionals (eg, the Harvard Arete material). That said, I think SB EA is open to trying more mandatory-reading/high-engagement formats with a subset of members. But right now, if we're interested in an intermediate-level discussion on a topic we haven't previously discussed (eg, geo-engineering, the hinge of history), we basically have to make the sheets ourselves.

Historically we've found this to be true even for common topics that the online EA community has discussed for many years.

To be clear, this isn't just a problem with the Hub; my group has been looking for a way to steal sheets from other groups since at least mid-2018. (It's possible our needs are really idiosyncratic, but it'd be a bit of a surprise if that's true?)

2. I don't think of any of the existing sheets or guiding material as a curriculum, per se. At least when we were creating the sheets, my co-organizers and I mostly "winged it" through a combination of intuition and rough guesses/surveys about what our members liked. At no point did we have a strong educational theory or build things with an eye towards the latest in the educational literature. I suspect other local groups are similar to us: when they created sheets and organized discussions, they tried their best with limited time and care, rather than working from a strong theory of education or change.

If I were to design things from scratch, I'd probably want to work in collaboration with, eg, education or edtech professionals who are also very familiar with EA (some of whom have expressed interest in this). It's possible that EA material is so out-of-distribution that familiarity with the pedagogical literature isn't helpful, but I feel like it's at least worth trying?

Comment by linch on Growth and the case against randomista development · 2020-02-19T02:58:28.611Z · score: 12 (4 votes) · EA · GW

(I talked more with brunoparga over PM).

For onlookers, I want to say I really appreciate bruno's top-level comment and that I have a lot of respect for bruno's contributions, both here and elsewhere. The comment I made two levels up was probably stronger than warranted and I really appreciate bruno taking it in stride, etc.

Comment by linch on Growth and the case against randomista development · 2020-02-17T08:46:19.680Z · score: 8 (5 votes) · EA · GW

On a meta-level, in general I think your conversation with lucy is overly acrimonious, and it would be helpful to identify clear cruxes, have more of a scout's mindset, etc.

My read of the situation is that you (and other EAs upvoting or downvoting content) have better global priors, but lucy has more domain knowledge in the specific areas they chose to talk about.

I do understand that it's very frustrating for you to be in a developing country and constantly see people vote against their economic best interests, so I understand a need to vent, especially in a "safe space" of a pro-growth forum like this one.

However, lucy likely also feels frustrated about saying things they believe to be true (or at least well-established beliefs in the field) and then getting what they may perceive as unjustified attacks from people with different politics or epistemic worldviews.

My personal suggestion is to have a stronger "collaborative truth-seeking attitude" and engage more respectfully, though I understand if either you or lucy aren't up for it, and would rather tap out.

Comment by linch on Growth and the case against randomista development · 2020-02-16T22:34:21.407Z · score: 3 (2 votes) · EA · GW

Apologies for the delayed response. I was surprised that (after several minutes of searching) I couldn't find a single source plotting Chinese literacy rates across time. However:

Prior to 1949, China faced a stark literacy rate of only 15 to 25 percent, as well as lacking educational facilities with minimal national curricular goals. But as the Chinese moved into the 1950s under a new leadership and social vision, a national agenda to expand the rate of literacy and provide education for the majority of Chinese youth was underway.

http://schugurensky.faculty.asu.edu/moments/1949china.html

In China, the literacy rate has developed from 79 percent in 1982 to 97 percent in 2010

https://www.statista.com/statistics/271336/literacy-in-china/

At least naively, this suggests a ~60-percentage-point absolute increase in literacy rates from 1949 to ~1980, which is higher than in the next 40 years (and necessarily so, since you cannot go above 100%).
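To spell out the arithmetic (a minimal sketch; I'm just re-deriving the numbers quoted from the two sources above, so treat the exact figures loosely):

```python
# Rough check on the literacy-rate arithmetic, using the figures quoted above.
pre_1949_low, pre_1949_high = 0.15, 0.25   # "15 to 25 percent" prior to 1949
rate_1982, rate_2010 = 0.79, 0.97          # Statista figures

gain_first_period_low = rate_1982 - pre_1949_high    # 0.54
gain_first_period_high = rate_1982 - pre_1949_low    # 0.64
gain_second_period = rate_2010 - rate_1982           # 0.18

print(f"~1949-1982: +{gain_first_period_low:.0%} to +{gain_first_period_high:.0%}")
print(f"1982-2010: +{gain_second_period:.0%}")
```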

I think the change here actually understates the impact of the first 30 years, since there's an obvious delay between the implementation of a schooling system and the adult literacy rate (plus at least naively, we would expect the Cultural Revolution to have wiped out some of the progress).

One thing to flag with cobbling sources together is that there's a risk of using different (implicit or explicit) operationalizations, so the exact number can't be relied upon as much.

However, I think it's significantly more likely than not that under most reasonable operationalizations of adult literacy, the first 30 years of China under CCP rule were more influential than the next 40.

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T22:03:21.763Z · score: 2 (1 votes) · EA · GW

Do you have a sense of whether/how much new material is needed, versus the material already existing and it just being a question of compiling everything together?

If the former, a follow-up question is which new material would be helpful. I'd be excited for you (or anybody else) to also answer this related question:

https://forum.effectivealtruism.org/posts/prrKzvCXuyRn4MHbu/what-types-of-content-creation-would-be-useful-for-local

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T21:54:41.085Z · score: 4 (2 votes) · EA · GW

Yeah, I guess that's the null hypothesis, though it's possible that people don't use the current resources because they're not "good" enough (eg, insufficiently accessible, too much jargon, too much local-context-specific stuff, etc).

Another thing to consider is "curriculum", ie, right now discussion sheets etc. are shared on the internet without tips on how to adapt them (since the local groups that wrote the sheets have enough local context/institutional knowledge to know how the sheets should be used).

An interesting analogy is the "instructor's edition" of textbooks, which iirc in the US K-12 system often has almost as much supplementary material as the textbook's content itself!

Comment by linch on What are the best arguments that AGI is on the horizon? · 2020-02-16T10:05:52.181Z · score: 7 (6 votes) · EA · GW
I realize that for the EA community to dedicate so many resources to this topic there must be good reasons to believe that AGI really is not too far away

First, a technicality: you don't have to believe that AGI/transformative AI is more likely than not to happen soonish, just that the probability is high enough to be worth working on[1].

But in general, here are several points of evidence for a relatively soon AGI:

1. The first is that we can look at estimates from AI experts (not necessarily AI Safety people). Their estimates for when Human-Level AI/AGI/TAI will arrive are all over the place, but roughly speaking the median is <60 years, so expert surveys say it's more likely than not to happen in our lifetimes[2]. You can believe that AI researchers are overconfident about this, but the bias could be in either direction (eg, there are plenty of examples in history where famous people in a field dramatically underestimated progress in that field).

2. People working specifically on AGI (eg, people at OpenAI, DeepMind) seem especially bullish about transformative AI, even relative to experts not working on AGI. Note that this is not uncontroversial, see eg, criticisms from Jessica Taylor, among others. Note also that there's a strong selection effect for the people who're the most bullish on AGI to work on it.

3. Within EA, people working on AI Safety and AI Forecasting have more specific inside view arguments. For example, see this recent talk by Buck and a bunch of stuff by AI Impacts. I find myself confused about how much to update on believable arguments vs. just using them as one number among many of "what experts believe".

4. A lot of people working in AI Safety seem to have private information that updates them towards shorter timelines. My knowledge of a small(?) subset of them does lead me to believe in somewhat shorter timelines than expert consensus, but I'm confused about whether this information (or the potential of this information) feeds into expert intuitions for forecasting, so it's hard to know if this is in a sense already "priced in." (see also information cascades, this comment on epistemic modesty). Another point of confusion is how much you should trust people who claim to have private information; a potentially correct decision-procedure is to ignore all claims of secrecy as BS.

_

[1] Eg, if you believe with probability 1 that AGI won't happen for 100 years, I think a few people might still be optimistic about working now to hammer out the details of AGI safety, but most people won't be that motivated. Likewise, if you believe (as I think Will MacAskill does) that the probability of AGI/TAI in the next century is 1%, I think many people may believe there are marginally more important long-termist causes to work on. How high X has to be in your phrase "X% chance of AGI in the next Y years" is a harder question.

[2] "Within our lifetimes" is somewhat poetic but obviously the "our" is doing a lot of the work in that phrase. I'm saying that as an Asian-American male in my twenties, I expect that if the experts are right, transformative AI is more likely than not to happen before I die of natural causes.

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T08:52:31.987Z · score: 2 (1 votes) · EA · GW

No worries, thanks for pointing out a resource.

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T08:14:23.132Z · score: 4 (2 votes) · EA · GW
Have you shared these with other local groups before now? Have they been adopted or adapted there?

I know Stanford EA sometimes uses some of our old sheets with their own modifications[1]. I believe they don't focus as much on the type of discussion-focused meetups that we do anymore, so it's unclear whether they have solid metrics on how helpful the sheets are for them (though at least we're saving them some time).

I've also shared our sheets (notably, none of them are designed with any group in mind other than SB) online a few times. A lot of other local group organizers appeared excited about them, but nobody followed up, so my guess is that uptake elsewhere is probably nonexistent or pretty low [2].

[1] SB EA sort of grew out of Stanford EA so it makes a lot of sense that our structure/content is sufficiently similar that it's usable for their purposes.

[2] Notably, I wasn't really tracking that Stanford EA used our sheets until I explicitly asked a few weeks ago, so I guess it's unlikely though not impossible that, eg, a few groups saw my posts on FB or our material on the EA Hub and adapted our sheets but never bothered contacting us.

Comment by linch on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T07:37:33.255Z · score: 2 (1 votes) · EA · GW

Thanks! Though this seems more like a comment than an answer.

Comment by linch on My personal cruxes for working on AI safety · 2020-02-14T06:40:14.106Z · score: 4 (2 votes) · EA · GW

(Only attempting to answer this because I want to practice thinking like Buck, feel free to ignore)

Now that I think about it, I don't think I understand how your definition of AGI is different to the results of whole-brain emulation, apart from the fact that they used different ways to get there

My understanding is that Buck defines AGI to point at a cluster of things such that technical AI Safety work (as opposed to, eg., AI policy work or AI safety movement building, or other things he can be doing) is likely to be directly useful. You can imagine that "whole-brain emulation safety" will look very different as a problem to tackle, since you can rely much more on things like "human values", introspection, the psychology literature, etc.

Comment by linch on What posts you are planning on writing? · 2020-02-11T01:01:32.722Z · score: 5 (3 votes) · EA · GW

1. Framing issues with the unilateralist's curse.

I'd like to expand this shortform comment into a more detailed post with slightly better examples, some tentative conclusions, and a clear takeaway for what types of future research would be desirable.

2. A Post on Power Law distributions

Two possible posts here:

A. Power Law Distributions? It's less likely than you think.

a. Basically, lots of EAs argue that the distribution over {charitable organizations, interventions, people, causes} is ~a power law.

b. I claim that this is unlikely. The distribution over most things that matter seems to be heavy-tailed, but less extreme than a power law (a rough simulation of the distinction is sketched below, after B).

c. outline here: https://docs.google.com/document/d/17n27ygtUloGrFGqJyOV0Q-yUdGrK5HQoEI-de8lXTy0/edit

d. Unfortunately understanding this well involves some mathematical machinery and a lot of real-world stats that's been somewhat hard for me to make progress on (happy to hand it off to somebody else!)

B. What to do if we live in a power law world

The alternative post is to argue that if we were to take the power-law hypothesis about EA-relevant things seriously, we should change our actions dramatically in key ways. I think it might be helpful to start a conversation about this.
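As flagged in 2A, here's a minimal toy simulation of the distinction I have in mind (my own illustration with arbitrary parameters, not taken from the outline doc): a Pareto distribution is a true power law, while a lognormal is heavy-tailed but lighter, and the difference shows up in how much of the total the very top observations account for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy parameters: a classical Pareto (power law) with alpha = 1.2,
# vs. a lognormal with sigma = 1.0. Both are heavy-tailed distributions.
pareto_samples = 1 + rng.pareto(1.2, n)       # Pareto with x_min = 1
lognormal_samples = rng.lognormal(0.0, 1.0, n)

def top_share(x, frac=0.001):
    """Fraction of the total held by the top `frac` of observations."""
    k = max(1, int(len(x) * frac))
    x = np.sort(x)
    return x[-k:].sum() / x.sum()

# The power law's top 0.1% holds a dramatically larger share of the total
# than the lognormal's top 0.1%, even though both have "fat" tails.
print("Pareto top-0.1% share:   ", round(top_share(pareto_samples), 3))
print("Lognormal top-0.1% share:", round(top_share(lognormal_samples), 3))
```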

3. Thoughts on South Bay EA

I cofounded and co-organized South Bay EA, and had a pretty comprehensive write-up about what futures we should be planning for. My co-organizers and I are still debating whether to anonymize and share the write-up to benefit future organizers.

4. EA SF tentative plan

Similarly, I've vaguely been thinking of having a public write-up about plans for EA San Francisco so it's easier to a) get feedback through external criticism and b) find collaborators/potential co-organizers online rather than entirely through my network.

Comment by linch on Growth and the case against randomista development · 2020-02-10T23:20:12.298Z · score: 3 (2 votes) · EA · GW

I enjoyed reading Development as Freedom by Sen in undergrad. It was an interesting read for me to get an understanding of non-consequentialist approaches to development, though I still think he underestimated the value of flow-through effects from GDP/scientific progress.

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-10T23:18:10.890Z · score: 3 (2 votes) · EA · GW
Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!

Yeah I think that'd be useful to do.

I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees)

I'm glad it was helpful!

Comment by linch on Growth and the case against randomista development · 2020-02-09T13:32:47.731Z · score: 5 (3 votes) · EA · GW
Could I please have a source on China being that good, especially pre-Deng Xiaoping's reforms?

The life expectancy of China has consistently gone up since 1960[1] (where the World Bank data starts).

There is a larger change, in absolute terms, from 1960 to 1980 (roughly when the reforms seriously started) than from 1980 to 2017. The increase is from 44.3 in 1960 to 66.4 in 1979, which is much larger than that of the rest of the world (52.6 to 62.6). To put it in perspective, if you were an average[2] Chinese person, your life expectancy rose roughly as fast as you aged for 20 full years, so if the curve had continued you'd never die.
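Making the arithmetic explicit (a quick sketch using the World Bank numbers quoted above):

```python
# Life expectancy gains, China vs. the rest of the world (World Bank figures above).
china_1960, china_1979 = 44.3, 66.4
world_1960, world_1979 = 52.6, 62.6
years = 1979 - 1960                      # 19 calendar years

china_gain = china_1979 - china_1960     # +22.1 years
world_gain = world_1979 - world_1960     # +10.0 years

# ~1.16 years of life expectancy gained per calendar year in China, i.e. life
# expectancy rose roughly as fast as a person aged over that stretch.
print(round(china_gain / years, 2), round(world_gain / years, 2))
```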

Of course, this is partially because the low-hanging fruit gets plucked first, but it's nonetheless substantive evidence that public health before the reforms must have done something right.

[1] https://data.worldbank.org/indicator/SP.DYN.LE00.IN?locations=CN

[2] It's somewhat misleading to use the average since some of the advances came from reductions in infant mortality, but still.

Comment by linch on Are we living at the most influential time in history? · 2020-02-09T13:19:07.890Z · score: 3 (2 votes) · EA · GW

Wikipedia gives the physicist's version, but EAs (and maybe philosophers?) use it more broadly.

https://en.wikipedia.org/wiki/Copernican_principle

The short summary I use to describe it is that "we" are not that special, for various definitions of the word we.

Some examples on FB.

Comment by linch on Snails used for human consumption: The case of meat and slime · 2020-02-07T08:23:47.370Z · score: 9 (4 votes) · EA · GW

It was touched upon very briefly here:

https://forum.effectivealtruism.org/posts/C8247akhZpyMXkRb3/snails-used-for-human-consumption-the-case-of-meat-and-slime#Are_snails_sentient_individuals__If_so__how_should_they_be_morally_considered_

Comment by linch on What posts you are planning on writing? · 2020-02-07T03:45:33.752Z · score: 2 (1 votes) · EA · GW

FWIW, I think tutoring EAs can be a valuable intervention, though it may never be big enough for an org (or possibly even a single person) to work on full-time.

Comment by linch on evelynciara's Shortform · 2020-02-07T01:01:06.646Z · score: 3 (2 votes) · EA · GW

I don't have statistics, but my best guess is that if you sample random points in time across all public buses running in America, more than 3/4 of the time less than half of the seats are filled.

This is extremely unlike my experiences in Asia (in China or Singapore).

Comment by linch on When to post here, vs to LessWrong, vs to both? · 2020-02-06T07:39:35.608Z · score: 3 (2 votes) · EA · GW
On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory

It's fun to see how different the EA Forum (and maybe the community as a whole?) is from 6 years ago, since these days all three topics seem like fair game.

Comment by linch on aarongertler's Shortform · 2020-02-06T03:13:22.042Z · score: 8 (4 votes) · EA · GW

I think this makes it clear that people are deliberately being anonymous rather than carrying over old internet habits.

Also I think there's a possibility of information leakage if someone tries to be too cutesy with their pseudonyms. Eg, fluttershy_forever might lead someone to look for similar names in the My Little Pony forums, say, where the user might be more willing to "out" themselves than if they were writing a critical piece on the EA forum.

This is even more true for narrower interests. My Dominion username is a Kafka+Murakami reference, for example.

There's also a possibility of doxing in the other direction, where eg, someone may not want their EA Forum opinions to be associated with bad fanfiction they wrote when they were 16.

Comment by linch on aarongertler's Shortform · 2020-02-06T02:58:19.583Z · score: 2 (1 votes) · EA · GW

One possibility is to do what Google Docs does: pick an animal at near-random (ideally a memorable one), and be AnonymousMouse, AnonymousDog, AnonymousNakedMoleRat, etc.
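A minimal sketch of the kind of thing I mean (the animal list and naming scheme here are just illustrative):

```python
import secrets

# Illustrative only: assign a memorable, near-random pseudonym instead of
# letting users pick one that might leak information about them.
ANIMALS = ["Mouse", "Dog", "NakedMoleRat", "Quokka", "Axolotl", "Capybara"]

def anonymous_name() -> str:
    return "Anonymous" + secrets.choice(ANIMALS)

print(anonymous_name())  # eg, "AnonymousNakedMoleRat"
```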

Comment by linch on evelynciara's Shortform · 2020-02-06T02:57:00.324Z · score: 3 (2 votes) · EA · GW

Interesting post! Curious what you think of Jeff Kaufman's proposal to make buses more dangerous in the first world, the idea being that buses in the US are currently too far in the "safety" direction of the safety vs. convenience tradeoff.

GiveWell also has a standout charity (Zusha!) working in the opposite direction, trying to get public service vehicles in Kenya to be safer.

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-06T01:44:40.339Z · score: 2 (1 votes) · EA · GW
To wit, I think a lot of retorts to Abraham's argument appear to me to be of the form "well, this seems rather unlikely to happen", whereas I don't think such an argument actually succeeds.

Peter, do you find my arguments in the comments below persuasive? Basically I tried to argue that the relative probability of extremely good outcomes is much higher than the relative probability of extremely bad outcomes, especially when weighted by moral value. (And I think this is sufficiently true for both classical utilitarians and people with a slight negative leaning).

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-06T01:18:02.652Z · score: 4 (3 votes) · EA · GW

tl;dr: My above comment relies on longtermism + total utilitarianism (but I attempted to be neutral on the exact moral goods that compose the abstraction of "utility"). With those two strong assumptions + a bunch of other more reasonable ones, I think you can't escape thinking about science-fictiony scenarios. I think you may not need to care as much about science-fictiony scenarios with moderate probabilities (but extremely high payoffs in expected utility) if your views are primarily non-consequentialist, or if you're a consequentialist but the aggregative function is not additive.

I also appreciate your thoughtful post and responses!

This isn't usually assumed in the longtermist literature

I've read relatively little of it, but my understanding is that the point of the academic literature on longtermism (which does not usually assume total utilitarian views?) is to show that longtermism is compatible with (and in some cases required by) a broad range of moral views that are considered respectable within the academic literature.

So they don't talk about science-fictiony stuff, since their claim is that longtermism is robustly true under (or compatible with) reasonable academic views in moral philosophy.

The point of my comment is that longtermism + total utilitarianism must lead you to think about these science-fictiony scenarios that have significant probability, rather than that longtermism itself must lead you to consider them.

I guess part of the issue here is you could have an incredibly tiny credence in a very specific number of things being true (the present being at the hinge of history, various things about future sci-fi scenarios), and having those credences would always justify deferral of action.

I think if the credence is sufficiently low, either moral uncertainty (since most people aren't total utilitarians with 100% probability) or model uncertainty will get you to do different actions.

At very low probabilities, you run into issues like Pascal's Wager and Pascal's Mugging, but right now the future is so hazy that I think it's too hubristic to say anything super-concrete about it. I'm reasonably confident that I'm willing to defend that all of the claims I've made above have percentage points of probability[1], which I think is well above the threshold for "not willing to get mugged by low probabilities."

I suspect that longtermism + moral axiologies that are less demanding/less tail-driven than total utilitarianism will rely less on the speculative/weird/science-fictiony stuff. I haven't thought about them in detail.

To demonstrate what I roughly mean, I made up two imaginary axiologies (I think I can understand other nonhedonic total utilitarian views well enough to represent them faithfully, but I'm not well-read enough on non-total utilitarian views, so I made up fake ones rather than risk accidentally strawmanning existing views):

1. An example of an axiology that is long-termist but not utilitarian is one where you want to maximize the *probability* that the long-term future will be a "just" world, where "justice" is a binary variable (rather than a moral good that can be maximized). If you have some naive prior that this has a 50-50 chance of happening by default, then you might want to care somewhat about extremely good outcomes for justice (eg, creating a world which can never backslide into an unjust one) and extremely bad outcomes (avoiding a dictatorial lock-in that would necessitate a permanently unjust society).

But your decisions are by default going to focus more on the median outcomes rather than the tails. Depending on whether you think animals are treated justly right now, this may entail doing substantial work on farmed animals (and whether to work on wild animal welfare depends on positive vs negative conceptions of justice).

2. An example of an axiology that is long-termist and arguably utilitarian but not total utilitarian is one where, instead of maximizing Sum(utility across all beings), you maximize Average(log(utility per being)). In such a world, the tails dominate if and only if there's a plausible case for why the tails will have extremely large outcomes even on a log scale. I think this is still technically possible, but you need much stronger assumptions or better arguments than the ones I outlined above.
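To make the contrast concrete, here's a toy calculation with made-up numbers: a 1%-probability tail outcome dominates the expectation under Sum(utility), but barely moves it under Average(log(utility)).

```python
import math

# Toy numbers (purely illustrative): a "median" future where each being has
# utility 10, and a 1%-probability "tail" future where each being has utility 10^6.
# Population size cancels out, so compare per-being expectations.
p_tail = 0.01
u_median, u_tail = 10, 1_000_000

# Total utilitarian (per being): the tail dominates the expectation.
ev_sum = (1 - p_tail) * u_median + p_tail * u_tail     # ~10,009.9 vs. a baseline of 10

# Average(log(utility)): the tail barely moves the expectation.
ev_avg_log = (1 - p_tail) * math.log(u_median) + p_tail * math.log(u_tail)

print(ev_sum, round(ev_avg_log, 2), round(math.log(u_median), 2))  # ~2.42 vs. ~2.30 baseline
```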

I'd actually be excited for you or someone else with non-total utilitarian views to look into what other moral philosophies (that people actually believe) + longtermism will entail.

[1] A way to rephrase "there's less than a 1% probability that our descendants will wish to optimize for moral goods" is that "I'm over 99% confident that our descendants wouldn't care about moral goods, or care very little about them." And I just don't think we know enough about the longterm future to be that confident about anything like that.

Comment by linch on Joan Gass: How to Build a High-Impact Career in International Development · 2020-02-05T11:44:11.380Z · score: 2 (1 votes) · EA · GW

Hi Nathan, I think it is a log function (most likely natural log, but could have been shorthand for some other base).

People often use this when figuring out the benefits of GDP growth, consumption growth, etc. (eg, I think GiveWell assumes that a doubling of wealth is ~equally good no matter what the baseline wealth is).

The approximate reasoning for this is that we expect there to be diminishing marginal returns to wealth per capita, and there is some weak empirical evidence that a log function specifically fits the data reasonably well.
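A minimal illustration of why log utility implies "a doubling is ~equally good at any baseline": the gain from doubling consumption is log(2c) - log(c) = log(2), independent of c.

```python
import math

# Under log utility, the utility gain from a doubling is the same at any baseline.
for consumption in (500, 5_000, 50_000):    # arbitrary baseline levels
    gain = math.log(2 * consumption) - math.log(consumption)
    print(consumption, round(gain, 3))      # always ~0.693, i.e. ln(2)
```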

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-05T10:52:29.575Z · score: 3 (2 votes) · EA · GW
If sentient tools are adapted to specific conditions (e.g. evolved), a random change in conditions is more likely to be detrimental than beneficial.

I don't think it's obvious that this is in expectation negative. I'm not at all confident that negative valence is easier to induce than positive valence today (though I think it's probably true), but even conditional on that being true, I also think it's a weird quirk of biology that negative valence may be more common than positive valence in evolved animals. Naively, I would guess that the experiences of tool AIs (which we may wrongly believe not to be sentient, or are otherwise callous towards) are in expectation zero. However, this may be enough for hedonic utilitarians with a moderate negative lean (3-10x, say) to believe that suffering overrides happiness in those cases.

I want to make a weaker claim, however: per unit of {experience, resource consumed}, I'd expect intentionally optimized experience to be multiple orders of magnitude greater than incidental suffering or happiness (or other relevant moral goods).

If this is true, then to believe that the *total* expected unintentional suffering (or happiness) of tool AIs exceeds that of intentional experiences of happiness (or suffering), you need to believe that the sheer amount of resources devoted to these tools is several orders of magnitude greater than the resources devoted to optimized experience.

This seems possible but not exceedingly likely.

If I was a negative utilitarian, I might think really hard about trying to prevent agents deliberately optimizing for suffering (which naively I would guess to be pretty unlikely but not vanishingly so).

Also, individuals who are indifferent to or unaware of negative utility (generally or in certain things) may threaten you with creating a lot of negative utility to get what they want. EAF is doing research on this now.

Yeah that's a good example. I'm glad someone's working on this!

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-05T09:50:29.249Z · score: 2 (1 votes) · EA · GW

Yes, thanks for the catch!

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-05T07:55:43.253Z · score: 15 (8 votes) · EA · GW

cross-posted from FB.

Really appreciate the time you took to write this, and the detailed analysis!

That said, I strongly disagree with this post. The tl;dr of the post is:

"Assume total utilitarianism and longtermism is given. Then given several reasonable assumptions and some simple math, wild animal welfare will dominate human welfare in the foreseeable future, so total utilitarian longtermists should predominantly be focused on animal welfare today."

I think this is wrong, or at least the conclusions don't follow from the premises, mostly due to weird science-fictiony reasons.

The rest of my rebuttal will be speculative and science-fictiony, so if you prefer reading comments that sound reasonable, I encourage you to read elsewhere.

Like the post I'm critiquing, I will assume longtermism and total utilitarianism for the sake of the argument, and not defend them here. (Unlike the poster, I personally have a lot of sympathy towards both beliefs).

By longtermism, I mean a moral discount rate of epsilon (epsilon >= 0, epsilon ~= 0). By total utilitarianism, I mean positing the existence of moral goods ("utility") that we seek to maximize, with an additive aggregation function. I'm agnostic for most of this response about what the moral goods in question are (but will try to give plausible ones where necessary to explain subpoints).

I have two core, mostly disjunctive arguments for things the post missed about where value in the long-term future lies:

A. Heavy tailed distribution of engineered future experiences
B. Cooperativeness with Future Agents

A1. Claim: Biological organisms today are mostly not optimized/designed for extreme experiences.

I think this is obviously true. Even within the same species (humans), there is wide variance in reported happiness, for example, among people living in ~the same circumstances, and most people will agree that this represents wide variance in actual happiness (rather than people being entirely mistaken about their own experiences).

Evolutionarily, we're replicator machines, not experience machines.

This goes for negative as well as positive experiences. Billions of animals are tortured in factory farms, but the telos of factory farms isn't torture, it's so that humans get access to meat. No individual animal is *deliberately* optimized by either evolution or selective breeding to suffer.

A2. Claim: It's possible to design experiences that have much more utility than anything experienced today.
I can outline two viable paths (disjunctive):

A2a. Simulation
For this to hold, you have to believe:
A2ai. Humans or human-like things can be represented digitally.
I think there is philosophical debate, but most people who I trust think this is doable.

A2aii. Such a reproduction can be cheap

I think this is quite reasonable since, again, existing animals (including human animals) are not strongly optimized for computation.

A2aiii. Simulated beings are capable of morally relevant experiences, or otherwise of producing goods of intrinsic moral value.

Some examples may be lots of happy experiences, or (if you have a factor for complexity) lots of varied happy experiences, or other moral goods that you may wish to produce, like great works of art, deep meaningful relationships, justice, scientific advances, etc.

A2b. Genetic engineering
I think this is quite viable. The current variance among human experiences is an existence proof. There are lots of seemingly simple ways to improve on current humans so that we suffer less and are happier (eg, there's lots of unnecessary pain during childbirth just because we've evolved to be bipedal + have big heads).

A3. Claim: Our descendants may wish to optimize for positive moral goods.

I think this is a precondition for EAs and do-gooders in general "winning", so I almost treat the possibility of this as a tautology.

A4. Claim: There is a distinct possibility that a high % of vast future resources will be spent on building valuable moral goods, or the resource costs of individual moral goods are cheap, or both.

A4ai. Proportions: This mostly follows from A3. If enough of our descendants care about optimizing for positive moral goods, then we would reasonably expect them to devote a lot of their resources to producing more of them. Eg, 1% of resources being spent on moral goods isn't crazy.

A4aii. Absolute resources: Conservatively assuming that we never leave the solar system, right now ~1/10^9 of the Sun's energy reaches Earth. Of that one-billionth of light that reaches Earth, less than 1% is used by plants for photosynthesis (~all of our energy needs, with the exception of nuclear power, come from extracting energy that at one point came from photosynthesis -- the extraction itself being a particularly wasteful process. Call it another 1-2 orders of magnitude of discount?).

All of life on Earth uses <1/10^11 (less than one hundred-billionth!) of the Sun's energy. Humans use maybe 1/10^12 - 1/10^13 of the Sun's energy.

It's not crazy that one day we'll use multiple orders of magnitude of energy more for producing moral goods than we currently spend doing all of our other activities combined.
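A rough back-of-the-envelope check on those fractions (my own approximate figures, order-of-magnitude only):

```python
# Order-of-magnitude check; all figures are rough public estimates.
sun_output_w = 3.8e26         # total solar luminosity, watts
sunlight_on_earth_w = 1.7e17  # sunlight intercepted by Earth, watts
photosynthesis_w = 1e14       # very rough global photosynthetic energy capture
human_energy_use_w = 2e13     # ~humanity's primary energy use (~18-19 TW)

print(sunlight_on_earth_w / sun_output_w)      # ~4e-10, i.e. roughly 1/10^9
print(photosynthesis_w / sunlight_on_earth_w)  # well under 1% of the light reaching Earth
print(photosynthesis_w / sun_output_w)         # ~3e-13, consistent with "< 1/10^11"
print(human_energy_use_w / sun_output_w)       # ~5e-14, near (slightly below) the 1/10^13 end of the range
```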

If, for example, you think the core intrinsic moral good we ought to optimize is "art", right now Arts and Culture compose 4% of the US GDP (Side note: this is much larger than I would have guessed), and probably a similar or smaller number for world GDP.

A4b. This mostly follows from A2.

A4bi. Genetic engineering: In the spirit of doing things with made-up numbers, it sure seems likely that we can engineer humans to be 10x happier, suffer 10x less, etc. If you have weird moral goals (like art or scientific insight), it's probably even more doable to genetically engineer humans to be 100x+ better at producing art, coming up with novel mathematics, etc.

A4bii. It's even more extreme with digital consciousness. The upper bound for cost is however much it costs to emulate (genetically enhanced) humans, which is probably at least 10x cheaper than the biological version, and quite possibly much less than that. But in theory, many other advances can be made by not limiting ourselves to the human template, and instead abstractly considering what moral goods we want and how to get there.

A5. Conclusion: for total utilitarians, it seems likely that A1-A4 will lead us to believe that expected utility in the future will be dominated by heavy-tailed scenarios of extreme moral goods.

A5a. Thus, people now should work on some combination of preventing existential risks and steering our descendants (wherever possible) to those heavy-tailed scenarios of producing lots of positive moral goods.

A5b. The argument that people may wish to directly optimize for positive utility, but nobody actively optimizes for negative utility, is in my mind some (and actually quite strong) evidence that total or negative-leaning *hedonic* utilitarians should focus more on avoiding extinction + ensuring positive outcomes than on avoiding negative outcomes.

A5c. If you're a negative or heavily negative-leaning hedonic utilitarian, then your priority should be to prevent the extreme tail of really really bad engineered negative outcomes. (Another term for this is "S-Risks")

B. Short argument for Future Agent Cooperation:

B1. Claim: (Moral) Agents will have more power in the future than they do today.

This is in contrast to sub- or above- agent entities (eg, evolution), which held a lot of sway in the past.

B1a. As an aside, this isn't central to my point that we may expect more computational resources in the future to be used by moral agents rather than just moral patients without agency (like factory farmed animals today).

B2. Claim: Most worlds we care about are ones where agents have a lot of power.

Worlds ruled by non-moral processes rather than agents probably have approximately zero expected utility.

This is actually a disjunctive claim from B1. *Even* if we think B1 is wrong, we still want to care more about agent-ruled worlds, since the futures containing them are more important.

B3. Claim: Moral uncertainty may lead us to defer to future agents on questions of moral value.

For example, total utilitarians today may be confused about how to measure utility. To the extent that either moral objectivity or "moral antirealism + getting better results after a long reflection" is true, our older and wiser descendants (who may well be continuous with us!) will have a better idea of what to do than we do.

B4. Conclusion: While this conclusion is weaker than the previous point, there is prima facie reason to be very cooperative with the reasonable goals that future agents may have.

C. Either A or B should be substantial evidence that a lot of future moral value comes from thinking through (and getting right) weird futurism concerns. This is in some contrast to doing normal, "respectable" forecasting research and just following expected trendlines like population numbers.

D. What does this mean about animals? I think it's unclear. A lot of animal work today may help with moral circle expansion.

This is especially true if you think (as I do, but eg, Will MacAskill mostly does not) that we live in a "hinge of history" moment where our values are likely to be locked in in the near future (next 100 years), AND you think that the future is mostly composed of moral patients that are themselves not moral agents (I do not), AND that our descendants are likely to be "wrong" in important ways relative to our current values (I'm fairly neutral on this question, and slightly lean against).

Whatever you believe, it seems hard to escape the conclusion that "weird, science-fictiony scenarios" have non-trivial probability, and that longtermist total utilitarians can't ignore thinking about them.

Comment by linch on Volunteering isn't free · 2020-02-04T11:03:21.463Z · score: 7 (5 votes) · EA · GW

I think this is consistent with the general perception I've had about why charities have volunteers: the work the volunteers do is net negative, but it's helpful to get them invested in the charity so they donate later. To a lesser extent, volunteers are sometimes a good pool from which to draw future employees.

There's an analogous situation in the corporate world, where interns do net negative work but companies have them anyway, because interns are useful as a hiring pool to draw future employees from.

Comment by linch on Linch's Shortform · 2020-02-04T08:17:46.126Z · score: 1 (1 votes) · EA · GW

That was not my intent, and it was not the way I parsed Caplan's argument.

Comment by linch on Should Longtermists Mostly Think About Animals? · 2020-02-04T02:31:41.865Z · score: 3 (3 votes) · EA · GW
If I did believe animals were going to be brought on space settlement, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so,

One way this could happen is if the deep ecologists or people who care about life-in-general "win", and for some reason have an extremely strong preference for spreading biological life to the stars without regard to sentient suffering.

I'm pretty optimistic this won't happen, however. I think by default we should expect that the future (if we don't die out) will be predominantly composed of humans and our (digital) descendants, rather than things that look like wild animals today.

Another thing that the analysis leaves out is that even aside from space colonization, biological evolved life is likely to be an extremely inefficient method of converting energy to positive (or negative!) experiences.

Comment by linch on Linch's Shortform · 2020-02-04T02:23:17.497Z · score: 1 (1 votes) · EA · GW
Almost any situation with enough disutility to counter GDP doubling seems like it would, paradoxically, involve conditions that would reduce GDP (war, large-scale civil unrest, huge tax increases to support a bigger welfare state).

I think there was substantial ambiguity in my original phrasing, thanks for catching that!

I think there are at least four ways to interpret the statement.

It's not hard for me to imagine situations bad enough to be worse than doubling GDP is good

1. Interpreting it literally: I am physically capable (without much difficulty) of imagining situations that are bad to a degree worse than doubling GDP is good.

2. Caplan gives an argument for the doubling of GDP that seems persuasive, and claims this is enough to override a conservatism prior. But I'm not confident that the argument is true/robust, and I think it's reasonable to believe that there are possible consequences bad enough that even assigning the argument >50% probability (or >80%) is not automatically enough to override a conservatism prior, at least not without thinking about it a lot more.

3. Assume by construction that world GDP will double in the short term. I still think there's a significant chance that the world will be worse off.

4. Assume by construction that world GDP will double, and stay 2x baseline until the end of time. I still think there's a significant chance that the world will be worse off.

__

To be clear, when writing the phrasing, I meant it in terms of #2. I strongly endorse #1 and tentatively endorse #3, but I agree that if you interpreted what I meant as #4, what I said was a really strong claim and I need to back it up more carefully.

Comment by linch on Linch's Shortform · 2020-01-30T01:58:18.660Z · score: 16 (7 votes) · EA · GW

cross-posted from Facebook.

Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about Open Borders (from a high prior belief in its value).

Before reading the book, I was already aware of the core arguments (eg, Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic stuff about doubling GDP).

I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with.

It mostly did not.

The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argued that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. I think it would have updated me towards open borders if I believed in stronger "weight all mainstream ethical theories equally" moral uncertainty, or if I had previously had a strong belief in a moral theory that I believed was against open borders.

However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders. Most of my concerns with open borders are related to Chesterton's fence, and Caplan's counterarguments were in three forms:

1. Doubling GDP is so massive that it should override any conservatism prior.
2. The US historically had Open Borders (pre-1900) and it did fine.
3. On the margin, increasing immigration in all the American data Caplan looked at didn't seem to have catastrophic cultural/institutional effects that naysayers claim.

I find this insufficiently persuasive.
___
Let me outline the strongest case I'm aware of against open borders:
Countries are mostly not rich and stable because of the physical resources, or because of the arbitrary nature of national boundaries. They're rich because of institutions and good governance. (I think this is a fairly mainstream belief among political economists). These institutions are, again, evolved and living things. You can't just copy the US constitution and expect to get a good government (IIRC, quite a few Latin American countries literally tried and failed).

We don't actually understand what makes institutions good. Open Borders means the US population will ~double fairly quickly, and this is so "out of distribution" that we should be suspicious of the generalizability of studies that look at small marginal changes.
____
I think Caplan's case is insufficiently persuasive because a) it's not hard for me to imagine situations bad enough to be worse than doubling GDP is good, b) pre-1900 US was a very different country/world, and c) this "out of distribution" issue is significant.

I would find Caplan's book more persuasive if he used non-US datasets more, especially data from places where immigration is much higher than in the US (maybe within the EU or ASEAN?).

___

I'm still strongly in favor of much greater labor mobility on the margin for both high-skill and low-skill workers. Only 14.4% of the American population are immigrants right now, and I suspect the institutions are strong enough that changing the number to 30-35% is net positive. [EDIT: Note that this is intuition rather than something backed by empirical data or explicit models]

I'm also personally in favor (even if it's negative expected value for the individual country) of a single country (or a few) trying out open borders for a few decades and the rest of us learning from their successes and failures. But that's because of an experimentalist social-scientist mindset where I'm perfectly comfortable with "burning" a few countries for the greater good (countries aren't real, people are), and I suspect the governments of most countries aren't thrilled about this.

___

Overall, 4/5 stars. Would highly recommend to EAs, especially people who haven't thought much about the economics and ethics of immigration.

Comment by linch on Seeking Advice: Arab EA · 2020-01-30T01:12:48.910Z · score: 15 (8 votes) · EA · GW

As an addendum, if you did not use a pseudonym, please ask one of the moderators or admins to change your name on this forum and/or delete this post.

Comment by linch on Love seems like a high priority · 2020-01-24T20:58:47.940Z · score: 3 (2 votes) · EA · GW

Hmm, so to be explicit, the claim I'm making is that marriage has a causal effect on mortality, mediated through complications in childbirth.

In Pearl's do-calculus, this is

1. Marriage -> greater rates of childbirth -> Death.

I haven't fully established this connection. The main way this argument fails is if it turns out marriage does not increase rates of childbirth. I assumed that marriage increases childbirth, but I admit to not looking into it.

I think when people are thinking about a strong causal relationship between marriage and mortality, they are mostly thinking of other mediating variables (weak claim, since most things are not childbirth). So:

2. Marriage -> (collection of other mediating variables) -> Death.

However, based on your and Liam's comments, I'm starting to suspect that both of you mean causality in a much more direct sense. In that framework, perhaps the only relationship that will be considered "causal" should be:

3. Marriage -> Death (no mediating variable).

If that is the case for your definition of causality, I agree that #3 is pretty unlikely. I also think it's too strong since it probably rules out eg, smoking causing death (since you can't use the mediating variable of lung cancer).

Comment by linch on Love seems like a high priority · 2020-01-24T06:29:05.547Z · score: 2 (2 votes) · EA · GW

I mean, there's literally a strong causal relationship between marriage and having a shorter lifespan.

I assume sociologists are usually referring to other effects however.

Comment by linch on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-24T02:14:04.372Z · score: 10 (5 votes) · EA · GW

Credible Interval:

In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution.[1] The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics,[2] although they differ on a philosophical basis:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.

Credence:

Credence is a statistical term that expresses how much a person believes that a proposition is true

Why this matters:

It seems like a lot of the questions EAs are interested in involve subjective Bayesian probabilities. A lot of people misuse the frequentist term "confidence interval" for these purposes (to be fair, this isn't just a problem with EAs/rationalists; I've seen scientists make this mistake too, akin to how the p-value is commonly misunderstood). I think it's helpful to use the right statistical jargon so we can more easily engage with the statistical literature, and with statisticians.
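A minimal example of the distinction in practice (my own illustration using scipy; the numbers are arbitrary): a Bayesian equal-tailed credible interval for a proportion under a uniform prior, alongside the usual frequentist normal-approximation confidence interval.

```python
from scipy import stats

# Toy data: 37 "successes" out of 50 trials.
successes, trials = 37, 50

# Bayesian: uniform Beta(1, 1) prior -> Beta(1 + s, 1 + f) posterior.
# .interval(0.95) gives an equal-tailed 95% credible interval for the parameter.
posterior = stats.beta(1 + successes, 1 + trials - successes)
credible_interval = posterior.interval(0.95)

# Frequentist: normal-approximation 95% confidence interval for the same proportion.
p_hat = successes / trials
se = (p_hat * (1 - p_hat) / trials) ** 0.5
confidence_interval = (p_hat - 1.96 * se, p_hat + 1.96 * se)

print(credible_interval)    # "the parameter lies here with 95% posterior probability"
print(confidence_interval)  # "a procedure that covers the true value 95% of the time"
```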

Comment by linch on Should EAs be more welcoming to thoughtful and aligned Republicans? · 2020-01-23T11:55:52.259Z · score: 1 (1 votes) · EA · GW

Wow I totally missed this! Edited the post accordingly.

Comment by linch on Should EAs be more welcoming to thoughtful and aligned Republicans? · 2020-01-23T11:55:37.014Z · score: 2 (2 votes) · EA · GW

Strongly agree that we should be cooperative in general.

On my Facebook there's extensive discussion about whether registering as a Republican is cooperative or not (one big difference here is that party registration seems meaningfully different in the US and the UK; eg, in the US there are no dues).

Personally, I would strongly recommend against registering for a party unless you want to think of yourself as belonging to that party, potentially for years or longer.

Comment by linch on Should EAs be more welcoming to thoughtful and aligned Republicans? · 2020-01-23T11:52:52.640Z · score: 3 (2 votes) · EA · GW

I asked a few EA-aligned Brits what % of EAs they think voted for the Conservatives. I suspect this number is not robust across election cycles, but I'm also not confident that mean regression should be the default hypothesis, given a) how young EA is and b) that modern British politics seems to be in somewhat uncharted territory.

Comment by linch on Love seems like a high priority · 2020-01-21T07:23:21.405Z · score: 1 (1 votes) · EA · GW

I don't know what the time period is, but at the risk of saying the obvious, the historical rate of maternal mortality was much higher than it is today in the First World.

Our World in Data[1] estimates historical rates at 0.5-1% per birth. So assuming 6 births per woman[2], you get 3-6% of married women dying from childbirth alone, at a relatively young age.

[1] https://ourworldindata.org/maternal-mortality

[2] https://ourworldindata.org/fertility-rate#the-number-of-children-per-woman-over-the-very-long-run
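Spelling out that estimate (a quick sketch; the per-birth risk range comes from the OWID page above, and 6 births per woman plus independence between births are simplifying assumptions):

```python
# Cumulative risk of dying in childbirth across several births,
# assuming each birth is an independent event (a simplification).
births = 6
for per_birth_risk in (0.005, 0.01):   # 0.5% to 1% per birth
    cumulative_risk = 1 - (1 - per_birth_risk) ** births
    print(per_birth_risk, round(cumulative_risk, 3))   # ~0.030 and ~0.059, i.e. roughly 3-6%
```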

Comment by linch on Love seems like a high priority · 2020-01-20T02:49:48.546Z · score: 9 (6 votes) · EA · GW

I'm pretty convinced by Kelsey's summary here of why Paul Dolan's methods were substantively lacking: https://www.vox.com/future-perfect/2019/6/4/18650969/married-women-miserable-fake-paul-dolan-happiness

Kelsey has stronger words on (public) Facebook.

Comment by linch on Love seems like a high priority · 2020-01-20T02:46:27.398Z · score: 1 (1 votes) · EA · GW

Another point against tractability is that you're in some important ways directly fighting against evolution.

Comment by linch on A small observation about the value of having kids · 2020-01-20T02:44:39.447Z · score: 4 (3 votes) · EA · GW

N=1, but I enjoyed reading the Autobiography of John Stuart Mill.

Comment by linch on Linch's Shortform · 2020-01-19T04:15:12.399Z · score: 1 (1 votes) · EA · GW

Good point. Now that you bring this up, I vaguely remember a Reddit AMA where an evolutionary biologist made the (obvious in hindsight, but it never occurred to me at the time) claim that with multilevel selection, altruism on one level often means defecting on a higher (or lower) level. Which probably unconsciously inspired this post!

As for making it top-level, I originally wanted to include a bunch of thoughts on the unilateralist's curse as a post, but then I realized that I'm a one-trick pony in this domain...it's hard to think of novel/useful things that Bostrom et al. haven't already covered!

Comment by linch on Is learning about EA concepts in detail useful to the typical EA? · 2020-01-17T13:42:39.397Z · score: 10 (7 votes) · EA · GW

To answer my own question, I suspect a lot of this comes from EA still being young and relatively pre-paradigmatic. A lot of valuable careers and projects seem to be on the timescale of years or even months rather than decades, so keeping very up-to-date with what the hivemind is thinking, plus interfacing with your existing plans/career capital/network, allows you to spot new opportunities for valuable projects that you otherwise may not have even considered.

I suspect that as EA matures and formalizes, following current EA thinking will become less and less fruitful for the typical EA, and engagement (after some possibly significant initial investment) will look more like "read a newsletter once in a while" and "have a few deep conversations a year", even for very serious and dedicated EAs with some free time.

Comment by linch on Is learning about EA concepts in detail useful to the typical EA? · 2020-01-17T13:28:47.008Z · score: 4 (3 votes) · EA · GW

Hmm, different people vary a lot on what they find effortful, but I'm guessing a reasonable substitute for Facebook and the EA Forum for someone interested in development economics isn't doing an online degree, but probably something like following (other?) developmental econ academics or practitioners on Twitter.

Comment by linch on Linch's Shortform · 2020-01-17T13:24:11.084Z · score: 9 (7 votes) · EA · GW

I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing.

Consider the following hypothetical situations:

  1. Company policy vs. team discretion
    1. Alice is a researcher in a team of scientists at a large biomedical company. While working on the development of an HIV vaccine, the team accidentally created an air-transmissible variant of HIV. The scientists must decide whether to publish their discovery with the rest of the company, knowing that leaks may exist, and the knowledge may be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons, including other teams within the same company. Most of the team thinks they should keep it quiet, but company policy is strict that such information must be shared with the rest of the company to maintain the culture of open collaboration.
    2. Alice thinks the rest of the team should either share this information or quit. Eventually, she tells her vice president about her concerns, and the VP relays them to the rest of the company in a company-open document.
    3. Alice does not know if this information ever leaked past the company.
  2. Stan and the bomb
    1. Stan is an officer in charge of overseeing a new early warning system intended to detect (nuclear) intercontinental ballistic missiles from an enemy country. The warning system appeared to have detected five missiles heading towards his homeland, an alert that quickly passed through 30 early layers of verification. Stan suspects this is a false alarm, but is not sure. Military instructions are clear that such warnings must immediately be relayed upwards.
    2. Stan decided not to relay the message to his superiors, on the grounds that it was probably a false alarm and he didn’t want his superiors to mistakenly assume otherwise and therefore start a catastrophic global nuclear war.
  3. Listen to the UN, or other countries with similar abilities?
    1. Elbonia, a newly founded Republic, has an unusually good climate engineering program. Elbonian scientists and engineers are able to develop a comprehensive geo-engineering solution that they believe can reverse the climate crisis at minimal risk. Further, the United Nations’ General Assembly recently passed a resolution that stated in no uncertain terms that any nation in possession of such geo-engineering technology must immediately a) share the plans with the rest of the world and b) start the process of lowering the world’s temperature by 2 °C.
    2. However, there’s one catch: Elbonian intelligence knows (or suspects) that five other countries have developed similar geo-engineering plans, but have resolutely refused to release or act on them. Furthermore, four of the five countries have openly argued that geo-engineering is dangerous and has potentially catastrophic consequences, but refused to share explicit analysis why (Elbonia’s own risk assessment finds little evidence of such dangers).
    3. Reasoning that he should be cooperative with the rest of the world, the prime minister of Elbonia made the executive decision to obey the General Assembly’s resolution and start lowering the world’s temperature.
  4. Cooperation with future/past selves, or other people?
    1. Ishmael’s crew has a white elephant holiday tradition, where individuals come up with weird and quirky gifts for the rest of the crew secretly, and do not reveal what the gifts are until Christmas. Ishmael comes up with a brilliant gift idea and hides it.
    2. While drunk one day with other crew members, Ishmael accidentally lets slip that he was particularly proud of his idea. The other members egg him on to reveal more. After a while, Ishmael finally relents when some other crew members reveal their ideas, reasoning that he shouldn’t be a holdout. Ishmael suspects that he will regret his past self’s decision when he becomes more sober.

Putting aside whether the above actions were correct or not, in each of the above cases, have the protagonists acted unilaterally?

I think this is a hard question to answer. My personal answer is “yes,” but I think another reasonable person can easily believe that the above protagonists were fully cooperative. Further, I don’t think the hypothetical scenarios above were particularly convoluted edge cases. I suspect that in real life, figuring out whether the unilateralist’s curse applies to your actions will hinge on subtle choices of reference classes. I don’t have a good solution to this.

Comment by linch on [Notes] Could climate change make Earth uninhabitable for humans? · 2020-01-17T00:03:00.388Z · score: 4 (3 votes) · EA · GW

Thanks for the quick response, and really appreciate your (and Louis's) hard work on getting this type of sophisticated/nuanced information out in a way that other EAs can easily understand!