Quantifying anthropic effects on the Fermi paradox

2019-02-15T10:47:04.239Z · score: 34 (12 votes)
Comment by lukas_finnveden on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T09:51:37.157Z · score: 2 (2 votes) · EA · GW

Given that the risk of nuclear war conditional on climate change seems considerably lower than the unconditional risk of nuclear war

Do you really mean that P(nuclear war | climate change) is less than P(nuclear war)? Or is this supposed to say that the joint risk of nuclear war and climate change is less than the unconditional probability of nuclear war? Or something else?
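To spell out the two readings I have in mind (writing $W$ for nuclear war and $C$ for climate change, my notation):

$$P(W \mid C) < P(W) \qquad \text{versus} \qquad P(W \wedge C) = P(W \mid C)\,P(C) < P(W).$$

The second can hold even when the first doesn't, since $P(W \wedge C) \le P(W \mid C)$.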

Comment by lukas_finnveden on An integrated model to evaluate the impact of animal products · 2019-01-09T21:57:48.863Z · score: 9 (4 votes) · EA · GW

It's 221 million neurons. Source: http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html

You might be thinking of fruit flies; they have 250k.

Comment by lukas_finnveden on If You’re Young, Don’t Give To Charity · 2018-12-24T23:35:45.120Z · score: 3 (3 votes) · EA · GW

Wealth almost entirely belongs to the old. The median 60-year-old has 45 times (yes, forty-five times) the net worth of the median 30-year-old.

Hm, I think income would be a better measurement than wealth? I'm not sure what they count as wealth, since the link is broken, but a pretty large fraction of that gap may be due to the fact that 60-year-olds need to own their house and their retirement savings. If the real reason that 30-year-olds lack wealth is that they don't need wealth, someone determined to give to charity might be able to gather money comparable to most 60-year-olds.

Comment by lukas_finnveden on Should donor lottery winners write reports? · 2018-12-23T10:50:33.331Z · score: 7 (3 votes) · EA · GW

Carl's comment renders this irrelevant for CEA lotteries, but I think this reasoning is wrong even for the type of lotteries you imagine.

In either one the returns are good in expectation purely based on you getting a 20% chance to 5x your donation (which is good if you think there's increasing marginal returns to money at this level), but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.

What you're forgetting is that in the 20% of worlds where you win, you'd rather have been in the pool without thoughtful people. If you were, you would get to regrant 50k smartly, and a thoughtful person would get to regrant the 40k in the other pool. However, if you were in the pool with thoughtful people, the thoughtful people wouldn't get to regrant any money, and the 40k in the thoughtless group would go to some thoughtless cause.

When joining a group (under your assumptions, which aren't true for CEA), you increase the winnings of everyone in it while decreasing the probability that they win. In expectation, they all get to regrant the same amount of money. So the only situation where the choice between groups matters is if you have some very specific ideas about marginal utility, e.g. if you want to ensure that there exists at least one thoughtful lottery winner and don't care much about the second.
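To make this concrete, here's a minimal sketch with made-up numbers: a 10k donation joining a pool that already holds 40k (so a 20% chance to 5x, as in your setup), with a second 40k pool on the other side. The pool sizes and the helper function are purely illustrative assumptions, not anything from the original post.

```python
# Minimal sketch with assumed numbers (not from the original post):
# your donation is 10k, each pool already holds 40k from others, so joining a
# pool gives you a 20% chance of winning that pool's 50k pot (5x your donation).
YOUR_DONATION = 10_000
OTHERS_PER_POOL = 40_000
POT = YOUR_DONATION + OTHERS_PER_POOL      # 50_000
P_WIN = YOUR_DONATION / POT                # 0.2

def expected_thoughtful_regrants(join_thoughtful_pool: bool) -> float:
    """Expected money regranted thoughtfully (by you or by thoughtful people)."""
    if join_thoughtful_pool:
        # The other pool (all thoughtless) regrants its 40k thoughtlessly no
        # matter what. Your pool's 50k is regranted thoughtfully whether you
        # win (20%) or another thoughtful person wins (80%).
        return float(POT)
    else:
        # The thoughtful pool's 40k is always regranted thoughtfully by its
        # winner. Your (otherwise thoughtless) pool's 50k is only regranted
        # thoughtfully in the 20% of worlds where you win.
        return OTHERS_PER_POOL + P_WIN * POT

print(expected_thoughtful_regrants(True))   # 50000.0
print(expected_thoughtful_regrants(False))  # 50000.0
```

Both choices give 50k of thoughtfully-regranted money in expectation; they only differ in how it is spread across winning and losing worlds.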

Comment by lukas_finnveden on The expected value of extinction risk reduction is positive · 2018-12-18T23:33:34.231Z · score: 1 (1 votes) · EA · GW

Since the post is very long, and since a lot of readers are likely to be familiar with some arguments already, I think a table of contents at the beginning would be very valuable. I sure would like one.

I see that it's already possible to link to individual sections (like https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#a-note-on-disvalue-focus) so I don't think this would be too hard to add?

Comment by lukas_finnveden on Lessons Learned from a Prospective Alternative Meat Startup Team · 2018-12-13T23:06:21.594Z · score: 3 (3 votes) · EA · GW
Reports we’ve heard indicate that extrusion capacity is currently the limiting factor driving up costs for plant-based alternatives in the United States. As a result, we’d only want to pursue this path if we have strong reason to believe that our plant-based alternative was not displacing a better plant-based alternative in the market.

What's the connection between extrusion capacity and not displacing better alternatives?

Comment by lukas_finnveden on Critique of Superintelligence Part 1 · 2018-12-13T22:44:33.247Z · score: 2 (2 votes) · EA · GW
To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapidly recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability.

While I'm not entirely convinced of a fast take-off, this particular argument isn't obvious to me. If the AI is better than humans at every cognitive task, then for every ability X that we care about, it will be better at the cognitive task of improving X. Additionally, it will be better at the cognitive task of improving its ability to improve X, etc. It will be better than humans at constructing an AI that is good at every cognitive task, and will thus be able to create one better than itself.

This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.

This doesn't seem very unlikely to me. As a proof of concept, consider a paperclip maximiser able to simulate several clever humans at high speed. If it was posed a moral dilemma (and was motivated to answer it), it could perform above human level by simulating those humans at high speed (in a suitable situation where they are likely to produce an honest answer to the question) and directly reporting their output. However, it wouldn't have to be motivated by their answer.

Comment by lukas_finnveden on Open Thread #43 · 2018-12-09T22:40:52.632Z · score: 2 (2 votes) · EA · GW

I definitely expect that there are people who will lose out on happiness from donating.

Making it a bit more complicated, though, and moving out of the area where it's easy to do research: there are probably happiness benefits from stuff like 'being in a community' and 'living with purpose'. Giving 10% per year and adopting the role of 'earning to give', for example, might enable you to associate life-saving with every hour you spend at your job, which could be pretty positive (I think feeling that your job is meaningful is associated with happiness). My intuition is that the difference between 10% and 1% could be important for being able to adopt this identity, but I might be wrong. And a lot of the gains from high incomes probably come from increased status, which donating money is a way to get.

I'd be surprised if donating lots of money was the optimal thing to do if you wanted to maximise your own happiness. But I don't think there's a clear case that it's worse than the average person's spending.

Comment by lukas_finnveden on Existential risk as common cause · 2018-12-09T09:12:50.774Z · score: 3 (3 votes) · EA · GW
Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, between ‘one which ends all human life’ and ‘one which ends all life’. They’d still have to work against the latter, which covers the artificial x-risks.

I agree that it covers AI, but I'm not sure about the other artificial x-risks. A nuclear winter severe enough to eventually kill all humans would definitely kill all large animals, but some smaller forms of life would survive. And while bio-risks could vary a lot in how many species are susceptible to them, I don't think anyone could construct a pathogen that affects everything.

Comment by lukas_finnveden on Thoughts on short timelines · 2018-10-24T09:25:22.168Z · score: 3 (3 votes) · EA · GW

Seems like there's still self-selection going on, depending on how much you think 'a lot' is, and how good you are at finding everyone who has thought about it that much. You might be missing out on people who thought about it for, say, 20 hours, decided it wasn't important, and moved on to other cause areas without writing up their thoughts.

On the other hand, it seems like people are worried about and interested in talking about AGI happening in 20 or 30 or 50 years' time, so it doesn't seem likely that everyone who thinks 10-year timelines are <10% stops talking about it.

Comment by lukas_finnveden on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-28T09:23:08.864Z · score: 1 (1 votes) · EA · GW

I remain unconvinced, probably because I mostly care about observer-moments, and don't really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can't quite see how it would look yet. You might want to make those ethical intuitions as concrete as you can, and put them under 'Assumptions'.

Comment by lukas_finnveden on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T15:20:47.788Z · score: 5 (5 votes) · EA · GW

However, this trick will increase the total suffering in the multiverse, from the purely utilitarian perspective, by 1000 times, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage)—simply because it will pass very quickly.

I'd say pain experienced for 0.1 seconds is about 10 times less bad than pain experienced for 1 second. I don't see why we should discount it any further than that. Our particular human psychology might be better at dealing with injury if we expect it to end soon, but we can't change what the observer-moment S(t) expects to happen without changing the state of its mind. If we change the state of its mind, it's not a copy of S(t) anymore, and the argument fails.

In general, I can't see how this plan would work. As you say, you can't decrease the absolute number of suffering observer-moments, so it won't do any good from the perspective of total utilitarianism. The closest thing I can imagine is to "dilute" pain by creating similar but somewhat happier copies, if you believe in some sort of average utilitarianism that cares about identity. That seems like a strange moral theory, though.

Comment by lukas_finnveden on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-26T09:37:39.846Z · score: 1 (1 votes) · EA · GW

Neither the link in the text nor Chi's links work for me; they all give 404s. I can't find the data when looking directly at Peter's GitHub either: https://github.com/peterhurford/ea-data/tree/master/data/2018