Posts

Quantifying anthropic effects on the Fermi paradox 2019-02-15T10:47:04.239Z · score: 60 (18 votes)

Comments

Comment by lukas_finnveden on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-03T00:50:19.475Z · score: 2 (2 votes) · EA · GW

The lesswrong link in the post doesn't work for me. This is the correct one: https://www.lesswrong.com/posts/AG6PAqsN5sjQHmKfm/conversation-on-forecasting-with-vaniver-and-ozzie-gooen

Comment by lukas_finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:32:15.801Z · score: 1 (1 votes) · EA · GW
Images can't be added to comments; is that what you were trying to find a workaround for?

It's possible to add images to comments by selecting and copying them from anywhere public (note that it doesn't work if you right click and choose 'copy image'). In this thread, I do it in this comment.

I do see that I can't do it manually by selecting text, though. I wouldn't expect it to be too difficult to add that possibility, given that it's already possible in another way?

Comment by lukas_finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:25:17.173Z · score: 18 (7 votes) · EA · GW

With regard to images, I get flawless behaviour when I copy-paste from Google Docs. Somehow, the images automatically get converted and link to the versions hosted by Google (visible in the editor only as small camera icons). Maybe you can get the same behaviour by making your docs public?

Actually, I'll test copying an image from a Google Doc into this comment: (edit: seems to be working!)

Comment by lukas_finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:11:06.197Z · score: 8 (4 votes) · EA · GW

Copying all relevant information from the lesswrong faq to an EA Forum faq would be a good start. The problem of how to make its existence public knowledge remains, but that's partly solved automatically by people mentioning and linking to it, and by it showing up in Google searches.

Comment by lukas_finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:03:22.404Z · score: 4 (3 votes) · EA · GW

There's a section on writing in the lesswrong faq (named Posting & Commenting). If any information is missing from there, you can suggest adding it in the comments.

Of course, even given that such instructions exist somewhere, it's important to make sure that they're findable. I'm not sure what the best way to do that is.

Comment by lukas_finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-25T12:37:15.483Z · score: 5 (4 votes) · EA · GW

I'm by no means schooled in academic philosophy, so I could also be wrong about this.

I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian 'we should keep all the complexities of human value around'-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories' Wikipedia pages call them ethical theories.) When I think about meta-ethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, as cole_haus mentions.

My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don't, which would be an ethical disagreement. The borderlines aren't very sharp, though. If HLI had asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made a metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn't hedonistic utilitarianism.

(I saw you quoting Nate's post in another thread. I think you could say that it makes a meta-ethical argument that it's possible to care about things outside yourself, but that it doesn't make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people's experiences.)

Comment by lukas_finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-23T21:10:06.204Z · score: 5 (5 votes) · EA · GW
For whatever it's worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.

Might just be a nitpick, but isn't this an ethical intuition, rather than a metaethical one?

(I remember hearing other people use "metaethics" in cases where I thought they were talking about object-level ethics as well, so I'm trying to understand whether there's a reason behind this or not.)

Comment by lukas_finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-23T20:58:07.514Z · score: 8 (5 votes) · EA · GW

Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn't necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.

(I don't trust the article's preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)

Comment by lukas_finnveden on Not getting carried away with reducing extinction risk? · 2019-06-05T21:21:30.582Z · score: 4 (3 votes) · EA · GW
We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.

If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one. Also, if the only effect of disruption was to re-randomize the world order, then the only thing you'd need for disruption to be positive is for the current state to be worse than the average configuration drawn from the distribution. Maybe this is what you mean by "particularly bad state", but intuitively, I'd interpret that as something more like the bottom 15 %.

There are certainly arguments to make for our world being better than average. But I do think that you actually have to make those arguments, and that without them, this abstract model won't tell you whether disruption is good or bad.
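
(To make the point concrete, here's a minimal sketch of the toy model I have in mind. The distribution and all the numbers are made up purely for illustration; nothing here comes from the original comment's model.)

```python
import random

# Toy model with made-up numbers: 90 % of possible world configurations are
# "undesirable" (value 0) and 10 % are "desirable" (value 1).
random.seed(0)

def random_configuration_value():
    return 1.0 if random.random() < 0.1 else 0.0

mean_value = sum(random_configuration_value() for _ in range(100_000)) / 100_000

# If disruption simply redraws the configuration from this distribution, its
# expected effect is mean_value - current_value: positive whenever the current
# state is below the distribution's mean, even though most configurations are bad.
for current_value in [0.0, 0.05, 0.1, 0.5, 1.0]:
    print(f"current value {current_value}: expected effect {mean_value - current_value:+.3f}")
```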

Comment by lukas_finnveden on How to use the Forum · 2019-05-18T18:30:38.996Z · score: 5 (3 votes) · EA · GW

If you go to "Edit account", there's a checkbox that says "Activate markdown editor". If you uncheck that one (I would've expected it to be unchecked by default, but maybe it isn't), you get formatting options just by selecting your text.

Comment by lukas_finnveden on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-10T20:34:48.903Z · score: 16 (12 votes) · EA · GW

Although psychedelics are plausibly good from a short-termist view, I think the argument from the long-termist view is quite weak. Insofar as I understand it, psychedelics would improve the long term by

1. Making EAs or other well-intentioned people more capable.

2. Making people more well-intentioned. I interpret this as either causing them to join/stay in the EA community, or causing capable people to become altruistically motivated (in a consequentialist fashion) independently of the EA community.

Regarding (1), I could see a case for privately encouraging well-intentioned people to use psychedelics, if you believe that psychedelics generally make people more capable. However, pushing for new legislation seems like an exceedingly inefficient way to go about this. Rationality interventions are unique in that they are quite targeted - they identify well-intentioned people and give them the techniques that they need. Pushing for new psychedelic legislation, however, could only help by making the entire population more capable, of which well-intentioned people are only a small fraction. I don't know exactly how hard it is to change legislation, but I'd be surprised if it was worth doing solely for the effect on EAs and other aligned people. New research suffers from a similar problem: good medical research is expensive, so you probably want to have a pretty specific idea about how it benefits EAs before you invest a lot in it.

Regarding (2), I'd be similarly surprised if
campaigning for new legislation -> more people use psychedelics -> more people become altruistically motivated -> more people join the EA community
was a better way to get people into EA than just directly investing in community building.

For both (1) and (2), these conclusions might change if you cared less about EAs in particular, and thought that the future would be significantly better if the average person was somewhat more altruistic or somewhat more capable. I could be interested in hearing such a case. This doesn't seem very robust to cluelessness, though, given the uncertainty about how psychedelics affect people and the uncertainty about how increasing general capabilities affects the long term.

Comment by lukas_finnveden on Why we should be less productive. · 2019-05-09T22:58:19.728Z · score: 18 (11 votes) · EA · GW
Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don't want to hear, but maybe they need.

I don't think this position is unpopular in the EA community. "You have more than one goal, and that's fine" got lots of upvotes, and my impression is that there's a general consensus that breaks are important and that burnout is a real risk (even though people might not always act according to that consensus).

I'd guess that it's getting downvotes because it doesn't really explain why we should be less productive: it just stakes out the position. In my opinion, it would have been more useful if it, for example, presented evidence showing that unproductive time is useful for living a fulfilled life, or presented an argument for why living a fulfilled life is important even for your altruistic values (which Jakob does more of in the comments).

Meta meta note: In general, it seems kind of uncooperative to assume that people need more of things they downvote.

Comment by lukas_finnveden on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-20T22:20:59.767Z · score: 4 (3 votes) · EA · GW
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give.

I think this is the article you're thinking about, where they're talking about the paths of marginal graduates. Note that it's from 2015 (though at least Will said he still thought it seemed right in 2016) and explicitly labeled with "Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question".

Comment by lukas_finnveden on Top Charity Ideas 2019 - Charity Entrepreneurship · 2019-04-17T18:07:05.439Z · score: 3 (5 votes) · EA · GW

Fantastic work! Nitpicks:

The last paragraph duplicates the second-to-last paragraph.

However, the beneficial effects of the cash transfer may be much lower in a UCT

Is this supposed to say "lower in a CCT"?

Comment by lukas_finnveden on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-17T14:03:42.265Z · score: 7 (5 votes) · EA · GW

As a problem with the 'big list', you mention

2. For every reader, such a list would include many paths that they can’t take.

But it seems like there's another problem, closely related to this one: for every reader, the paths on such a list could have different orderings. If someone has a comparative advantage for a role, it doesn't necessarily mean that they can't aim for other roles: but it might mean that they should prefer the role that they have a comparative advantage for. This is especially true once we consider that most people don't know exactly what they could do and what they'd be good at - instead, their personal lists contain a bunch of things they could aim for, ordered according to different probabilities of having different amounts of impact.

In particular, I think it's a bad idea to take a 'big list', winnow away all the jobs that look impossible, and then aim for whatever is on top of the list. Instead, your personal list might overlap with others', but have a completely different ordering (yet hopefully contain a few items that other people haven't even considered, given that 80k can't evaluate all opportunities, as you say).

Comment by lukas_finnveden on The case for delaying solar geoengineering research · 2019-03-24T09:36:13.727Z · score: 5 (5 votes) · EA · GW
This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.

Hm, I thought one of the main worries was that major global powers wouldn't have to agree, since any country would be able to launch a geoengineering program on its own, changing the climate for the whole planet.

Do you think that global governance is good enough to disincentivize lone states from launching a program, purely from fear of punishment? Or would it be possible to somehow reverse the effects?

Actually, would you even need to be a state to launch a program like this? I'm not sure how cheap it could become, or if it'd be possible to launch in secret.

Comment by lukas_finnveden on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T23:49:18.786Z · score: 22 (11 votes) · EA · GW

Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most voted-on posts are "Empirical data on value drift" at 75 and "Effective altruism is a question" at 68.

Comment by lukas_finnveden on Quantifying anthropic effects on the Fermi paradox · 2019-02-28T23:24:55.616Z · score: 3 (2 votes) · EA · GW
I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions)

I definitely agree about some numbers. Maybe I should have been more explicit about this in the post, but I have low credence in the exact distribution of (as well as , , and ): it depends far too much on the absolute rate of planet formation and the speed at which civilisations travel.

However, I'm much more willing to believe that the average fraction of space that would be occupied by alien civilisations in our absence is somewhere between 30 % and 95 %, or so. A lot of the arbitrary assumptions that affect it cancel out when running the simulation, and the remaining parameters affect the result surprisingly little. My main (known) uncertainties are

  • Whether it's safe to assume that intergalactic colonisation is possible. From the perspective of total consequentialism, this is largely a pragmatic question about where we can have the most impact (which is affected by a lot of messy empirical questions).
  • How much the results would change if we allowed for a late increase in life more sudden than the one in Appendix C (either because of a sudden shift in planet formation or because of something like gamma-ray bursts). Anthropics should affect our credence in this, as you point out, and the anthropic update would be quite large in its favor. However, the prior probability of a very sudden increase seems small. That prior is very hard to quantify, and I think my simulation would be less reliable in the more extreme cases, so this possibility is quite hard to analyse.

Do you agree, or do you have other reasons to doubt the 30%-95% number?

This seems overall too pessimistic to me as a pre-anthropic prior for colonization

I agree that the mean is too pessimistic. The distribution is too optimistic about the impossibility of lower numbers, though, which is what matters after the anthropic update. I mostly just wanted a distribution that illustrated the idea about the late filter without having it ruin the rest of the analysis. The distribution after updating is almost exactly the same anyway, as long as negligible probability is assigned to sufficiently low numbers.
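
(As a rough illustration of why the prior's mass at low values is what matters, here is a toy anthropic-style update. It is not the simulation from the post: the weighting function, both priors, and the meaning of the parameter q are all made up purely to show the qualitative effect.)

```python
import numpy as np

# Toy anthropic update with made-up numbers (not the model from the post).
# Hypotheses: q = how strongly early civilisations tend to fill up space.
# Assume, arbitrarily, that the chance of a civilisation like ours appearing
# in still-unoccupied space falls off steeply as q grows.
q = np.linspace(0.001, 1.0, 1000)
anthropic_weight = (1 - q) ** 50

def posterior_mean(prior):
    prior = prior / prior.sum()
    post = prior * anthropic_weight
    post = post / post.sum()
    return (q * post).sum()

# Prior A rules out low q entirely ("lower numbers are impossible");
# prior B is uniform and keeps some mass at low q.
prior_a = np.where(q > 0.3, 1.0, 0.0)
prior_b = np.ones_like(q)

print("prior that excludes low q, posterior mean:", posterior_mean(prior_a))
print("uniform prior, posterior mean:", posterior_mean(prior_b))
# The posterior ends up dominated by whatever mass the prior puts on low q:
# the prior that rules out low values barely moves, while the uniform prior
# gets pulled far down by the update.
```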

Comment by lukas_finnveden on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T09:51:37.157Z · score: 2 (2 votes) · EA · GW

Given that the risk of nuclear war conditional on climate change seems considerably lower than the unconditional risk of nuclear war

Do you really mean that P(nuclear war | climate change) is less than P(nuclear war)? Or is this supposed to say that the risk of nuclear war and climate change is less than the unconditional probability of nuclear war? Or something else?

Comment by lukas_finnveden on An integrated model to evaluate the impact of animal products · 2019-01-09T21:57:48.863Z · score: 9 (4 votes) · EA · GW

It's 221 million neurons. Source: http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html

You might be thinking of fruit flies; they have 250k.

Comment by lukas_finnveden on If You’re Young, Don’t Give To Charity · 2018-12-24T23:35:45.120Z · score: 3 (3 votes) · EA · GW

Wealth almost entirely belongs to the old. The median 60-year-old has 45 times (yes, forty-five times) the net worth of the median 30-year-old.

Hm, I think income might be a better measure than wealth. I'm not sure what they count as wealth, since the link is broken, but a pretty large fraction of that gap may be due to the fact that 60-year-olds need to own their houses and hold retirement savings. If the real reason that 30-year-olds lack wealth is that they don't need wealth, someone determined to give to charity might be able to gather money comparable to most 60-year-olds'.

Comment by lukas_finnveden on Should donor lottery winners write reports? · 2018-12-23T10:50:33.331Z · score: 7 (3 votes) · EA · GW

Carl's comment renders this irrelevant for CEA lotteries, but I think this reasoning is wrong even for the type of lotteries you imagine.

In either one the returns are good in expectation purely based on you getting a 20% chance to 5x your donation (which is good if you think there's increasing marginal returns to money at this level), but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.

What you're forgetting is that in the 20 % of worlds where your donation wins, you'd rather have been in the pool without thoughtful people. If you were, you'd get to regrant 50k smartly, and a thoughtful person would still get to regrant 40k. However, if you were in the pool with thoughtful people, the other thoughtful people wouldn't get to regrant any money, and the 40k in the thoughtless group would go to some thoughtless cause.

When joining a group (under your assumptions, which aren't true for CEA), you increase everyone's winnings while decreasing the probability that they win. In expectation, they all get to regrant the same amount of money. So the only situation where the decision between groups matters is if you have some very specific ideas about marginal utility, e.g. if you want to ensure that there exists at least one thoughtful lottery winner, and don't care much about the second.
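
(A quick check of the expected values, using the 10k/40k/50k numbers from above and assuming the winner is drawn in proportion to contributions; this is just my own sketch of the arithmetic.)

```python
# Expected money regranted thoughtfully (by you or by other thoughtful people):
# you add 10k to a pool that already holds 40k, the winner regrants the whole
# 50k pot, and your win probability is proportional to your contribution.
your_donation = 10_000
existing_pool = 40_000
pot = your_donation + existing_pool
p_you_win = your_donation / pot  # 20 %

# Join the thoughtless pool: the separate thoughtful pool (40k) is regranted
# thoughtfully no matter what, and you add 50k of thoughtful regranting if you win.
join_thoughtless = p_you_win * (pot + existing_pool) + (1 - p_you_win) * existing_pool

# Join the thoughtful pool: its 50k is regranted thoughtfully whether or not you
# personally win, while the thoughtless 40k goes to a thoughtless cause either way.
join_thoughtful = pot

print(join_thoughtless, join_thoughtful)  # both 50,000 in expectation
```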

Comment by lukas_finnveden on The expected value of extinction risk reduction is positive · 2018-12-18T23:33:34.231Z · score: 1 (1 votes) · EA · GW

Since the post is very long, and since a lot of readers are likely to be familiar with some of the arguments already, I think a table of contents at the beginning would be very valuable. I sure would like one.

I see that it's already possible to link to individual sections (like https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#a-note-on-disvalue-focus) so I don't think this would be too hard to add?

Comment by lukas_finnveden on Lessons Learned from a Prospective Alternative Meat Startup Team · 2018-12-13T23:06:21.594Z · score: 3 (3 votes) · EA · GW
Reports we’ve heard indicate that extrusion capacity is currently the limiting factor driving up costs for plant-based alternatives in the United States. As a result, we’d only want to pursue this path if we have strong reason to believe that our plant-based alternative was not displacing a better plant-based alternative in the market.

What's the connection between extrusion capacity and not displacing better alternatives?

Comment by lukas_finnveden on Critique of Superintelligence Part 1 · 2018-12-13T22:44:33.247Z · score: 2 (2 votes) · EA · GW
To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapidly recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability.

While I'm not entirely convinced of a fast take-off, this particular argument isn't obvious to me. If the AI is better than humans at every cognitive task, then for every ability X that we care about, it will be better at the cognitive task of improving X. Additionally, it will be better at the cognitive task of improving its ability to improve X, etc. It will be better than humans at constructing an AI that is good at every cognitive task, and will thus be able to create one better than itself.

This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips.

This doesn't seem very unlikely to me. As a proof of concept, consider a paper-clip maximiser able to simulate several clever humans at high speed. If it was posed a moral dilemma (and was motivated to answer it), it could perform above human level by simulating humans at high speed (in a suitable situation where they are likely to produce an honest answer to the question) and directly reporting their output. However, it wouldn't have to be motivated by that answer.

Comment by lukas_finnveden on Open Thread #43 · 2018-12-09T22:40:52.632Z · score: 2 (2 votes) · EA · GW

I definitely expect that there are people who will lose out on happiness from donating.

Making it a bit more complicated, though, and moving out of the area where it's easy to do research, there are probably happiness benefits from things like 'being in a community' and 'living with purpose'. Giving 10 % per year and adopting the role 'earning to give', for example, might enable you to associate life-saving with every hour you spend on your job, which could be pretty positive (I think feeling that your job is meaningful is associated with happiness). My intuition is that the difference between 10 % and 1 % could matter for being able to adopt this identity, but I might be wrong. And a lot of the gains from high incomes probably come from increased status, which donating money is one way to get.

I'd be surprised if donating lots of money was the optimal thing to do if you wanted to maximise your own happiness. But I don't think there's a clear case that it's worse than the average person's spending.

Comment by lukas_finnveden on Existential risk as common cause · 2018-12-09T09:12:50.774Z · score: 3 (3 votes) · EA · GW
Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, between ‘one which ends all human life’ and ‘one which ends all life’. They’d still have to work against the latter, which covers the artificial x-risks.

I agree that it covers AI, but I'm not sure about the other artificial x-risks. Nuclear winter severe enough to eventually kill all humans would definitely kill all large animals, but some smaller forms of life would survive. And while bio-risk could vary a lot in how many species were susceptible to it, I don't think anyone could construct a pathogen that affects everything.

Comment by lukas_finnveden on Thoughts on short timelines · 2018-10-24T09:25:22.168Z · score: 3 (3 votes) · EA · GW

Seems like there's still self-selection going on, depending on how much you think 'a lot' is, and how good you are at finding everyone who has thought about it that much. You might be missing out on people who thought about it for, say, 20 hours, decided it wasn't important, and moved on to other cause areas without writing up their thoughts.

On the other hand, it seems like people are worried about and interested in talking about AGI happening in 20, 30, or 50 years' time, so it doesn't seem likely that everyone who thinks 10-year timelines are <10% stops talking about it.

Comment by lukas_finnveden on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-28T09:23:08.864Z · score: 1 (1 votes) · EA · GW

I remain unconvinced, probably because I mostly care about observer-moments, and don't really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can't quite see what it would look like yet. You might want to make those ethical intuitions as concrete as you can, and put them under 'Assumptions'.

Comment by lukas_finnveden on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T15:20:47.788Z · score: 5 (5 votes) · EA · GW

However, this trick will increase the total suffering in the multiverse, from the purely utilitarian perspective, by 1000 times, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage)—simply because it will pass very quickly.

I'd say pain experienced during 0.1 seconds is about 10 times less bad than pain experienced during 1 second. I don't see why we should discount it any further than that. Our particular human psychology might be better at dealing with injury if we expect it to end soon, but we can't change what the observer-moment S(t) expects to happen without changing the state of its mind. If we change the state of its mind, it's not a copy of S(t) anymore, and the argument fails.

In general, I can't see how this plan would work. As you say, you can't decrease the absolute number of suffering observer-moments, so it won't do any good from the perspective of total utilitarianism. The closest thing I can imagine is to "dilute" pain by creating similar but somewhat happier copies, if you believe in some sort of average utilitarianism that cares about identity. That seems like a strange moral theory, though.

Comment by lukas_finnveden on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-26T09:37:39.846Z · score: 1 (1 votes) · EA · GW

Neither the link in the text nor Chi's links work for me; they all give 404. I can't find the data when looking directly at Peter's GitHub either: https://github.com/peterhurford/ea-data/tree/master/data/2018