Posts

Quantifying anthropic effects on the Fermi paradox 2019-02-15T10:47:04.239Z

Comments

Comment by Lukas_Finnveden on A ranked list of all EA-relevant (audio)books I've read · 2021-02-24T14:35:39.958Z · EA · GW

This has been discussed on LessWrong here: www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be

Strong opinions on both sides, with a majority of people thinking about current karma levels occasionally but not always.

Comment by Lukas_Finnveden on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T21:29:55.093Z · EA · GW

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.

And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

Comment by Lukas_Finnveden on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T19:36:00.091Z · EA · GW

Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism.

I haven't read the top-level post (thanks for summarising!), but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed as the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement can choose a definition that avoids mentioning the most objectionable part of their ideology without changing their beliefs or actions. (Similar to the motte-and-bailey fallacy). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.

Comment by Lukas_Finnveden on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-23T23:02:20.625Z · EA · GW

As a toy example, say that f is some bounded sigmoid function, and my utility function is to maximize f(x), where x is the amount of whatever we intuitively value; it's always going to be the case that f(x+1) > f(x), so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging

This seems right to me.
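
To make this concrete, here's a minimal sketch (my own illustrative numbers, nothing from the original discussion): a mugger offers a 1-in-a-billion chance of creating 10^15 units of value. Linear utility takes the bet; a bounded sigmoid utility, while still strictly increasing (and so scope sensitive), barely registers it.

```python
import math

def linear_utility(x: float) -> float:
    return x

def sigmoid_utility(x: float, scale: float = 100.0) -> float:
    # Bounded and strictly increasing: more value is always better (scope
    # sensitive), but the utility of astronomical amounts saturates near 1.
    return 1.0 / (1.0 + math.exp(-x / scale))

def expected_gain(utility, current: float, payoff: float, prob: float) -> float:
    """Expected utility gain from a gamble that adds `payoff` with probability `prob`."""
    return prob * (utility(current + payoff) - utility(current))

current, payoff, prob = 100.0, 1e15, 1e-9  # hypothetical mugger's offer

print(expected_gain(linear_utility, current, payoff, prob))   # 1e6: linear utility takes the bet
print(expected_gain(sigmoid_utility, current, payoff, prob))  # ~2.7e-10: negligible
```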

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

Yeah, I have no quibbles with this. FWIW, I personally didn't  interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.

(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk-aversion.)

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T17:16:24.462Z · EA · GW

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, up-skilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but seems to gesture at something that there's more of in direct work.)

Comment by Lukas_Finnveden on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-16T13:12:53.203Z · EA · GW

Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc), or bring about fewer intuitively disvaluable aspects of individual lives

If this is the technical meaning of "in expectation", this brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to Pascal's mugging.

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T10:19:38.620Z · EA · GW

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you think you could've learned that some other way?

Also, in your case, skilling up in engineering turned out to be less important than updating on personal fit and philosophising. I'm curious if you think you would've updated as hard on your personal fit in a non-safety workplace, and if you think your off-work philosophy would've been similarly good?

(Of course, you could answer: yes there were many benefits from working in the safety team; but the benefits from working in other orgs – e.g. getting non-EA connections – are similarly large in expectation.)

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T10:07:00.863Z · EA · GW

Great post!

EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA.

I can't immediately see why a lack of experience with political maneuvering would mean that we often waste prestigious people's time. Could you give an example? Is this just when an EA is talking to someone prestigious and asks a silly question? (e.g. "Why do you need a managing structure when you could just write up your goals and then ask each employee to maximize those goals?" or whatever)

Comment by Lukas_Finnveden on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-06T01:14:44.650Z · EA · GW

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuition would kick in, and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to increase one person's life from ok to awesome, I imagine that most people prefer to cure a billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuitions differ in this case and in the repugnant conclusion, I claim that "The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future" is incorrect. The fact that the repugnant conclusion is about merely possible people clearly matters for people's intuitions in some way.

I agree that the repugnance can't be grounded by saying that merely possible people don't matter at all. But there are other possible mechanisms that treat merely possible people differently from existing people, and that can ground the repugnance. For example, the one in the paper that this post is discussing!

Comment by Lukas_Finnveden on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T20:41:15.713Z · EA · GW

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, most people who oppose the repugnant conclusion instead favor egalitarian solutions, granting small benefits to many (though I haven't seen any data on this, so I'd be curious if you disagree!). Whereas when debating who to bring into existence, people who oppose the repugnant conclusion aren't just indifferent about what happens to these merely-possible people; they actively think that the happy, tiny population is better. 

So the tricky thing is that people intuitively support granting small benefits to many already existing people above large benefits to a few already existing people, but don't want to extend this to creating many barely-good lives above creating a few really good ones.

Comment by Lukas_Finnveden on The Fermi Paradox has not been dissolved · 2020-12-14T23:12:48.711Z · EA · GW

with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen

I haven't run the numbers, but I wouldn't be quite so dismissive. Intergalactic travel is probably possible, so with numbers as high as these, I would've expected us to encounter some early civilisation from another galaxy. So if these numbers were right, it'd be some evidence that intergalactic travel is impossible, or that something else strange is going on.

(Also, it would be an important consideration for whether we'll encounter aliens in the future, which has at least some cause prio implications.)

(But also, I don't buy the argument for these numbers, see my other comment.)

Comment by Lukas_Finnveden on The Fermi Paradox has not been dissolved · 2020-12-13T11:11:39.400Z · EA · GW

I hadn't seen the Lineweaver and Davis paper before, thanks for pointing it out! I'm sceptical of the methodology, though. They start out with a uniform prior between 0 and 1 over the probability that life emerges in a ~0.5B year time window. This is pretty much assuming their conclusion already, as it assigns <0.1% probability to life emerging with less than 0.1% probability (I much prefer log-uniform priors). The exact timing of abiogenesis is then used to get a very modest Bayesian update (less than 2:1 in favor of "life always happens as soon as possible" vs any other probability of life emerging), which yields the 95% credible interval with 13% at the bottom. Note that even before they updated on any evidence, they had already assumed a 95% credible interval with 2.5% at the bottom!
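
To put rough numbers on the prior point (a minimal sketch, with an illustrative lower bound of my own choosing, not from the paper): a uniform prior on [0, 1] gives only 0.1% mass to "life emerges with probability below 0.1%", whereas a log-uniform prior puts most of its mass there.

```python
import math

threshold = 1e-3  # "life emerges with probability < 0.1% per ~0.5B-year window"

# Uniform prior on [0, 1]: the mass below the threshold is just the threshold.
uniform_mass = threshold

# Log-uniform prior between an assumed lower bound and 1 (the bound is
# illustrative; the qualitative point holds for any very small choice).
lower = 1e-30
log_uniform_mass = (math.log(threshold) - math.log(lower)) / (math.log(1.0) - math.log(lower))

print(uniform_mass)      # 0.001: the uniform prior nearly rules this region out in advance
print(log_uniform_mass)  # 0.9:   the log-uniform prior puts most of its mass there
```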

As an aside, I do mostly agree that alien life is likely to be common outside our galaxy (or at least that we should assume that it is). However, this is because I'm sympathetic to another account of anthropics, which leads to large numbers of aliens almost regardless of our prior, as I explain here.

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-06T17:24:10.485Z · EA · GW

I actually think the negative exponential gives too little weight to later people, because I'm not certain that late people can't be influential. But if I had a person from the first 1e-89 of all people who've ever lived and a random person from the middle, I'd certainly say that the former was more likely to be one of the most influential people. They'd also be more likely to be one of the least influential people! Their position is just so special!

Maybe my prior would be like 30% to a uniform function, 40% to negative exponentials of various slopes, and 30% to other functions (e.g. the last person who ever lived seems more likely to be the most influential than a random person in the middle.)

Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential.

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-05T11:23:48.757Z · EA · GW

One way to frame this is that we do need extraordinarily strong evidence to update from thinking that we're almost certainly not the most influential time to thinking that we might plausibly be the most influential time. However, we don't  need extraordinarily strong evidence pointing towards us almost certainly being the most influential (that then "averages out" to thinking that we're plausibly the most influential). It's sufficient to get extraordinarily strong evidence that we are at a point in history which is plausibly the most influential. And if we condition on the future being long and that we aren't in a simulation (because that's probably when we have the most impact), we do in fact have extraordinarily strong evidence that we are very early in history, which is a point that's plausibly the most influential.

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-05T10:53:21.774Z · EA · GW

I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness.

Let the set H="the 1e10 (i.e. 10 billion) most influential people who will ever live"  and let E="the 1e11 (i.e. 100 billion) earliest people who will ever live". Assume that the future will contain 1e100 people. Let X be a randomly sampled person.

For our unconditional prior P(X in H), everyone agrees that uniform probability is appropriate, i.e., P(X in H) = 1e-90. (I.e. we're not giving up on the self-sampling assumption.)

However, for our belief over P(X in H | X in E), i.e. the probability that a randomly chosen early person is one of the most influential people, some people argue we should utilise e.g. an exponential function where earlier people are more likely to be influential (which could be called a prior over "X in H" based on how early X is). However, it seems like you're saying that we shouldn't assess P(X in H | X in E) directly from such a prior, but instead get it from Bayesian updates. So let's do that.

P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E) = P(X in E | X in H) * 1e-90 / 1e-89 = P(X in E | X in H) * 1e-1 = P(X in E | X in H) / 10

So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect more or less influentialness.
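
(For concreteness, here's the same arithmetic as a small sketch; it adds nothing beyond the division above, just makes the numbers explicit.)

```python
from fractions import Fraction

total_people = Fraction(10) ** 100  # assumed total number of people who will ever live
H = Fraction(10) ** 10              # the 1e10 most influential people
E = Fraction(10) ** 11              # the 1e11 earliest people

p_H = H / total_people  # P(X in H) = 1e-90
p_E = E / total_people  # P(X in E) = 1e-89

# P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E),
# and the ratio of the two tiny numbers is all that survives:
print(p_H / p_E)  # 1/10, so P(X in H | X in E) = P(X in E | X in H) / 10
```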

Also, the way that 1e-90 and 1e-89 are both extraordinarily unlikely, but divide out to becoming 1e-1, illustrates Buck's point:

if you condition on us being at an early time in human history (which is an extremely strong condition, because it has incredibly low prior probability), it’s not that surprising for us to find ourselves at a hingey time.

Comment by Lukas_Finnveden on Getting money out of politics and into charity · 2020-10-07T06:26:26.565Z · EA · GW

Another relevant post is Paul Christiano's Repledge++, which suggests some nice variations. (It might still be worth going with something simple to ease communication, but it seems good to consider options and be aware of concerns.)

As one potential problem with the basic idea, it notes that

I'm not donating to politics, so wouldn't use it.

isn't necessarily true, because if you thought that your money would be matched with high probability, you could remove money from the other campaign at no cost to your favorite charity. This is bad, because it gives people on the other side less incentive to donate to the scheme, because they might just match people who otherwise wouldn't have donated to campaigns.

Comment by Lukas_Finnveden on Getting money out of politics and into charity · 2020-10-07T06:03:11.418Z · EA · GW

We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

Both links go to the same felicifia page. I suspect you're referring to the moral trade paper: http://www.amirrorclear.net/files/moral-trade.pdf

Comment by Lukas_Finnveden on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T17:57:10.172Z · EA · GW

Givewell estimates that they directed or influenced about 161 million dollars in 2018. 64 million came from Good Ventures grants. Good Ventures is the philanthropic foundation founded and funded by Dustin and Cari. It seems like the 161 million directed by Give Well represents a comfortable majority of total 'EA' donation.

If you want to count OpenPhil's donations as EA donations, that majority isn't so comfortable. In 2018, OpenPhil recommended a bit less than 120 million (excluding Good Ventures' donations to GiveWell charities), of which almost all came from Good Ventures, and they recommended more in both 2017 and 2019. This is a great source on OpenPhil's funding.

Comment by Lukas_Finnveden on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-09-22T13:36:11.482Z · EA · GW

Thanks, that's helpful.

The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you're unsure about cause prior or because the roles you're aiming at require wide skillsets), the less frequently changing roles makes sense.

Is this a typo? I expect uncertainty about cause prio and requirements of wide skillsets to favor less narrow career capital (and increased benefits of changing roles), not narrower career capital.

Comment by Lukas_Finnveden on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-09-22T11:33:41.642Z · EA · GW

Hi Markus! I like the list of unusual views.

I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.

I would've expected you to cite the threshold for specialisation as longer than a year; as stated, I think most EAs would agree with the last sentence. Do you think that the gains from specialisation keep accumulating after a year, or do you think that someone switching roles every three years will achieve at least half as much as someone who keeps working in the same role? (This might also depend on how narrowly you define a "role".)

Comment by Lukas_Finnveden on Space governance is important, tractable and neglected · 2020-07-10T15:34:45.485Z · EA · GW

Why is that? I don't know much about the area, but my impression is that we currently don't know what kind of space governance would be good from an EA perspective, so we can't advocate for any specific improvement. Advocating for more generic research into space governance would probably be net-positive, but it seems a lot less leveraged than having EAs look into the area, since I expect longtermists to have different priorities and pay attention to different things (e.g. that laws should be robust to vastly improved technology, and that colonization of other solar systems matters more than asteroid mining despite being further away in time).

Comment by Lukas_Finnveden on saulius's Shortform · 2020-05-07T09:25:30.238Z · EA · GW

If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)

If you've put the images in a Google Doc, and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copy-paste the images along with the text.

I'm not sure whether I should expect Google or Imgur to preserve their image links for longer.

Comment by Lukas_Finnveden on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-29T19:15:01.272Z · EA · GW

Since then, the related paper Cheating Death in Damascus has apparently been accepted by The Journal of Philosophy, though it doesn't seem to be published yet.

Comment by Lukas_Finnveden on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-09T12:45:14.193Z · EA · GW

Good job on completing the rebranding! Do you have an opinion on whether CLR should be pronounced as "see ell are" or as "clear"?

Comment by Lukas_Finnveden on Insomnia with an EA lens: Bigger than malaria? · 2020-03-07T10:19:07.185Z · EA · GW

(Nearly) every insomniac I’ve spoken to knows multiple others

Just want to highlight a potential selection effect: If these people spontaneously tell you that they're insomniacs, they're the type of people who will tell other people about their insomnia, and thus get to know multiple others. There might also be silent insomniacs, who don't tell people they're insomniacs and don't know any others. You're less likely to speak with those, so it would be hard to tell how common they are.

Comment by Lukas_Finnveden on Thoughts on doing good through non-standard EA career pathways · 2020-01-12T21:22:34.330Z · EA · GW

Owen speaks about that in his 80k interview.

Comment by Lukas_Finnveden on 8 things I believe about climate change · 2019-12-28T19:52:24.657Z · EA · GW
Climate change by itself should not be considered a global catastrophic risk (>10% chance of causing >10% of human mortality)

I'm not sure if any natural class of events could be considered global catastrophic risks under this definition, except possibly all kinds of wars and AI. It seems pretty weird to not classify e.g. asteroids or nuclear war as global catastrophic risks, just because they're relatively unlikely. Or is the 10% supposed to mean that there's a 10% probability of >10% of humans dying conditioned on some event in the event class happening? If so, this seems unfair to climate change, since it's so much more likely than the other risks (indeed, it's already happening). Under this definition, I think we could call extreme climate change a global catastrophic risk, for some non-ridiculous definition of extreme.

Comment by Lukas_Finnveden on 8 things I believe about climate change · 2019-12-28T13:34:19.406Z · EA · GW
It’s very difficult to communicate to someone that you think their life’s work is misguided

Just emphasizing the value of prudence and nuance, I think that this^ is a bad and possibly false way to formulate things. Being the "marginal best thing to work on for most EA people with flexible career capital" is a high bar to clear, one that most people are not aiming for, and work to prevent climate change still seems like a good thing to do if the counterfactual is to do nothing. I'd only be tempted to call work on climate change "misguided" if the person in question believes that the risks from climate change are significantly bigger than they in fact are, and wouldn't be working on climate change if they knew better. While this is true for a lot of people, I (perhaps naively) think that people who've spent their life fighting climate change know a bit more. And indeed, someone who has spent their life fighting climate change probably has career capital that's pretty specialized towards that, so it might be correct for them to keep working on it.

I'm still happy to inform people (with extreme prudence, as noted) that other causes might be better, but I think that "X is super important, possibly even more important than Y" is a better way to do this than "work on Y is misguided, so maybe you want to check out X instead".

Comment by Lukas_Finnveden on JP's Shortform · 2019-10-11T10:06:15.219Z · EA · GW

It works fairly well right now, with the main complaints (images, tables) being limitations of our current editor.

Copying images from public Gdocs to the non-markdown editor works fine.

Comment by Lukas_Finnveden on Competition is a sign of neglect in important causes with long time horizons for impact. · 2019-09-15T13:36:20.823Z · EA · GW

It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why.

If a field is bottlenecked on mentors, it has too few mentors per applicant, or put differently, more applicants than the mentors can accept. Assuming that each applicant needs some fixed amount of time with a mentor before becoming senior themselves, increasing the size of the applicant-pool doesn't increase the number of future senior people, because the present mentors won't be able to accept more people just because the applicant-pool is bigger. (There's a toy calculation of this after the caveats below.)

Caveats:

  • More people in the applicant-pool may lead to future senior people being better (because the best people in a larger pool are probably better).
  • It's not actually true that a fixed amount of mentor-input makes someone senior. With a larger applicant pool, you might be able to select for people who require less mentor-input, or who have a larger probability of staying in the field, which will translate to more future senior people (but still significantly fewer than in applicant-bottlenecked fields).
  • My third point above: some people might be able to circumvent applying to the mentor-constrained positions altogether, and still become senior.
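
Here's that toy calculation (my own illustration, with made-up numbers): when mentors are the binding constraint, doubling the applicant pool doesn't change the number of future senior people, whereas in an applicant-bottlenecked field it does.

```python
def future_seniors(applicants: int, mentors: int, capacity_per_mentor: int = 2) -> int:
    """Applicants who can get the fixed amount of mentoring needed to become senior."""
    return min(applicants, mentors * capacity_per_mentor)

# Mentor-bottlenecked field: growing the applicant pool doesn't add future seniors.
print(future_seniors(applicants=50, mentors=5))   # 10
print(future_seniors(applicants=100, mentors=5))  # still 10

# Applicant-bottlenecked field: growing the pool does add future seniors.
print(future_seniors(applicants=5, mentors=20))   # 5
print(future_seniors(applicants=10, mentors=20))  # 10
```
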
Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T21:07:49.813Z · EA · GW

Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them are that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.

Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T17:34:06.886Z · EA · GW

Ok, I see.

people seem to put credence in it even before Will’s argument.

This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure to not update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution is that nobody ever had any reason to simulate it.) Why would simulations of early humans be particularly interesting? I'd guess that this bottoms out in them having disproportionately much influence over the universe relative to how cheap they are to simulate, which is very close to the argument that Will is making.

Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T08:39:58.062Z · EA · GW

Not necessarily.

P(simulation | seems like HOH) = P(seems like HOH | simulation)*P(simulation) / (P(seems like HOH | simulation)*P(simulation) + P(seems like HOH | not simulation)*P(not simulation))

Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HOH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HOH should make us assign a lot more than 50% to being in a simulation, which is a stronger claim than HOH just being strong evidence for us being in a simulation.
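
A quick numeric sketch of this (illustrative numbers of my own, only to show the shape of the calculation): with a 100:1 likelihood ratio in favour of "simulation", a low prior still leaves the posterior far below 50%, while a substantial prior pushes it close to 1.

```python
def posterior_sim(prior_sim: float, p_hoh_given_sim: float, p_hoh_given_not: float) -> float:
    """P(simulation | seems like HoH) via Bayes' rule."""
    joint_sim = p_hoh_given_sim * prior_sim
    joint_not = p_hoh_given_not * (1.0 - prior_sim)
    return joint_sim / (joint_sim + joint_not)

# Same 100:1 likelihood ratio in both cases; only the prior differs.
print(posterior_sim(prior_sim=1e-4, p_hoh_given_sim=0.1, p_hoh_given_not=0.001))  # ~0.01
print(posterior_sim(prior_sim=0.3,  p_hoh_given_sim=0.1, p_hoh_given_not=0.001))  # ~0.98
```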

Comment by Lukas_Finnveden on Competition is a sign of neglect in important causes with long time horizons for impact. · 2019-09-01T13:49:15.559Z · EA · GW

It's certainly true that fields bottlenecked on mentors could make use of more mentors, right now. If you're already skilled in the area, you can therefore have very high impact by joining/staying in the field.

However, when young people are considering whether they should join in order to become mentors, as you suggest, they should consider whether the field will be bottlenecked on mentors at the time when they would become one, in 10 years' time or so. Since there are lots of junior applicants right now, the seniority bottleneck will presumably be smaller then.

Moreover, insofar as the present lack of mentors is the main bottleneck preventing junior applicants from eventually becoming senior, adding an extra person to the pool of applicants (yourself) will create fewer counterfactual future mentors than if you were in a field that was less mentorship-constrained. (This doesn't mean it isn't worth doing, though. You adding yourself to the pool will still increase its value.)

It also implies that it can be extra valuable to move into the field if you're able to learn relevant skills without making use of present mentors (e.g. by being in a good and relevant PhD-program, or by doing focused studying that few others are doing).

Comment by Lukas_Finnveden on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-03T00:50:19.475Z · EA · GW

The present lesswrong link doesn't work for me. This is the correct one: https://www.lesswrong.com/posts/AG6PAqsN5sjQHmKfm/conversation-on-forecasting-with-vaniver-and-ozzie-gooen

Comment by Lukas_Finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:32:15.801Z · EA · GW
Images can't be added to comments; is that what you were trying to find a workaround for?

It's possible to add images to comments by selecting and copying them from anywhere public (note that it doesn't work if you right click and choose 'copy image'). In this thread, I do it in this comment.

I do see that I can't add them manually by selecting text, though. I wouldn't expect it to be too difficult to add that possibility, given that it's already possible in another way?

Comment by Lukas_Finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:25:17.173Z · EA · GW

With regard to images, I get flawless behaviour when I copy-paste from Google Docs. Somehow, the images automatically get converted, and link to the images hosted with Google (in the editor they're only visible as small cameras). Maybe you can get the same behaviour by making your docs public?

Actually, I'll test copying an image from a Google Doc into this comment: (edit: seems to be working!)

Comment by Lukas_Finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:11:06.197Z · EA · GW

Copying all relevant information from the LessWrong FAQ to an EA Forum FAQ would be a good start. The problem of how to make its existence public knowledge remains, but that's partly solved automatically by people mentioning/linking to it, and it showing up in Google.

Comment by Lukas_Finnveden on I find this forum increasingly difficult to navigate · 2019-07-05T20:03:22.404Z · EA · GW

There's a section on writing in the LessWrong FAQ (named Posting & Commenting). If any information is missing from there, you can suggest adding it in the comments.

Of course, even given that such instructions exist somewhere, it's important to make sure that they're findable. Not sure what the best way to do that is.

Comment by Lukas_Finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-25T12:37:15.483Z · EA · GW

I'm by no means schooled in academic philosophy, so I could also be wrong about this.

I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, LessWrongian 'we should keep all the complexities of human value around'-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories' Wikipedia pages name them ethical theories.) When I think about meta-ethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, as cole_haus mentions.

My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don't, which would be an ethical disagreement. The borderlines aren't very sharp, though. If HLI had asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made a metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn't hedonistic utilitarianism.

(I saw you quoting Nate's post in another thread. I think you could say that it makes a meta-ethical argument that it's possible to care about things outside yourself, but that it doesn't make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people's experiences.)

Comment by Lukas_Finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-23T21:10:06.204Z · EA · GW
For whatever it's worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.

Might just be a nitpick, but isn't this an ethical intuition, rather than a metaethical one?

(I remember hearing other people use "metaethics" in cases where I thought they were talking about object level ethics, as well, so I'm trying to understand whether there's a reason behind this or not.)

Comment by Lukas_Finnveden on Announcing the launch of the Happier Lives Institute · 2019-06-23T20:58:07.514Z · EA · GW

Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn't necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.

(I don't trust the article's preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)

Comment by Lukas_Finnveden on [deleted post] 2019-06-05T21:21:30.582Z
We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.

If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one. Also, if the only effect of disruption were to re-randomize the world order, then the only thing you'd need for disruption to be positive is for the current state to be worse than the average civilisation from the distribution. Maybe this is what you mean by "particularly bad state", but intuitively, I interpret that as more like the bottom 15%.

There are certainly arguments to make for our world being better than average. But I do think that you actually have to make those arguments, and that without them, this abstract model won't tell you if disruption is good or bad.

Comment by Lukas_Finnveden on How to use the Forum · 2019-05-18T18:30:38.996Z · EA · GW

If you go to "Edit account", there's a checkbox that says "Activate markdown editor". If you un-check that one (I would've expected it to be unchecked by default, but maybe it isn't), you get formatting options just by selecting your text.

Comment by Lukas_Finnveden on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-10T20:34:48.903Z · EA · GW

Although psychedelics are plausibly good from a short-termist view, I think the argument from the long-termist view is quite weak. Insofar as I understand it, psychedelics would improve the long term by

1. Making EAs or other well-intentioned people more capable.

2. Making people more well-intentioned. I interpret this as either causing them to join/stay in the EA community, or causing capable people to become altruistically motivated (in a consequentialist fashion) without the EA community.

Regarding (1), I could see a case for privately encouraging well-intentioned people to use psychedelics, if you believe that psychedelics generally make people more capable. However, pushing for new legislation seems like an exceedingly inefficient way to go about this. Rationality interventions are unique in that they are quite targeted - they identify well-intentioned people and give them the techniques that they need. Pushing for new psychedelic legislation, however, could only help by making the entire population more capable, of which well-intentioned people are a much smaller subset. I don't know exactly how hard it is to change legislation, but I'd be surprised if it was worth doing solely due to the effect on EAs and other aligned people. New research suffers from a similar problem: good medical research is expensive, so you probably want to have a pretty specific idea about how it benefits EAs before you invest a lot in it.

Regarding (2), I'd be similarly surprised if
campaigning for new legislation -> more people use psychedelics -> more people become altruistically motivated -> more people join the EA community
was a better way to get people into EA than just directly investing in community building.

For both (1) and (2), these conclusions might change if you cared less about EAs in particular, and thought that the future would be significantly better if the average person was somewhat more altruistic or somewhat more capable. I could be interested in hearing such a case. This doesn't seem very robust to cluelessness, though, given the uncertainty of how psychedelics affect people, and the uncertainty about how increasing general capabilities affects the long term.

Comment by Lukas_Finnveden on Why we should be less productive. · 2019-05-09T22:58:19.728Z · EA · GW
Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don't want to hear, but maybe they need.

I don't think this position is unpopular in the EA community. "You have more than one goal and that's fine" got lots of upvotes, and my impression is that there's a general consensus that breaks are important and that burnout is a real risk (even though people might not always act according to that consensus).

I'd guess that it's getting downvotes because it doesn't really explain why we should be less productive: it just stakes out the position. In my opinion, it would have been more useful if it, for example, presented evidence showing that unproductive time is useful for living a fulfilled life, or presented an argument for why living a fulfilled life is important even for your altruistic values (which Jakob does more of in the comments).

Meta meta note: In general, it seems kind of uncooperative to assume that people need more of things they downvote.

Comment by Lukas_Finnveden on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-20T22:20:59.767Z · EA · GW
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give.

I think this is the article you're thinking about, where they're talking about the paths of marginal graduates. Note that it's from 2015 (though at least Will said he still thought it seemed right in 2016) and explicitly labeled with "Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question".

Comment by Lukas_Finnveden on Top Charity Ideas 2019 - Charity Entrepreneurship · 2019-04-17T18:07:05.439Z · EA · GW

Fantastic work! Nitpicks:

The last paragraph is repeated in the second to last paragraph.

However, the beneficial effects of the cash transfer may be much lower in a UCT

Is this supposed to say "lower in a CCT"?

Comment by Lukas_Finnveden on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-17T14:03:42.265Z · EA · GW

As a problem with the 'big list', you mention

2. For every reader, such a list would include many paths that they can’t take.

But it seems like there's another problem, closely related to this one: for every reader, the paths on such a list could have different orderings. If someone has a comparative advantage for a role, it doesn't necessarily mean that they can't aim for other roles, but it might mean that they should prefer the role that they have a comparative advantage for. This is especially true once we consider that most people don't know exactly what they could do and what they'd be good at - instead, their personal lists contain a bunch of things they could aim for, ordered according to different probabilities of having different amounts of impact.

In particular, I think it's a bad idea to take a 'big list', winnow away all the jobs that look impossible, and then aim for whatever is on top of the list. Instead, your personal list might overlap with others', but have a completely different ordering (yet hopefully contain a few items that other people haven't even considered, given that 80k can't evaluate all opportunities, like you say).

Comment by Lukas_Finnveden on The case for delaying solar geoengineering research · 2019-03-24T09:36:13.727Z · EA · GW
This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.

Hm, I thought one of the main worries was that major global powers wouldn't have to agree, since any country would be able to launch a geoengineering program on their own, changing the climate for the whole planet.

Do you think that global governance is good enough to disincentivize lone states from launching a program, purely from fear of punishment? Or would it be possible to somehow reverse the effects?

Actually, would you even need to be a state to launch a program like this? I'm not sure how cheap it could become, or if it'd be possible to launch in secret.