Posts

Truthful AI 2021-10-20T15:11:10.363Z
Quantifying anthropic effects on the Fermi paradox 2019-02-15T10:47:04.239Z

Comments

Comment by Lukas_Finnveden on We're Redwood Research, we do applied alignment research, AMA · 2021-10-08T20:24:57.795Z · EA · GW

Hm, could you expand on why collusion is one of the most salient ways in which "it’s possible to build systems that are performance-competitive and training-competitive, and do well on average on their training distribution" could fail?

Is the thought here that — if models can collude — then they can do badly on the training distribution in an unnoticeable way, because they're being checked by models that they can collude with?

Comment by Lukas_Finnveden on When pooling forecasts, use the geometric mean of odds · 2021-10-04T19:57:47.686Z · EA · GW

My answer is that we need to understand the resilience of the aggregated prediction to new information.

This seems roughly right to me. And in particular, I think this highlights the issue with the example of institutional failure. The problem with aggregating predictions to a single guess p of annual failure, and then using p to forecast, is that it assumes that the probability of failure in each year is independent from our perspective. But in fact, each year of no failure provides evidence that the risk of failure is low. And if the forecasters' estimates initially had a wide spread, then we're very sensitive to new information, and so we should update more on each passing year. This would lead to a high probability of failure in the first few years, but still a moderately high expected lifetime.
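To make this concrete, here's a minimal simulation sketch with made-up numbers (two forecasters with a wide spread of annual failure estimates), contrasting "pool to a single annual rate" with "keep the mixture and implicitly update each year":

```python
import numpy as np

# Two hypothetical forecasters with a wide spread of annual failure estimates.
p_forecasters = np.array([0.5, 0.01])
weights = np.array([0.5, 0.5])   # equal credence in each

# Option A: pool to a single annual rate via the geometric mean of odds.
odds = p_forecasters / (1 - p_forecasters)
pooled_odds = np.exp(np.average(np.log(odds), weights=weights))
p_pooled = pooled_odds / (1 + pooled_odds)

years = np.arange(0, 201)
survival_pooled = (1 - p_pooled) ** years

# Option B: keep the mixture, i.e. implicitly update on each year of survival.
# P(survive t years) = sum_i w_i * (1 - p_i)^t
survival_mixture = ((1 - p_forecasters[None, :]) ** years[:, None] * weights).sum(axis=1)

print(f"pooled annual failure probability: {p_pooled:.3f}")
print(f"P(fail in year 1):    pooled {1 - survival_pooled[1]:.2f}, mixture {1 - survival_mixture[1]:.2f}")
print(f"P(survive 100 years): pooled {survival_pooled[100]:.1e}, mixture {survival_mixture[100]:.2f}")
# The mixture model gives a much higher chance of failing early (the pessimist
# might be right) but also a much higher chance of surviving 100+ years (the
# optimist might be right), i.e. a far longer expected lifetime than the
# pooled-rate model implies.
```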

Comment by Lukas_Finnveden on EA Hangout Prisoners' Dilemma · 2021-09-29T20:55:20.095Z · EA · GW

According to Wikipedia, the $300 vs $100 payoffs are fine for a one-shot prisoner's dilemma. But an iterated prisoner's dilemma would require (defect against cooperate) + (cooperate against defect) < 2*(cooperate against cooperate), since the best outcome is supposed to be permanent mutual cooperation rather than alternating cooperation and defection.

However, the fact that this game gives the same $0 for both cooperate/defect and defect/defect means it nevertheless doesn't count as an ordinary prisoner's dilemma. Defecting against someone who defects needs to be strictly better than cooperating against a defector. In fact, in this case, every EA is likely to put some positive valuation on $300 going to both MIRI and AMF, so cooperating against a defector is actively preferred to defecting against a defector.
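For concreteness, here's a quick check of the standard conditions, using the payoffs as I read them from the post (treat the dollar amounts as illustrative):

```python
# Standard payoff labels: T = defect against a cooperator, R = mutual
# cooperation, P = mutual defection, S = cooperate against a defector.
# The dollar amounts are my reading of the game and are only illustrative.
T, R, P, S = 300, 100, 0, 0

is_one_shot_pd = T > R > P > S                       # needs strict inequalities
is_iterated_pd = is_one_shot_pd and (T + S < 2 * R)  # plus 2R > T + S

print(f"ordinary PD (T > R > P > S)?    {is_one_shot_pd}")   # False, since P == S
print(f"iterated-PD condition T+S < 2R? {T + S < 2 * R}")    # False, since 300 >= 200
# Because P == S, defecting against a defector isn't strictly better than
# cooperating against one; and because T + S >= 2R, alternating exploitation
# would beat permanent mutual cooperation even in the repeated game.
```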

Comment by Lukas_Finnveden on MichaelA's Shortform · 2021-09-26T17:55:14.969Z · EA · GW

Thanks, I appreciate having something to link to! My independent impression is that it would be even easier to link to and easier to find as a top-level post.

Comment by Lukas_Finnveden on Why AI alignment could be hard with modern deep learning · 2021-09-26T17:48:09.091Z · EA · GW

FWIW, I think my median future includes humanity solving AI alignment but messing up reflection/coordination in some way that makes us lose out on most possible value. I think this means that longtermists should think more about reflection/coordination-issues than we're currently doing. But technical AI alignment seems more tractable than reflection/coordination, so I think it's probably correct for more total effort to go towards alignment (which is the status quo).

I'm undecided about whether these reflection/coordination-issues are best framed as "AI risk" or not. They'll certainly interact a lot with AI, but we would face similar problems without AI.

Comment by Lukas_Finnveden on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:59:14.071Z · EA · GW

This was proposed and discussed 2 years ago here.

Comment by Lukas_Finnveden on What should "counterfactual donation" mean? · 2021-09-26T16:51:45.945Z · EA · GW

Say I offer to make a counterfactual donation of $50 to the Against Malaria Foundation (AMF) if you do a thing; which of the following are ok for me to do if you don't?

I think this misses out on an important question, which is "What would you have done with the money if you hadn't offered the counterfactual donation?"

If you were planning to donate to AMF, but then realised that you could make me do X by committing to burn the money if I don't do X, I think that's not ok, in two senses:

  • Firstly, if you just state that the donation is counterfactual, I would interpret it to mean that you would've done something like (9) if you hadn't offered the counterfactual donation.
  • Secondly, even if you thoroughly clarified and communicated what you were doing, I think we should have a norm against this kind of behavior.

In fact, to make nitpicky distinctions... If I didn't do X, I feel reluctant to say that it's "not ok" for you to donate to AMF. I want to say that it is ok for you to donate to AMF at that point, but that doing so is strong evidence that you were behaving dishonestly when initially promising a counterfactual donation, and that said offering was not ok.

Comment by Lukas_Finnveden on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-12T11:16:15.943Z · EA · GW

I'm confused about your FAQ's advice here. Some quotes from the longer example:

Let’s say that Alice is an expert in AI alignment, and Bob wants to get into the field, and trusts Alice’s judgment. Bob asks Alice what she thinks is most valuable to work on, and she replies, “probably robustness of neural networks”. [...]  I think Bob should instead spend some time thinking about how a solution to robustness would mean that AI risk has been meaningfully reduced. [...] It’s possible that after all this reflection, Bob concludes that impact regularization is more valuable than robustness. [...] It’s probably not the case that progress in robustness is 50x more valuable than progress in impact regularization, and so Bob should go with [impact regularization].

In the example, Bob "wants to get into the field", so this seems like an example of how junior people shouldn't defer to experts when picking research projects.

(Speculative differences: Maybe you think there's a huge difference between Alice giving a recommendation about an area vs a specific research project? Or maybe you think that working on impact regularization is the best Bob can do if he can't find a senior researcher to supervise him, but that if Alice could supervise his work on robustness, he should go with robustness? If so, maybe it's worth clarifying that in the FAQ.)

Edit: TBC, I interpret Toby Shevlane as saying ~you should probably work on whatever senior people find interesting; while Jan Kulveit says that "some young researchers actually have great ideas, should work on them, and avoid generally updating on research taste of most of the 'senior researchers'". The quoted FAQ example is consistent with going against Jan's strong claim, but I'm not sure it's consistent with agreeing with Toby's initial advice, and I interpret you as agreeing with that advice when writing e.g. "Defer to experts for ~3 years, then trust your intuitions".

Comment by Lukas_Finnveden on What is the EU AI Act and why should you care about it? · 2021-09-11T09:35:26.277Z · EA · GW

Thank you for this! Very useful.

The AI act creates institutions responsible for monitoring high-risk systems and the monitoring of AI progress as a whole.

In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?

Comment by Lukas_Finnveden on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-09T00:00:50.360Z · EA · GW

One reason to publish papers (specifically) about AI governance (specifically) is if you want to build an academic field working on AI governance. This is good both to get more brainpower and to get more people (who otherwise wouldn't read EA research) to take the research seriously, in the long term. C.f. the last section here https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact

Comment by Lukas_Finnveden on Moral dilemma · 2021-09-05T22:55:10.187Z · EA · GW

Sorry to hear you're struggling! As others have said, getting to a less tormented state of mind should likely be your top priority right now.

(I think this would be true even if  you only cared about understanding these issues and acting accordingly, because they're difficult enough that it's hard to make progress without being able to think clearly about them. I think that focusing on getting better would be your best bet even if there's some probability that you'll care less about these issues in the future, as you mentioned worrying about in a different comment, because decent mental health seems really important for grappling with these issues productively.)

But here's a concrete answer, for whenever you want to engage with it:

- Are there moral systems that avoid negligible probabilities and are consistent

Stochastic dominance as a general decision theory agrees with expected-utility maximization in most cases, but says that it's permissible to ignore sufficiently small probabilities. It's explained in a paper here and in a podcast here (at the 52:11 mark).

Comment by Lukas_Finnveden on Most research/advocacy charities are not scalable · 2021-08-08T09:41:11.291Z · EA · GW

With a bunch of unrealistic assumptions (like constant cost-effectiveness), the counterfactual impact should be (impact/resource - opportunity cost/resource) * resource.

If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.

If so, assuming that resource = $ in this case, this roughly translates to the heuristic "if the opportunity cost of money isn't that high (compared to your project), you should optimise for total impact without thinking much about the monetary costs".
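A toy worked example, with entirely made-up numbers, of how small the error from ignoring the opportunity cost gets:

```python
def counterfactual_impact(impact_per_dollar, opp_cost_per_dollar, dollars):
    # (impact/resource - opportunity cost/resource) * resource
    return (impact_per_dollar - opp_cost_per_dollar) * dollars

impact_per_dollar = 10.0      # made-up units of impact per $ for the project
opp_cost_per_dollar = 0.5     # impact per $ of the marginal alternative use
dollars = 1e6

exact = counterfactual_impact(impact_per_dollar, opp_cost_per_dollar, dollars)
approx = impact_per_dollar * dollars   # "cost-effectiveness * scale"

print(f"exact counterfactual impact: {exact:.2e}")   # 9.50e+06
print(f"cost-effectiveness * scale:  {approx:.2e}")  # 1.00e+07
# When impact/$ dwarfs the opportunity cost/$, the two differ by only ~5%,
# so optimising total impact without sweating the monetary cost is fine.
```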

Comment by Lukas_Finnveden on Most research/advocacy charities are not scalable · 2021-08-08T09:19:26.302Z · EA · GW

Based on vaguely remembered hearsay, my heuristic has been that the large AI  labs like DeepMind and OpenAI spend roughly as much on compute as they do on people, which would make for a ~2x increase in costs. Googling around doesn't immediately get me any great sources, although this page says "Cloud computing services are a major cost for OpenAI, which spent $7.9 million on cloud computing in the 2017 tax year, or about a quarter of its total functional expenses for that year".

I'd be curious to get a better estimate, if anyone knows anything relevant.

Comment by Lukas_Finnveden on Most research/advocacy charities are not scalable · 2021-08-08T09:11:08.675Z · EA · GW

There may be reasons why building such 100m+ projects are different both from many smaller  "hits based" funding of Open Phil projects (as a high chance of failure is unacceptable) and also different than the GiveWell-style interventions.

One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved

This makes it sound like CSET is a $100m+ project. Their OpenPhil grant was for $11m/year for 5 years, and Wikipedia says they got a couple of million from other sources, so my guess is they're currently spending something like $10m-$20m/year.

Comment by Lukas_Finnveden on Further thoughts on charter cities and effective altruism · 2021-07-21T15:45:39.203Z · EA · GW

This page has some statistics on OpenPhil's giving (though it is noted to be preliminary): https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy

Comment by Lukas_Finnveden on [Future Perfect] How to be a good ancestor · 2021-07-03T00:02:30.884Z · EA · GW

Sweden has a “Ministry of the Future,”

Unfortunately, this is now a thing of the past. It only lasted from 2014 to 2016. (Wikipedia on the minister post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation)

Comment by Lukas_Finnveden on What are some key numbers that (almost) every EA should know? · 2021-06-18T11:28:30.011Z · EA · GW

The last two should be 10^11 - 10^12 and 10^11, respectively?

Comment by Lukas_Finnveden on A ranked list of all EA-relevant (audio)books I've read · 2021-02-24T14:35:39.958Z · EA · GW

This has been discussed on LessWrong here: https://www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be

Strong opinions on both sides, with a majority of people saying they think about current karma levels occasionally but not always.

Comment by Lukas_Finnveden on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T21:29:55.093Z · EA · GW

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.

And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

Comment by Lukas_Finnveden on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T19:36:00.091Z · EA · GW

Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seems to be the focus of the book) holds these views. Even if he does, these views are not longtermism

I haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed by the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement could choose a definition that avoids mentioning the most objectionable parts of their ideology without changing their beliefs or actions (similar to the motte-and-bailey fallacy). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.

Comment by Lukas_Finnveden on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-23T23:02:20.625Z · EA · GW

As a toy example, say that f is some bounded sigmoid function, and my utility function is to maximize f(x); it's always going to be the case that f(x+1) > f(x), so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging

This seems right to me.

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

Yeah, I have no quibbles with this. FWIW, I personally didn't  interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.

(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk aversion.)
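For what it's worth, here's a minimal numeric sketch of the quoted toy example (my own parameter choices for the sigmoid and the mugger's offer):

```python
import math

def sigmoid_utility(x, scale=100.0):
    # bounded in (0, 1) and strictly increasing in x
    return 1.0 / (1.0 + math.exp(-x / scale))

# Option A: a sure gain of 100 units of whatever f is defined over.
sure_eu = sigmoid_utility(100)

# Option B: the mugger's offer -- probability 1e-10 of 1e30 units, else nothing.
mugger_eu = 1e-10 * sigmoid_utility(1e30) + (1 - 1e-10) * sigmoid_utility(0)

print(sigmoid_utility(101) > sigmoid_utility(100))  # True: scope sensitive
print(f"EU(sure thing):     {sure_eu:.4f}")         # ~0.73
print(f"EU(mugger's offer): {mugger_eu:.4f}")       # ~0.50
# Under linear utility the mugger's offer would be worth ~1e20; under the
# bounded sigmoid no outcome is ever worth more than 1, so it loses to the
# sure thing.
```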

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T17:16:24.462Z · EA · GW

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, up-skilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but it seems to gesture at something that there's more of in direct work.)

Comment by Lukas_Finnveden on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-16T13:12:53.203Z · EA · GW

Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc), or bring about fewer intuitively disvaluable aspects of individual lives

If this is the technical meaning of "in expectation", this brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to Pascal's mugging.

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T10:19:38.620Z · EA · GW

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you think you could've learned that some other way?

Also, in your case, skilling up in engineering turned out to be less important than updating on personal fit and philosophising. I'm curious if you think you would've updated as hard on your personal fit in a non-safety workplace, and if you think your off-work philosophy would've been similarly good?

(Of course, you could answer: yes there were many benefits from working in the safety team; but the benefits from working in other orgs – e.g. getting non-EA connections – are similarly large in expectation.)

Comment by Lukas_Finnveden on Lessons from my time in Effective Altruism · 2021-01-16T10:07:00.863Z · EA · GW

Great post!

EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA.

I can't immediately see why a lack of experience with political maneuvering would mean that we often waste prestigious people's time. Could you give an example? Is this just when an EA is talking to someone prestigious and asks a silly question? (e.g. "Why do you need a managing structure when you could just write up your goals and then ask each employee to maximize those goals?" or whatever)

Comment by Lukas_Finnveden on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-06T01:14:44.650Z · EA · GW

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuitions would kick in, and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to improve one person's life from ok to awesome, I imagine that most people would prefer to cure a billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuitions differ between this case and the repugnant conclusion, I claim that "The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future" is incorrect. The fact that the repugnant conclusion is about merely possible people clearly matters for people's intuitions in some way.

I agree that the repugnance can't be grounded by saying that merely possible people don't matter at all. But there are other possible mechanics that treat merely possible people differently from existing people and that can ground the repugnance. For example, the paper under discussion here!

Comment by Lukas_Finnveden on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T20:41:15.713Z · EA · GW

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, most people who oppose the repugnant conclusion instead favor egalitarian solutions, granting small benefits to many (though I haven't seen any data on this, so I'd be curious if you disagree!). Whereas when debating who to bring into existence, people who oppose the repugnant conclusion aren't just indifferent about what happens to these merely-possible people; they actively think that the happy, tiny population is better. 

So the tricky thing is that people intuitively support granting small benefits to many already existing people above large benefits to a few already existing people, but don't want to extend this to creating many barely-good lives above creating a few really good ones.

Comment by Lukas_Finnveden on The Fermi Paradox has not been dissolved · 2020-12-14T23:12:48.711Z · EA · GW

with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen

I haven't run the numbers, but I wouldn't be quite so dismissive. Intergalactic travel is probably possible, so with numbers as high as these, I would've expected us to encounter some early civilisation from another galaxy. So if these numbers were right, it'd be some evidence that intergalactic travel is impossible, or that something else strange is going on.

(Also, it would be an important consideration for whether we'll encounter aliens in the future, which has at least some cause prio implications.)

(But also, I don't buy the argument for these numbers, see my other comment.)

Comment by Lukas_Finnveden on The Fermi Paradox has not been dissolved · 2020-12-13T11:11:39.400Z · EA · GW

I hadn't seen the Lineweaver and Davis paper before, thanks for pointing it out! I'm sceptical of the methodology, though. They start out with a uniform prior between 0 and 1 of the probability that life emerges in a ~0.5B year time window. This is pretty much assuming their conclusion already, as it assigns <0.1% probability to life emerging with less than 0.1% probability (I much prefer log-uniform priors). The exact timing of abiogenesis is then used to get a very modest bayesian update (less than 2:1 in favor of "life always happens as soon as possible" vs any other probability of life emerging) which yields the 95% credible interval with 13% at the bottom. Note that even before they updated on any evidence, they had already assumed a 95% credible interval with 2.5% at the bottom!

As an aside, I do mostly agree that alien life is likely to be common outside our galaxy (or at least that we should assume that it is). However, this is because I'm sympathetic to another account of anthropics, which leads to large numbers of aliens almost regardless of our prior, as I explain here.
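Here's a crude sketch (my own reconstruction, not their actual calculation) of how much the choice of prior matters when the likelihood ratio from early abiogenesis is capped at 2:1:

```python
import numpy as np

# q = probability that life emerges on an Earth-like planet in a ~0.5B-year window.
q = np.logspace(-12, 0, 200_000)

def percentile(grid, weights, p):
    cdf = np.cumsum(weights)
    cdf = cdf / cdf[-1]
    return grid[np.searchsorted(cdf, p)]

uniform_prior = np.gradient(q)        # mass per grid point for a uniform density on [0, 1]
loguniform_prior = np.ones_like(q)    # mass per grid point for a uniform density in log(q)

# Deliberately weak evidence: at most a 2:1 likelihood ratio between q = 1 and q -> 0.
likelihood = (1 + q) / 2

for name, prior in [("uniform", uniform_prior), ("log-uniform", loguniform_prior)]:
    prior_low = percentile(q, prior, 0.025)
    post_low = percentile(q, prior * likelihood, 0.025)
    print(f"{name:12s} 2.5th percentile -- prior: {prior_low:.1e}, posterior: {post_low:.1e}")

# uniform:      prior ~2.5e-02, posterior ~3.7e-02
# log-uniform:  prior ~2.0e-12, posterior ~2.0e-12
# With evidence this weak, the bottom of the credible interval is set almost
# entirely by the prior -- which is the sense in which a uniform prior on q
# assumes the conclusion.
```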

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-06T17:24:10.485Z · EA · GW

I actually think the negative exponential gives too little weight to later people, because I'm not certain that late people can't be influential. But if I had a person from the first 1e-89 fraction of all people who will ever live and a random person from the middle, I'd certainly say that the former was more likely to be one of the most influential people. They'd also be more likely to be one of the least influential people! Their position is just so special!

Maybe my prior would be like 30% to a uniform function, 40% to negative exponentials of various slopes, and 30% to other functions (e.g. the last person who ever lived seems more likely to be the most influential than a random person in the middle.)

Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential.
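As a rough illustration with made-up numbers (the population size, the "early" cutoff, and the exponential half-lives below are all arbitrary):

```python
import numpy as np

N_total = 1e12   # total people across all of history (made up)
K_early = 1e9    # "early" = the first K people, i.e. the first 0.1% here

def p_early_uniform(K, N):
    # uniform prior over who the most influential person is
    return K / N

def p_early_negexp(K, halflife):
    # negative-exponential prior over the rank of the most influential
    # person, with the given half-life (in number of people)
    return 1 - np.exp(-np.log(2) / halflife * K)

p_exp = np.mean([p_early_negexp(K_early, h) for h in [1e8, 1e10, 1e12]])

mixture = (
    0.3 * p_early_uniform(K_early, N_total)
    + 0.4 * p_exp
    + 0.3 * 0.0   # pessimistic stand-in for "other functions" that favour later people
)
print(f"uniform prior only: {p_early_uniform(K_early, N_total):.0e}")  # 1e-03
print(f"mixture prior:      {mixture:.2f}")                            # ~0.14
# Even with 30% of the mixture assigning literally zero chance to early people,
# the probability that one of the earliest people is the most influential ever
# comes out around 0.14 rather than 0.001.
```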

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-05T11:23:48.757Z · EA · GW

One way to frame this is that we do need extraordinarily strong evidence to update from thinking that we're almost certainly not the most influential time to thinking that we might plausibly be the most influential time. However, we don't  need extraordinarily strong evidence pointing towards us almost certainly being the most influential (that then "averages out" to thinking that we're plausibly the most influential). It's sufficient to get extraordinarily strong evidence that we are at a point in history which is plausibly the most influential. And if we condition on the future being long and that we aren't in a simulation (because that's probably when we have the most impact), we do in fact have extraordinarily strong evidence that we are very early in history, which is a point that's plausibly the most influential.

Comment by Lukas_Finnveden on Thoughts on whether we're living at the most influential time in history · 2020-11-05T10:53:21.774Z · EA · GW

I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness.

Let the set H="the 1e10 (i.e. 10 billion) most influential people who will ever live"  and let E="the 1e11 (i.e. 100 billion) earliest people who will ever live". Assume that the future will contain 1e100 people. Let X be a randomly sampled person.

For our unconditional prior P(X in H), everyone agrees that uniform probability is appropriate, i.e., P(X in H) = 1e-90. (I.e. we're not giving up on the self-sampling assumption.)

However, for our belief over P(X in H | X in E), i.e. the probability that a randomly chosen early person is one of the most influential people, some people argue we should utilise e.g. an exponential function where earlier people are more likely to be influential (which could be called a prior over "X in H" based on how early X is). However, it seems like you're saying that we shouldn't assess P(X in H | X in E) directly from such a prior, but instead get it from bayesian updates. So let's do that.

P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E) = P(X in E | X in H) * 1e-90 / 1e-89 = P(X in E | X in H) * 1e-1 = P(X in E | X in H) / 10

So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect more or less influentialness.
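(Quick numeric sanity check of the algebra, with an arbitrary 50% thrown in for P(X in E | X in H) purely for illustration:)

```python
total_people = 1e100
p_H = 1e10 / total_people   # P(X in H) = 1e-90
p_E = 1e11 / total_people   # P(X in E) = 1e-89

ratio = p_H / p_E            # P(X in H) / P(X in E)
print(f"{ratio:.2f}")        # 0.10

# If, say, half of the most influential people ever are among the earliest 1e11
# (a made-up number), a randomly chosen early person is in H with probability:
p_E_given_H = 0.5
print(f"{p_E_given_H * ratio:.2f}")   # 0.05
```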

Also, the way that 1e-90 and 1e-89 are both extraordinarily unlikely, but divide out to becoming 1e-1, illustrates Buck's point:

if you condition on us being at an early time in human history (which is an extremely strong condition, because it has incredibly low prior probability), it’s not that surprising for us to find ourselves at a hingey time.

Comment by Lukas_Finnveden on Getting money out of politics and into charity · 2020-10-07T06:26:26.565Z · EA · GW

Another relevant post is Paul Christiano's Repledge++, which suggests some nice variations. (It might still be worth going with something simple to ease communication, but it seems good to consider options and be aware of concerns.)

As one potential problem with the basic idea, it notes that

I'm not donating to politics, so wouldn't use it.

isn't necessarily true: if you thought that your money would be matched with high probability, you could remove money from the other campaign at no cost to your favorite charity. This is bad because it gives people on the other side less incentive to donate to the scheme, since they might just be matching people who otherwise wouldn't have donated to any campaign.

Comment by Lukas_Finnveden on Getting money out of politics and into charity · 2020-10-07T06:03:11.418Z · EA · GW

We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

Both links go to the same Felicifia page. I suspect you're referring to the moral trade paper: http://www.amirrorclear.net/files/moral-trade.pdf

Comment by Lukas_Finnveden on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T17:57:10.172Z · EA · GW

Givewell estimates that they directed or influenced about 161 million dollars in 2018. 64 million came from Good Ventures grants. Good Ventures is the philanthropic foundation founded and funded by Dustin and Cari. It seems like the 161 million directed by Give Well represents a comfortable majority of total 'EA' donation.

If you want to count OpenPhil's donations as EA donations, that majority isn't so comfortable. In 2018, OpenPhil recommended a bit less than $120 million (excluding Good Ventures' donations to GiveWell charities), of which almost all came from Good Ventures, and they recommended more in both 2017 and 2019. This is a great source on OpenPhil's funding.

Comment by Lukas_Finnveden on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-09-22T13:36:11.482Z · EA · GW

Thanks, that's helpful.

The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you're unsure about cause prior or because the roles you're aiming at require wide skillsets), the less frequently changing roles makes sense.

Is this a typo? I expect uncertainty about cause prio and requirements of wide skillsets to favor less narrow career capital (and increased benefits of changing roles), not narrower career capital.

Comment by Lukas_Finnveden on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-09-22T11:33:41.642Z · EA · GW

Hi Markus! I like the list of unusual views.

I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.

I would've expected you to cite the threshold for specialisation as longer than a year; as stated, I think most EAs would agree with the last sentence. Do you think that the gains from specialisation keep accumulating after a year, or do you think that someone switching roles every three years will achieve at least half as much as someone who keeps working in the same role? (This might also depend on how narrowly you define a "role".)

Comment by Lukas_Finnveden on Space governance is important, tractable and neglected · 2020-07-10T15:34:45.485Z · EA · GW

Why is that? I don't know much about the area, but my impression is that we currently don't know what kind of space governance would be good from an EA perspective, so we can't advocate for any specific improvement. Advocating for more generic research into space governance would probably be net-positive, but it seems a lot less leveraged than having EAs look into the area, since I expect longtermists to have different priorities and pay attention to different things (e.g. that laws should be robust to vastly improved technology, and that colonization of other solar systems matters more than asteroid mining despite being further away in time).

Comment by Lukas_Finnveden on saulius's Shortform · 2020-05-07T09:25:30.238Z · EA · GW

If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)

If you've put the images in a Google Doc and made the doc public, then you've already uploaded the images to the internet, and you can link to them there. If you use the WYSIWYG editor, you can even copy-paste the images along with the text.

I'm not sure whether I should expect Google or Imgur to preserve their image links for longer.

Comment by Lukas_Finnveden on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-29T19:15:01.272Z · EA · GW

Since then, the related paper Cheating Death in Damascus has apparently been accepted by The Journal of Philosophy, though it doesn't seem to be published yet.

Comment by Lukas_Finnveden on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-09T12:45:14.193Z · EA · GW

Good job on completing the rebranding! Do you have an opinion on whether CLR should be pronounced as "see ell are" or as "clear"?

Comment by Lukas_Finnveden on Insomnia with an EA lens: Bigger than malaria? · 2020-03-07T10:19:07.185Z · EA · GW

(Nearly) every insomniac I’ve spoken to knows multiple others

Just want to highlight a potential selection effect: If these people spontaneously tell you that they're insomniacs, they're the type of people who will tell other people about their insomnia, and thus get to know multiple others. There might also be silent insomniacs, who don't tell people they're insomniacs and don't know any others. You're less likely to speak with those, so it would be hard to tell how common they are.

Comment by Lukas_Finnveden on Thoughts on doing good through non-standard EA career pathways · 2020-01-12T21:22:34.330Z · EA · GW

Owen speaks about that in his 80k interview.

Comment by Lukas_Finnveden on 8 things I believe about climate change · 2019-12-28T19:52:24.657Z · EA · GW
Climate change by itself should not be considered a global catastrophic risk (>10% chance of causing >10% of human mortality)

I'm not sure if any natural class of events could be considered global catastrophic risks under this definition, except possibly all kinds of wars and AI. It seems pretty weird to not classify e.g. asteroids or nuclear war as global catastrophic risks, just because they're relatively unlikely. Or is the 10% supposed to mean that there's a 10% probability of >10% of humans dying conditioned on some event in the event class happening? If so, this seems unfair to climate change, since it's so much more likely than the other risks (indeed, it's already happening). Under this definition, I think we could call extreme climate change a global catastrophic risk, for some non-ridiculous definition of extreme.

Comment by Lukas_Finnveden on 8 things I believe about climate change · 2019-12-28T13:34:19.406Z · EA · GW
It’s very difficult to communicate to someone that you think their life’s work is misguided

Just emphasizing the value of prudence and nuance: I think that this^ is a bad and possibly false way to formulate things. Being the "marginal best thing to work on for most EA people with flexible career capital" is a high bar to clear, one that most people are not aiming for, and work to prevent climate change still seems like a good thing to do if the counterfactual is to do nothing. I'd only be tempted to call work on climate change "misguided" if the person in question believes that the risks from climate change are significantly bigger than they in fact are, and wouldn't be working on climate change if they knew better. While this is true for a lot of people, I (perhaps naively) think that people who've spent their lives fighting climate change know a bit more. And indeed, someone who has spent their life fighting climate change probably has career capital that's pretty specialized towards that, so it might be correct for them to keep working on it.

I'm still happy to inform people (with extreme prudence, as noted) that other causes might be better, but I think that "X is super important, possibly even more important than Y" is a better way to do this than "work on Y is misguided, so maybe you want to check out X instead".

Comment by Lukas_Finnveden on JP's Shortform · 2019-10-11T10:06:15.219Z · EA · GW

It works fairly well right now, with the main complaints (images, tables) being limitations of our current editor.

Copying images from public Gdocs to the non-markdown editor works fine.

Comment by Lukas_Finnveden on Competition is a sign of neglect in important causes with long time horizons for impact. · 2019-09-15T13:36:20.823Z · EA · GW

It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why.

If a field is bottlenecked on mentors, it has too few mentors per applicant, or put differently, more applicants than the mentors can accept. Assuming that each applicant needs some fixed amount of time with a mentor before becoming senior themselves, increasing the size of the applicant pool doesn't increase the number of future senior people, because the present mentors can't accept more people just because the applicant pool is bigger.

Caveats:

  • More people in the applicant pool may lead to better future senior people (because the best people in a larger pool are probably better).
  • It's not actually true that a fixed amount of mentor input makes someone senior. With a larger applicant pool, you might be able to select for people who require less mentor input, or who have a larger probability of staying in the field, which will translate to more future senior people (but still significantly fewer than in applicant-bottlenecked fields).
  • My third point above: some people might be able to circumvent applying to the mentor-constrained positions altogether, and still become senior.
Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T21:07:49.813Z · EA · GW

Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them are that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.

Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T17:34:06.886Z · EA · GW

Ok, I see.

people seem to put credence in it even before Will’s argument.

This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure not to update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution is that nobody ever had any reason to simulate it.) Why would simulations of early humans be particularly interesting? I'd guess that this bottoms out in them having disproportionately much influence over the universe relative to how cheap they are to simulate, which is very close to the argument that Will is making.

Comment by Lukas_Finnveden on Are we living at the most influential time in history? · 2019-09-04T08:39:58.062Z · EA · GW

Not necessarily.

P(simulation | seems like HOH) = P(seems like HOH | simulation)*P(simulation) / (P(seems like HOH | simulation)*P(simulation) + P(seems like HOH | not simulation)*P(not simulation))

Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HOH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HOH should make us assign a lot more than 50% to being in a simulation, which is a stronger claim than HOH just being strong evidence for us being in a simulation.
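To illustrate with made-up numbers:

```python
def posterior_simulation(prior_sim, p_hoh_given_sim, p_hoh_given_not_sim):
    # Bayes' rule, as in the formula above
    num = p_hoh_given_sim * prior_sim
    return num / (num + p_hoh_given_not_sim * (1 - prior_sim))

# Suppose "seems like HoH" is 1000x more likely for simulated observers.
p_hoh_given_sim, p_hoh_given_not_sim = 1e-3, 1e-6

for prior in [1e-1, 1e-3, 1e-5]:
    post = posterior_simulation(prior, p_hoh_given_sim, p_hoh_given_not_sim)
    print(f"prior P(sim) = {prior:.0e}  ->  P(sim | seems like HoH) = {post:.3f}")
# prior 1e-01 -> 0.991;  prior 1e-03 -> 0.500;  prior 1e-05 -> 0.010
# So even a 1000:1 likelihood ratio only pushes the posterior past 50% if the
# prior on being in a simulation is at least ~1e-3.
```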