Announcing a contest: EA Criticism and Red Teaming 2022-06-01T18:58:55.510Z
Resource for criticisms and red teaming 2022-06-01T18:58:37.998Z
Space governance - problem profile 2022-05-08T17:16:15.570Z
Concave and convex altruism 2022-04-27T22:36:51.135Z
Pre-announcing a contest for critiques and red teaming 2022-03-25T11:52:32.174Z
EA Projects I'd Like to See 2022-03-13T18:12:37.126Z
Risks from Asteroids 2022-02-11T21:01:58.342Z
Two Podcast Opportunities 2021-12-29T14:03:32.359Z
[Podcast] Bryan Caplan on Causes of Poverty and the Case for Open Borders 2021-10-07T13:31:52.967Z
[Podcast] Ben Todd on Choosing a Career and Defining Longtermism 2021-10-04T10:15:20.127Z
[Podcast] Anders Sandberg on the Fermi Paradox, Transhumanism, and so much more 2021-10-01T13:34:41.079Z
[Podcast] Jeffrey Sachs on Sustainable Development 2021-09-28T09:30:09.462Z
Three podcast episodes on energy and climate change 2021-09-24T02:32:37.000Z
Major UN report discusses existential risk and future generations (summary) 2021-09-17T15:51:04.036Z
[Podcast] Thomas Moynihan on the History of Existential Risk 2021-03-22T11:07:50.871Z
[Podcast] Marcus Daniell on High Impact Athletes, Communicating EA, and the Purpose of Sport 2021-03-03T13:31:38.635Z
[Podcast] Luke Freeman on Giving What We Can and Community Building 2021-01-31T12:41:35.781Z
[Podcast] Simon Beard on Parfit, Climate Change, and Existential Risk 2021-01-28T19:47:19.377Z
Introduction to Longtermism 2021-01-27T16:01:11.566Z
Four podcast episodes on animal advocacy 2021-01-25T13:40:13.333Z
finm's Shortform 2021-01-21T20:36:36.398Z
Introduction to the Philosophy of Well-Being 2020-12-07T12:32:41.592Z
Scale-norming in the measurement of subjective well-being 2020-11-02T16:39:25.818Z
Review and Summary of 'Moral Uncertainty' 2020-10-02T15:41:02.591Z
Suggest a question for Bruce Friedrich of GFI 2020-09-05T17:55:45.589Z
Suggest a question for Peter Singer 2020-09-05T17:37:43.720Z
Eliciting donations through causal closeness 2020-08-10T17:06:44.113Z
Four EA podcast episodes 2020-07-24T16:49:40.998Z


Comment by finm on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T13:28:26.311Z · EA · GW

Enormous +1

Comment by finm on Should you still use the ITN framework? [Red Teaming Contest] · 2022-09-24T18:48:26.764Z · EA · GW

Ok, got it. I'm curious — how do you see people using ITN in practice? (If not for making and comparing estimates of utility per marginal dollar?)

Also this post may be relevant!

Comment by finm on Introduction to the Philosophy of Well-Being · 2022-09-24T18:40:02.579Z · EA · GW

That's a good point. It is the case that preferences can be about an indefinite number of things. But I suppose there is still a sense in which a preference satisfaction account is monistic, namely in essentially valuing only the satisfaction of preferences (whatever they are about); and there is no equivalent sense in which objective list theories (with more than one item) are monistic. Also note that objective list theories can contain something like the satisfaction of preferences, and as such can be at least as complex and ecumenical as preference satisfaction views. 

Comment by finm on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-09-22T18:31:59.397Z · EA · GW

Thanks, this is a good post. A half-baked thought about a related but (I think) distinct reason for this phenomenon: I wonder if we tend to (re)define the scale of problems such that they are mostly unsolved at present (but also not so vast that we obviously couldn't make a dent). For instance, it's not natural to think of the problem of 'eradicating global undernourishment' as more than 90% solved, even though fewer than 10% of people in the world are undernourished. As long as problems are (re)defined in this way to be smaller in absolute terms, tractability is going to (appear to) proportionally increase, as a countervailing factor to diminishing returns from extra investment of resources. A nice feature of ITN is that (re)defining the scale of a problem so that it is always mostly unsolved at present doesn't affect the bottom line of utility per marginal dollar, because (utility / % of problem solved) increases as (% of problem solved / marginal dollar) decreases. To the extent this is a real phenomenon, it underscores the importance of not reading too much into direct comparisons of tractability across causes.
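
A toy numerical sketch of that invariance (all numbers here are invented for illustration):

```python
import math

# Toy numbers (all hypothetical) for the invariance claimed above:
# (re)defining a problem's scale changes each factor, but not their product.

utility_per_person = 1.0          # units of good per person helped
people_helped_per_dollar = 0.001  # people helped per marginal dollar

def utility_per_marginal_dollar(scale_in_people):
    # utility / % of problem solved: grows with the (re)defined scale
    utility_per_pct = utility_per_person * scale_in_people / 100
    # % of problem solved / marginal dollar: shrinks with the scale
    pct_per_dollar = 100 * people_helped_per_dollar / scale_in_people
    return utility_per_pct * pct_per_dollar

# Framing the same work as a big problem vs. a smaller redefined one:
big = utility_per_marginal_dollar(800e6)   # e.g. "end undernourishment everywhere"
small = utility_per_marginal_dollar(80e6)  # e.g. "help those still undernourished"
assert math.isclose(big, small)  # bottom line is unchanged
```

The scale term cancels algebraically, which is why the redefinition only moves value between the two factors.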

Comment by finm on Samotsvety's AI risk forecasts · 2022-09-15T16:33:51.565Z · EA · GW

I think it would be very valuable if more reports of this kind were citable in contexts where people are sensitive to signs of credibility and prestige. In other words, there are contexts where, if this existed as a report on SSRN or even arXiv, or on the website of an established institution, it could be cited and would be valuable as such. Currently I don't think it could be cited (or taken seriously if cited). So if there are low-cost ways of publishing this or similar reports in a more polished way, I think that would be great.

Caveats: (i) maybe you have done this and I missed it; (ii) this comment isn't really specific to this post, but it's been on my mind and this is the most recent post where it's applicable; and (iii) on balance it does nonetheless seem likely that the work required to turn this into a 'polished' report means doing so is not (close to) worthwhile.

That said: this is an excellent post and I'm very grateful for these forecasts.

Comment by finm on Population Ethics Without Axiology: A Framework · 2022-09-13T18:15:12.749Z · EA · GW

Thanks for writing this — I'm curious about approaches like this, and your post felt unusually comprehensive. I also don't yet feel like I could faithfully represent your view to someone else, possibly because I read this fairly quickly.

Some scattered thoughts / questions below, written in a rush. I expect some or many of them are fairly confused! NNTR.

  • On this framework, on what grounds can someone not "defensibly ignore" another's complaint? Am I right in thinking this is because ignoring some complaints means frustrating others' goals or preferences, and not frustrating others' goals or preferences is indefensible, as long as we care about getting along/cooperating at all (minimal morality)?
  • You say: "The exact reach of minimal morality is fuzzy/under-defined. How much is entailed by 'don't be a jerk'?" This seems important. For instance, you might see 'drowning child' framings as (compelling) efforts to move charitable giving within the purview of "you're a jerk if you don't do this when you comfortably could." Especially given the size of the stakes, could you imagine certain longtermist causes like "protecting future generations" similarly being framed as a component of minimal morality?
    • One speculative way you could do this: you described 'minimal morality' as “contractualist” or “cooperation-focused” in spirit. Certainly some acts seem wrong because they just massively undermine the potential for many people living at the same time with many different goals to cooperate on whatever their goals are. But maybe there are some ways in which we collaborate/cooperate/make contracts across (large stretches of) time. Maybe this could ground obligations to future people in minimal morality terms.
  • I understand the difference in emphasis between saying that the moral significance of people's well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people's well-being (or something to that effect). But I'm curious what this means in a decision-relevant sense.
    • Here's an analogy: my daily walk isn't important because it increases the counter on my pedometer; rather, the counter matters because it says something about how much I've walked (and walking is the thing I really care about). To see this, consider that intervening on the counter without actually walking does not matter at all.
    • But unlike this analogy, fans of axiology might say that "the value of a state of affairs" is not a measure of what matters (actual people and their well-being) that can be manipulated independently of those things; rather it is defined in terms of what you say actually matters, so there is no substantial disagreement beyond one of emphasis (this is why I don't think I'm on board with 'further thought' complaints against aggregative consequentialism). Curious what I'm missing here, though I realise this is maybe also a distraction.
  • I found the "court hearing analogy" and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it's not clear how it makes sense in light of the non-identity problem). In particular your tentative solution of talking about the interests of 'interest groups' seems like it's kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don't literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can't compare across individuals here, so it's not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
  • Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful "better than" relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say "sure, some comparisons are clear, but others are vague or subjective" seem complicated. Do you just need to opt out of the entire game of "some states of affairs are better than other states of affairs (discontinuous with our own world)"? Curious how you frame this in your own mind.
  • I had an overall sense that you are both explaining the broad themes of an alternative to population ethics grounded in axiology, and then building your own richer view on top of that (with the court hearing analogy, distinction between minimal and ambitious morality, etc.), such that your own view is a plausible instance of this broad family of alternatives, but doesn't obviously follow from the original motivation for an alternative? Is that roughly right?
  • I also had a sense that you could have written a similar post just focused on simpler kinds of aggregative consequentialism (maybe you have in other posts, afraid I haven't read them all); in some sense you picked an especially ambitious challenge in (i) developing a perspective on ethics that can be applied broadly; and then (ii) applying it to an especially complex part of ethics. So double props I guess!

Comment by finm on What happens on the average day? · 2022-09-10T22:48:58.875Z · EA · GW

Thanks for writing this Rose, I love it.

Small note: my (not fully confident) understanding is that a typical day still does not involve a launch to orbit. My cached number is something like 2 or 3 launches / week in the world; or ~100–150 days / year with a launch. This is the best cite I can find. Launches often bring multiple 'objects' (satellites) into orbit, which is why it can be true that the average number of objects launched into space each day can exceed 1. So maybe the claim that "humans launch 5 objects into space" is somewhat misleading, despite being true on average. (This is ignorable pedantry!)

Comment by finm on Should you still use the ITN framework? [Red Teaming Contest] · 2022-09-10T16:51:11.085Z · EA · GW

Thanks for writing this! What I took from it (with some of my own thoughts added):

The ITN framework is a way of breaking down 'good done per marginal dollar' into three components: importance, tractability, and neglectedness.

As such, ITN is one way of estimating utility per marginal dollar. But you might sometimes prefer other ways to break it down, because:

  • Sometimes the units for I, T, or N are ambiguous, and that can lead to unit inconsistencies within the same argument, e.g. by equivocating between "effort" and "money". These inconsistencies can mislead.
  • The neat factorisation might blind us to the fact that the meaning of 'good done' is underspecified, so it could lead us into thinking it is easier or more straightforward than it actually is to compare across disparate causes. Having more specific units for 'good done' can make it clearer when you are comparing apples and oranges.
  • ITN invites marginal thinking (you're being asked to estimate derivatives), but sometimes marginal thinking can mislead, when 'good done' is concave with resources.
  • Maybe most important of all: sometimes there are just much clearer/neater ways to factor the problem, which better carves it at its joints. Let's not constrain ourselves to one factorisation at the cost of more natural ones!
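
For reference, the three-factor breakdown discussed above is usually written out in the standard 80,000 Hours style, with all quantities evaluated at the margin:

```latex
\underbrace{\frac{\text{good done}}{\text{extra \$}}}_{\text{cost-effectiveness}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
```

The intermediate units telescope, which is exactly why ambiguity about what counts as "resources" or "% solved" can hide unit inconsistencies without changing the left-hand side.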

I should add that I find the "Fermi estimates vs ITN" framing potentially misleading. Maybe "ITN isn't the only way to do Fermi estimates of impact" is a clearer framing?

Anyway, curious if this all lines up with what you had in mind.

Comment by finm on Evaluation of Longtermist Institutional Reform · 2022-09-10T14:49:37.758Z · EA · GW

Thanks Dwarkesh, really enjoyed this.

This section stood out to me:

Instead, task a specific, identifiable agency with enforcing posterity impact statements. If their judgements are unreasonable, contradictory, or inconsistent, then there is a specific agency head that can be fired and replaced instead of a vast and unmanageable judiciary.

I've noticed this distinction become relevant a few times now: between wide, department-spanning regulation / initiatives on one hand; and focused offices / people / agencies / departments with a narrow, specific remit on the other. I have in mind that the 'wide' category involves checking existing plans for compliance with some desiderata, and stopping or modifying them if they don't comply; while the 'focused' category involves figuring out how to proactively achieve some goal, sometimes by building something new in the world.

Examples of the 'wide' category are NEPA (and other laws / regulation where basically anyone can sue); or new impact assessments required for a wide range of projects, such as the 'future generations impact assessment' proposal from the Wellbeing of Future Generations Bill (page 7 of this PDF).

Examples of the 'focused' category are the Office of Technology Assessment, the Spaceguard Survey Report, or something like the American Pandemic Preparedness Plan (even without the funding it deserves).

I think my examples show a bias towards the 'focused and proactive' category, but the 'wide regulation' category is obviously sometimes very useful, even necessary. Maybe one thought is that concrete projects should often precede wide regulation, and wide regulation often does best when it's specific and legible (e.g. requiring that a specific safety-promoting technology is installed in new builds). We don't mind regulation that requires smoke alarms and sprinklers, because they work and they are worth the money. It's possible to imagine focused projects to drive down costs of e.g. sequencing and sterilisation tech, and then maybe following up with regulation which requires specific tech be installed to clear standards, enforced by a specific agency.

Comment by finm on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-25T21:17:36.629Z · EA · GW

Thanks very much for writing this — I'm inclined to agree that results from the happiness literature are often surprising and underrated for finding promising neartermist interventions and thinking about the value of economic growth. I also enjoyed hearing this talk in person!

The "aren't people's scales adjusting over time?" story ('scale norming') is most compelling to me, and I think I'm less sure that we can rule it out. For instance — if I'm reading you right, you suggest that one reason to be skeptical that people are adjusting their scales over time is that people mostly agree on which adjectives like "good" correspond with which numerical scores of wellbeing. This doesn't strike me as strong evidence that people are not scale norming, since I wouldn't be surprised if people adjust the rough meaning of adjectives roughly in line with numbers.

If people thought this task was meaningless, they’d answer at random, and the lines would be flat.

I don't see a dichotomy between "people use the same scales across time and context for both numbers and adjectives" and "people view this task as meaningless".

You also suggest a story about what people are doing when they come up with SWB scores, which if true leaves little room for scale norming/adjustment. And since (again, if I'm reading you right) this story seems independently plausible, we have an independently plausible reason to be skeptical that scale norming is occurring. Here's the story:

the way we intuitively use 0 to 10 scales is by taking 10 to be the highest realistic level (i.e. the happiest a person can realistically be) and 0 as the lowest (i.e. the least happy a person could realistically be) (Plant 2020). We do this, I claim, so that [...] we can use the same scales as other people and over time. If we didn’t do this, it would make it very difficult for our answers to be understood.

I don't find this line of argument super compelling, and not even because I strongly disagree with that excerpt. Rather: the excerpt underdetermines what function you use to project from an extremely wide space onto a bounded scale, and there is no obvious 'Schelling' function (I don't even know what it would mean for your function to be linear). And indeed people could change functions over time while keeping those 0 and 10 pegs fixed. Another thing that could be going on is that people might be considering how to make their score informationally valuable, which might involve imagining what kind of function would give a relatively even spread across 0–10 when used population-wide. I don't think this is primarily what is going on, but to the extent that it is, such a consideration would make a person's scale relative to the population they understand themselves to be part of[1], and as such liable to re-adjust over time.
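
To illustrate the underdetermination point with a toy sketch (everything here is invented; nothing comes from the SWB literature): two respondents can agree exactly on what 0 and 10 mean, yet report very different numbers for the same underlying level of welfare, because the interior of the mapping is unconstrained.

```python
import math

# Two projections from a hypothetical unbounded "true welfare" w >= 0 onto a
# 0-10 report. Both pin w = 0 to 0 and w = W (the happiest realistic life)
# to 10, yet disagree everywhere in between.
W = 100.0  # hypothetical "happiest a person can realistically be"

def linear_report(w):
    return 10 * min(w, W) / W

def log_report(w):
    # compresses the top of the range; same endpoints, different interior
    return 10 * math.log1p(min(w, W)) / math.log1p(W)

# Same welfare, same 0 and 10 pegs, different reported scores:
linear_report(30.0)  # 3.0
log_report(30.0)     # ~7.4
```

And a single person drifting from one such function to another over decades would look exactly like flat life satisfaction amid rising welfare.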

Two extra things: (i) in general I strongly agree that this question (about how people's SWB scales adjust across time or contexts) is important and understudied, and (ii) having spoken with you and read your stuff I've become relatively less confident in scale-norming as a primary explanation of all this stuff.

I would change my mind more fully that scale norming is not occurring if I saw evidence that experience-sampling-type measures of affect also did not change over the course of decades as countries become/became wealthier (and earned more leisure time etc.). I'd also change my mind if I saw some experiment where people were asked to rate how their lives were going in relation to some shared reference point(s), such as other people's lives described in a good amount of detail, and where people's ratings of how their lives were going relative to those reference points also didn't change as countries became significantly wealthier.

(Caveat to all of above that I'm writing in a hurry!)

  1. ^

    If almost everyone falls between 6–7 on the widest scale I can imagine, maybe the scale I actually use should significantly zoom in on that region.

Comment by finm on Announcing a contest: EA Criticism and Red Teaming · 2022-06-06T10:54:00.956Z · EA · GW

Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.

Comment by finm on Announcing a contest: EA Criticism and Red Teaming · 2022-06-03T23:12:40.521Z · EA · GW

Replying in personal capacity:

I hope the contest will consider lower effort but insightful or impactful submissions to account for this?

Yes, very short submissions count. And so should "low effort" posts, in the sense of "I have a criticism I've thought through, but I don't have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all." I'd much rather see unpolished ideas than nothing at all.

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find blindspots in the particular way of thinking, arguing and knowing EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?

Thanks, I think this is important.

  • We (co-posters) are proactively sharing this contest with non-EA circles (e.g.), and others should feel welcome and encouraged to do the same.
  • Note the incentives for referring posts from outside the Forum. This can and should include writing that was not written with this contest in mind. It could also include writing aimed at some idea associated with EA that doesn't itself mention "effective altruism".
  • It obviously shouldn't be a requirement that submissions use EA jargon.
  • I do think writing a post roughly in line with the Forum guidelines (e.g. trying to be clear and transparent in your reasoning) means the post will be more likely to get understood and acted on. As such, I do think it makes sense to encourage this manner of writing where possible, but it's not a hard requirement.
  • To this end, one idea might be to speak to someone who is more 'fluent' in modes of thinking associated with effective altruism, and to frame the submission as a dialogue or collaboration.
  • But that shouldn't be a requirement either. In cases where the style of argument is unfamiliar, but the argument itself seems potentially really good, we'll make the effort — such as by reaching out to the author for clarifications or a call. I hope there are few really important points that cannot be communicated through just having a conversation!
  • I'm curious which non-EA judges you would have liked to see! We went with EA judges (i) to credibly show that representatives for big EA stakeholders are invested in this, and (ii) because people with a lot of context on specific parts of EA seem best placed to spot which critiques are most underrated. I'm also not confident that every member of the panel would strongly identify as an "effective altruist", though I appreciate connection to EA comes in degrees.

Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.

Yes. We'll try to be charitable in looking for important insights, and forgiving of inaccuracies from missing context where they don't affect the main argument.

That said, it does seem straightforwardly useful to avoid factual errors that can easily be resolved with public information, because that's good practice in general.

What plans do you have in place to help prevent and mitigate backlash[?]

My guess is that the best plan is going to be very context-specific. If you have concerns in this direction, you can email us, and we will consider steps to help, such as by liaising with the community health team at CEA. I can also imagine cases where you just want to communicate a criticism privately and directly to someone. Let us know, and we can arrange for that to happen too ("we" meaning myself, Lizka, or Joshua).

Comment by finm on The pandemic threat of DEEP VZN - notes on a podcast with Kevin Esvelt · 2022-05-30T15:27:02.016Z · EA · GW

Just commenting to say this was a really useful resource summarising an important topic — thanks for the time you put into it!

Comment by finm on Space governance - problem profile · 2022-05-09T10:16:36.186Z · EA · GW

This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.

Comment by finm on Space governance - problem profile · 2022-05-08T20:45:21.528Z · EA · GW

Thanks for this, I think I agree with the broad point you're making.

That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.

If I'm reading you right I don't think your points apply to near-term considerations, such as from arms control in space.

It seems like a crux is something like: how much precedent-setting or preliminary research now on ideal governance setups doesn't get washed out once TAI arrives, conditional on solving alignment? And my answer is something like: sure, probably not a ton. But if you have a reason to be confident that none of it ends up being useful, it feels like that must be a general reason for scepticism that any kind of efforts at improving governance, or even values change, are rendered moot by the arrival of TAI. And I'm not fully sceptical about those efforts.

Suppose before TAI arrived we came to a strong conclusion: e.g. we're confident we don't want to settle using such-and-such a method, or we're confident we shouldn't immediately embark on a mission to settle space once TAI arrives. What's the chance that work ends up making a counterfactual difference, once TAI arrives? Not quite zero, it seems to me.

So I am indeed on balance significantly less excited about working on long-term space governance things than on alignment and AI governance, for the reasons you give. But not so much that they don't seem worth mentioning.

Ultimately, I'd really like to see [...] More up-front emphasis on the importance of AI alignment as a potential determinant.

This seems like a reasonable point, and one I was/am cognisant of — maybe I'll make an addition if I get time.

(Happy to try saying more about any of above if useful)

Comment by finm on Nuclear Fusion Energy coming within 5 years · 2022-04-29T16:21:14.956Z · EA · GW

I agree that fusion is feasible and will likely account for a large fraction (>20%) of energy supply by the end of the century, if all goes well. I agree that would be pretty great. And yeah, Helion looks promising.

But I don't think we should be updating much on headlines about achieving ignition or breakeven soon. In particular, I don't think these headlines should be significantly shifting forecasts like this one from Metaculus about timelines to >10% of energy supply coming from fusion. The main reason is that there is a very large gap between proof of concept and a cost-competitive supply of energy. Generally speaking, solar will probably remain cheaper per kWh than fusion for a long time (decades), so I don't expect the transition to be very fast.

It's also unclear what this should all mean for EA. One response could be: "Wow, a world with abundant energy would be amazing, we should prioritise trying to accelerate the arrival of that world." But, I don't know, there's already a lot of interested capital flying around — it's not like investors are naive to the benefits. On the government side, the bill for ITER alone was something in the order of $20 billion.

Another response could be: "Fusion is going to arrive sooner than we expected,  so the world is soon going to look different from what we expected!" And I'd probably just dispute that the crowd (e.g. the Metaculus forecast above) is getting it especially wrong here in any action-relevant way. But I'd be delighted to be proved wrong.

Comment by finm on Concave and convex altruism · 2022-04-28T08:32:09.428Z · EA · GW

Thanks, that's a very good example.

I don't think this actually describes the curve of EA impact per $ overall

For sure.

Comment by finm on Past and Future Trajectory Changes · 2022-03-31T16:42:27.793Z · EA · GW

Just wanted to comment that this was a really thoughtful and enjoyable post. I learned a lot.

In particular, I loved the point about how the relative value of trajectory change should depend on the smoothness of your probability distribution over the value of the long-run future.

I'm also now curious to know more about the contingency of the caste system in India. My (original) impression was that the formation of the caste system was somewhat gradual and not especially contingent.

Comment by finm on Pre-announcing a contest for critiques and red teaming · 2022-03-27T15:54:27.392Z · EA · GW

For what it's worth I think I basically endorse that comment.

I definitely think an investigation that starts with a questioning attitude, and ends up less negative than the author's initial priors, should count.

That said, some people probably do already just have useful, considered critiques in their heads that they just need to write out. It'd be good to hear them.

Also, presumably (convincing) negative conclusions for key claims are more informationally valuable than confirmatory ones, so it makes sense to explicitly encourage the kind of investigations that have the best chance of yielding those conclusions (because the claims they address look under-scrutinised).

Comment by finm on Pre-announcing a contest for critiques and red teaming · 2022-03-27T14:34:06.530Z · EA · GW

Thank you, this is a really good point. By 'critical' I definitely intended to convey something more like "beginning with a critical mindset" (per JackM's comment) and less like "definitely ending with a negative conclusion in cases where you're critically assessing a claim you're initially unsure about". 

This might not always be relevant. For instance, you might set out to find the strongest case against some claim, whether or not you end up endorsing it. As long as that's explicit, it seems fine.

But in cases where someone is embarking on something like a minimal-trust investigation — approaching an uncertain claim from first principles — we should be incentivising the process, not the conclusion!

We'll try to make sure to be clear about that in the proper announcement.

Comment by finm on Pre-announcing a contest for critiques and red teaming · 2022-03-27T14:16:43.717Z · EA · GW

Yes, totally. I think a bunch of the ideas in the comments on that post would be a great fit for this contest.

Comment by finm on Pre-announcing a contest for critiques and red teaming · 2022-03-27T12:24:30.165Z · EA · GW

Thanks, great points. I agree that we should only be interested in good faith arguments — we should be clear about that in the judging criteria, and clear about what counts as a bad faith criticism. I think the Forum guidelines are really good on this.

Of course, it is possible to strongly disagree with a claim without resorting to bad faith arguments, and I'm hopeful that the best entrants can lead by example.

Comment by finm on EA Projects I'd Like to See · 2022-03-14T17:33:23.644Z · EA · GW

The downweighting of AI in DGB was a deliberate choice for an introductory text.

Thanks, that's useful to know.

Comment by finm on EA Projects I'd Like to See · 2022-03-14T14:24:31.259Z · EA · GW

I guess that kind of confirms the complaint that there isn't an obvious, popular book to recommend on the topic!

Comment by finm on EA Projects I'd Like to See · 2022-03-14T13:15:39.089Z · EA · GW

Embarrassingly wasn't aware of the last three items on this list; thanks for flagging!

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:53:19.139Z · EA · GW

Oh cool, wasn't aware other people were thinking about the QF idea! 

Re your question about imprints — I think I just don't know enough about how they're typically structured to answer properly.

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:46:34.340Z · EA · GW

Thanks for sharing — you should post this as a shortform or top-level post, otherwise I'm worried it'll just get lost in the comments here :)

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:38:38.077Z · EA · GW

Thanks, this is a useful clarification. I think my original claim was unclear. Read as "very few people were thinking about these topics at the time when DGB came out", then you are correct.

(I think) I had in mind something like "at the time when DGB came out it wasn't the case that, say, > 25% of either funding, person-hours, or general discussion squarely within effective altruism concerned the topics I mentioned, but now it is".

I'm actually not fully confident in that second claim, but it does seem true to me.

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:32:30.934Z · EA · GW

I was aware but should have mentioned it in the post — thanks for pointing it out :)

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:28:57.684Z · EA · GW

Like Max mentioned, I'm not sure The Methods of Ethics is a good introduction to utilitarianism; I expect most people would find it difficult to read. But thanks for the pointer to the Very Short Introduction, I'll check it out!

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:22:39.737Z · EA · GW

Thanks very much for the pointer, just changed to something more sensible!

(For what it's worth, I had in mind this was much more of a 'dumb nerdy flourish' than 'the clearest way to convey this point')

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:15:24.541Z · EA · GW

Amazing! Just sent you a message.

Comment by finm on EA Projects I'd Like to See · 2022-03-14T11:04:10.954Z · EA · GW

Big fan of — not sure how I forgot to mention it!

Comment by finm on How are you keeping it together? · 2022-03-06T23:40:07.902Z · EA · GW

Thanks Ed, this is really thoughtful.

+1 to the doomscrolling point — sometimes I feel like I have an obligation or responsibility to read the news, especially when it's serious. But this is almost always a mistake: in close to every instance, the world will not be a worse place if you take time away from the news.

Comment by finm on Some research ideas on the history of social movements · 2022-03-01T14:57:49.199Z · EA · GW

Thanks for sharing Rose, this looks like an important and (hopefully) fruitful list. Would love to see more historians taking a shot at some of these questions.

Comment by finm on Punishing Russia through private disinvestment? · 2022-02-27T17:15:04.876Z · EA · GW

My guess is that divesting your private investments isn't going to be an especially leveraged/impactful way to address the situation, and that the time you would spend researching this might be better spent finding direct donation opportunities and sharing the results. But don't put a lot of weight on that.

This is a good analysis of divestment in general.

Comment by finm on What are effective ways to help Ukrainians right now? · 2022-02-26T18:47:11.255Z · EA · GW

Thanks Alex, I appreciate this. Donated.

Comment by finm on Samo Burja on Effective Altruism · 2022-02-17T19:46:44.083Z · EA · GW

Thanks very much for putting this together. This section stood out to me —

He is however optimistic on innovation in new social technologies and building new institutions. He believes that there are very few functional institutions and that most institutions are attempts at mimicking these functional institutions. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building them. He also believes that civilisation makes logistical jumps in complexity and scale in very short periods of time when such innovation happens. He believes this has happened in the past, and believes it is possible today. In short, that this is very high impact, and deserves a lot more people working on it than currently are.

Makes me think of some of the work of RadicalxChange, and also 80k's recent interview with Audrey Tang. Curious what Samo's take might be on either of those things.

Comment by finm on Risks from Asteroids · 2022-02-16T21:12:43.322Z · EA · GW

Thanks for the pointer, fixed now. I meant for an average century.

Comment by finm on Risks from Asteroids · 2022-02-16T17:34:17.751Z · EA · GW

Thanks, these are great points.

Comment by finm on Risks from Asteroids · 2022-02-13T04:14:51.218Z · EA · GW

Thank you for the kind words!

I think this is strong enough as a factor that I now update to the position that derisking our exposure to natural extinction risks via increasing the sophistication of our knowledge and capability to control those risks is actually bad and we should not do it.

I would feel a bit wary about making a sweeping statement like this. I agree that there might be a more general dynamic where (i) natural risks are typically small per century, and (ii) the technologies capable of controlling those risks might often be powerful enough to pose a non-negligible risk of their own, such that (iii) carelessly developing those technologies could sometimes increase risk on net, and (iv) we might want to delay building those capabilities while other competences catch up, such as our understanding of their effects and some measure of international trust that we'll use them responsibly. Very ambitious geoengineering comes to mind as close to an example.

Maybe this generalizes to working on all existential risks...

Perhaps I'm misunderstanding you, but I'm very hopeful that it doesn't. One reason is that (it seems to me) very little existential risk work is best described as "let's build dual-use capabilities whose primary aim is to reduce some risk, and hope they don't get misused"; but a lot of existential risk work can be described as either (i) "some people are building dual-use technologies ostensibly to reduce some risk or produce some benefits, but we think that could be really bad, let's do something about that" or (ii) "this technology already looks set to become radically more powerful, let's see if we can help shape its development so it doesn't turn out to do catastrophic harm".

Comment by finm on Linch's Shortform · 2022-02-10T17:26:10.789Z · EA · GW

The pedant in me wants to point out that your third definition doesn’t seem to be a definition of existential risk? You say —

Approximate Definition: On track to getting to the best possible future, or only within a small fraction of value away from the best possible future.

It does make (grammatical) sense to define existential risk as the "drastic and irrevocable curtailing of our potential". But I don’t think it makes sense to literally define existential risk as “(Not) on track to getting to the best possible future, or only within a small fraction of value away from the best possible future.”

A couple definitions that might make sense, building on what you wrote:

  • A sudden or drastic reduction in P(Utopia)
  • A sudden or drastic reduction in the expected value of the future
  • The chance that we will not reach ≈ the best futures open to us 

I feel like I want to say that it's maybe a desirable feature of the term 'existential risk' that it's not so general as to encompass things like "the overall risk that we don't reach utopia", such that slowly steering towards the best futures would count as reducing existential risk. In part this is because most people's understanding of "risk", and certainly of "catastrophe", involves something discrete and relatively sudden.

I'm fine with some efforts to improve P(utopia) not being counted as efforts to reduce existential risk, or equivalently the chance of existential catastrophe. And I'd be interested in new terminology if you think there's some space of interventions that isn't neatly captured by the standard definitions of existential risk.

Comment by finm on Splitting the timeline as an extinction risk intervention · 2022-02-06T22:28:49.085Z · EA · GW

As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.

Seems to me that pretty much whenever anyone would actually consider 'splitting the timeline' on some big uncertain question, then even if they didn't decide to split the timeline, there are still going to be fairly non-weird worlds in which they make both decisions?

Comment by finm on Splitting the timeline as an extinction risk intervention · 2022-02-06T21:08:45.444Z · EA · GW

Thanks for writing this — in general I am pro thinking more about what MWI could entail!

But I think it's worth being clear about what this kind of intervention would achieve. Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here), or decrease the overall (subjective) chance of existential catastrophe.

But it could increase the chance of something like "at least [some small fraction]% of 'branches' survive catastrophe", or at the extreme "at least one 'branch' survives catastrophe". If you have some special reason to care about this, then this could be good.

For instance, suppose you thought whether or not to accelerate AI capabilities research in the US is likely to have a very large impact on the chance of existential catastrophe, but you're unsure about the sign. To use some ridiculous play numbers: maybe you're split 50-50 between thinking that investing in AI raises p(catastrophe) to 98% (and it's 0 otherwise), and thinking that investing in AI lowers p(catastrophe) to 0 (and it's 98% otherwise). If you flip a 'classical' coin, the expected chance of catastrophe is 49%, but you can't be sure we'll end up in a world where we survive. If you flip a 'quantum' coin and split into two 'branches' with equal measure, you can be sure that one world will survive (and another will encounter catastrophe with 98% likelihood). So you've increased the chance that 'at least 40% of the future worlds will survive' from 50% to 100%.[1]
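For what it's worth, the arithmetic behind these play numbers can be checked with a tiny sketch (the 98%/0% figures and the 50-50 credence are the hypothetical values from the example above, not real estimates):

```python
# Toy check of the coin-flip example. Two hypotheses, credence 0.5 each:
# either investing in AI raises p(catastrophe) to 0.98 (and abstaining
# gives 0), or investing lowers it to 0 (and abstaining gives 0.98).
credence = {"invest_is_bad": 0.5, "invest_is_good": 0.5}
p_cat = {
    "invest_is_bad":  {"invest": 0.98, "abstain": 0.00},
    "invest_is_good": {"invest": 0.00, "abstain": 0.98},
}

# 'Classical' coin: a single world, with a 50-50 chance of each action.
# Expected chance of catastrophe, marginalising over hypothesis and coin:
classical_p_cat = sum(
    credence[h] * 0.5 * (p_cat[h]["invest"] + p_cat[h]["abstain"])
    for h in credence
)
print(classical_p_cat)  # → 0.49, with no guarantee that any world survives

# 'Quantum' coin: both actions happen, one per branch with equal measure.
# Under either hypothesis, exactly one branch has p(catastrophe) = 0,
# so at least one branch is certain to survive.
quantum_one_branch_survives = all(
    min(p_cat[h].values()) == 0.0 for h in credence
)
print(quantum_one_branch_survives)  # → True
```

The point the sketch makes concrete: the split doesn't change the expected chance of catastrophe, it only redistributes the uncertainty across branches.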

In general you're moving from more overall uncertainty about whether things will turn out good or bad, to more certainty that things will turn out in some mixture of good and bad.

Maybe that sounds good, if for instance you think the mere fact that something exists is good in itself (you might have in mind that if someone perfectly duplicated the Mona Lisa, the duplicate would be worth less than the original, and that the analogy carries).[2]

But I also think it is astronomically unlikely that a world splitting exercise like this would make the difference[3] between 'at least one branch survives' and 'no branches survive'. The reason is just that there are so, so many branches, such that —

  1. It just seems very likely that at least some branches survive anyway;
  2. Even if you thought there was a decent chance that no branches survive without doing the world splitting, then you should have such a wide uncertainty over the number of branches you expect to survive that (I claim) your odds on something like [at least one branch will survive if we do split worlds, and no branches will survive if we don't] should be very very low.[4] And I think this still goes through even if you split the world many times.


  1. ^

    It's like choosing between [putting $50 on black and $50 on red] at a roulette table, and [putting $100 on red].

  2. ^

    But also note that by splitting worlds you're also increasing the chance that 'at least 40% of the future worlds will encounter catastrophe' from 48% to 99%. And maybe there's a symmetry, where if you think there's something intrinsically good about the fact that a good thing occurs at all, then you should think there's something intrinsically bad about the fact that a bad thing occurs at all, and I count existential catastrophe as bad!

  3. ^

    Note this is not the same as claiming it's highly unlikely that this intervention will increase the chance of surviving in at least one world.

  4. ^

    Because you are making at most a factor-of-two difference by 'splitting' the world once.

Comment by finm on New EA Cause Area: Run Blackwell's Bookstore · 2022-02-06T19:15:16.785Z · EA · GW

Noting that this is a question I'm also interested in.

Comment by finm on Ray Dalio's Principles (full list) · 2022-01-29T04:11:50.209Z · EA · GW

Awesome, thanks so much for putting in the time to make this. Obviously this kind of resource is a great shortcut for people who haven't read the books it summarises, but I think it's easy to underrate how useful it also is for people who have already read them, as a device for consolidating and refreshing your memory of their contents.

Comment by finm on Moral Uncertainty and Moral Realism Are in Tension · 2022-01-27T18:01:38.221Z · EA · GW

Ok, thanks for the reply Lukas. I think this clarifies some things, although I expect I should read some of your other posts to get fully clear.

Comment by finm on So you want to be a charity entrepreneur. Read these first. · 2022-01-27T17:42:54.541Z · EA · GW

The time seems right for more competent+ambitious EA entrepreneurship, and this seems like an excellent list. Thanks for putting it together!

Comment by finm on Moral Uncertainty and Moral Realism Are in Tension · 2022-01-25T20:34:41.641Z · EA · GW

Thanks for this post, it seems really well researched.

As I understand, it sounds like you're saying moral uncertainty implies or requires moral realism to make sense, but since moral uncertainty means "having a vague or unclear understanding of that reality", it's not clear you can justify moral realism from a position of moral uncertainty. And you're saying this tension is problematic for moral realism because it's hard to resolve.

But I'm not sure what makes you say that moral uncertainty implies or requires moral realism? I do think that moral uncertainty strongly favours cognitivism about ethics (the view that moral statements express truth-evaluable beliefs). And it's true that cognitivism naturally suggests realism, because it's somewhat strange to be both a cognitivist and an antirealist. But it seems coherent to me to entertain a cognitivist kind of antirealism/nihilism/error theory as one of the theories you're uncertain about. If that's right, it's not clear to me that this kind of problematic tension exists for most kinds of moral uncertainty.

I say a bit more about this here, for what it's worth. Also note that I have not read the other posts in your sequence, so I may be lacking context. Likely I've missed something here — curious to hear your thoughts.

Comment by finm on List of important ways we may be wrong · 2022-01-18T12:40:25.003Z · EA · GW

Just want to note that a project like this seems very good, and I'm interested in helping make something like it happen.