Comment by Isaac Dunn (Isaac_Dunn) on Announcing EA Pulse, large monthly US surveys on EA · 2022-09-20T19:28:30.118Z · EA · GW

Sounds excellent! Roughly how large is large?

Comment by Isaac Dunn (Isaac_Dunn) on CEA Ops is now EV Ops · 2022-09-16T17:36:05.271Z · EA · GW

Thanks for the reply!

If I understand correctly, you think that people in EA do care about the sign of their impact, but that in practice their actions don't align with this and they might end up having a large impact of unknown sign?

That's certainly a reasonable view to hold, but given that you seem to agree that people are trying to have a positive impact, I don't see how using phrases like "expected value" or "positive impact" instead of just "impact" would help.

In your example, SBF seems to be talking about quickly making grants that have positive expected value, and he uses the phrase "expected value" three times.

Comment by Isaac Dunn (Isaac_Dunn) on CEA Ops is now EV Ops · 2022-09-14T17:09:02.628Z · EA · GW

I think when people talk about impact, it's implicit that they mean positive impact. I haven't seen anything that makes me think that someone in EA doesn't care about the sign of their impact, although I'd certainly be interested in any evidence of that.

Comment by Isaac_Dunn on [deleted post] 2022-09-14T16:39:28.327Z

When someone learns about effective altruism, they might realise how large a difference they can make. They might also realise how much greater a difference a more diligent/thoughtful/selfless/smart/skilled version of themselves could make, and they might start to feel guilty about not doing more or being better.

Does Kristin have any advice for people who are new to effective altruism about how best to reduce these feelings? (Or advice about the way we communicate about effective altruism that might prevent these problems?)

Comment by Isaac Dunn (Isaac_Dunn) on Longtermism and Computational Complexity · 2022-09-01T12:30:08.046Z · EA · GW

Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn't update my beliefs much, and I should ask for their reasons. Ideally, they'd have compelling reasons for their beliefs.

That said, I think I might be slightly more in favour of forecasting being useful than you. I think that my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.

Comment by Isaac Dunn (Isaac_Dunn) on Longtermism and Computational Complexity · 2022-09-01T08:16:59.628Z · EA · GW

I agree we should be skeptical! (Although I am open to believing such events are possible if there seem to be good reasons to think so.)

But while the intractability stuff is kind of interesting, I don't think it actually says much about how skeptical we should be of different claims in practice.

Comment by Isaac Dunn (Isaac_Dunn) on Longtermism and Computational Complexity · 2022-08-31T23:02:20.626Z · EA · GW

I agree that we should be especially careful not to fool ourselves that we have worked out a way to positively affect the future. But I'm overall not convinced by this argument. (Thanks for writing it, though!)

I can't quite crisply say why I'm not convinced. But as a start, why is this argument restricted just to longtermist EA? Wouldn't these problems, if they exist, also make it intractable to say whether (for example) the outcome intended by a nearterm focused intervention has positive probability? The argument seems to prove too much.

Comment by Isaac Dunn (Isaac_Dunn) on End-To-End Encryption For EA · 2022-08-30T17:45:45.652Z · EA · GW

Thanks for explaining, I hadn't realised that, and it makes it much more attractive to follow your advice!

Comment by Isaac Dunn (Isaac_Dunn) on End-To-End Encryption For EA · 2022-08-30T13:16:17.249Z · EA · GW

You mention that reliance on Google is bad - I'd be interested to hear more about why you think that's true. (I agree that EA relies on Google services a lot.)

It seems that if we can trust Google, then the in-transit encryption that Gmail provides is good enough.

Comment by Isaac Dunn (Isaac_Dunn) on End-To-End Encryption For EA · 2022-08-30T13:16:00.299Z · EA · GW

It seems that it is not possible to do what you're suggesting using (say) Gmail's web interface or phone app. I expect that having to give up on the features provided by these would be a noticeable ongoing cost for me - for example, I expect Thunderbird to do much worse at automatically categorising my incoming mail. Does that seem right?

Also, the specific upsides you mentioned don't seem that compelling to me, and the general argument that "you never know when it might be useful" applies to too many things to justify the costs.

Comment by Isaac Dunn (Isaac_Dunn) on Caring about the future doesn’t mean ignoring the present · 2022-08-30T10:47:50.513Z · EA · GW

Ah, jtm has written a comment mentioning some similar points before I refreshed the page!

Comment by Isaac Dunn (Isaac_Dunn) on Caring about the future doesn’t mean ignoring the present · 2022-08-30T10:43:19.673Z · EA · GW

I agree that increased interest in longtermism hasn't caused EA as a whole to decrease funding to other causes in practice. But I don't think that this is in itself good. As the article acknowledges, prioritising between causes is an essential part of doing EA.

So if, all things considered, we thought that dropping all work helping present generations to exclusively prioritise future generations would lead to better outcomes, I think we should be willing to do that.

I particularly disagree with this quote from the article:

But if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm.

Someone could equally well argue that prioritising bednets or animal advocacy over helping local homeless people would be bad because it is an obvious and immediate harm, but I think that they would be making an important mistake.

Of course, there may be instrumental reasons to keep prioritising global health and wellbeing projects. For example, you might think that:

  • The direct impact of these projects can't be beaten. That is, longtermist causes simply aren't important enough to deserve all our resources.
  • The experience gained from these projects is the best way of helping us learn how to actually get things done in the world.
  • Having a track record of doing good things will get the movement more people, money, trust and influence than other things.
  • Having a broad EA movement is valuable, perhaps because it makes it easier for us to spot the best opportunities and change course.

I would have preferred for the article to argue more directly for some of these as the actual reason that it's good EA has not deprioritised global health and development.

Comment by Isaac Dunn (Isaac_Dunn) on Four Ideas You Already Agree With · 2022-08-22T18:02:06.104Z · EA · GW

The footnotes in this article don't seem to work FYI.

Comment by Isaac Dunn (Isaac_Dunn) on Announcing the Longtermism Fellowship of EA Munich - Apply! · 2022-08-21T19:58:35.310Z · EA · GW

Sounds great!

Comment by Isaac Dunn (Isaac_Dunn) on Announcing the Longtermism Fellowship of EA Munich - Apply! · 2022-08-21T17:04:48.211Z · EA · GW

I'm glad you're running this! It seems valuable to have reading programs focused on such an important question.

But I wonder whether a better goal for the program might be to help people to engage with the ideas and figure out their views either way, rather than to increase people's confidence that longtermism is correct[1].

I think that this is better even if your motivation is to increase the number of people who agree with longtermism - convincing people of specific conclusions seems worse for community epistemics, and might seem offputtingly dogmatic to some people.

  1. ^

    I understood "we hope to decrease the uncertainty about your belief in longtermism" to mean "we hope to increase your confidence in longtermism being correct", although it is a bit ambiguous.

Comment by Isaac Dunn (Isaac_Dunn) on Uncorrelated Bets: an easy to understand and very important decision theory consideration, which helps tease out nonobvious but productive criticism of the EA movement · 2022-08-12T14:13:09.867Z · EA · GW

It seems to me that there's a difference between financial investment and EA bets: returns on financial bets can then be invested again, whereas returns on most EA bets are not more resources for the EA movement but are direct positive impact that helps our ultimate beneficiaries. So we can't get compounding returns from these bets.

So, except for when we're making bets to grow the resources of the EA movement, I don't think I agree that EA making correlated bets is bad in itself - we just want the highest EV bets.

Does that seem right to you?
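To make my point concrete, here's a toy sketch (the numbers and strategies are invented purely for illustration). When returns compound, a volatile strategy with the same per-round expected value ends up far behind a steady one, so correlation and variance matter; when each round's return is realised as direct impact and consumed, only expected value matters.

```python
def compound(multipliers):
    """Final wealth when each round's returns are fully reinvested."""
    value = 1.0
    for m in multipliers:
        value *= m
    return value

def direct_impact(multipliers):
    """Total impact when each round stakes the same fixed budget,
    and the returns are 'spent' as impact rather than reinvested."""
    return sum(m - 1 for m in multipliers)

# Two strategies with identical per-round expected value (1.25x):
safe = [1.25] * 10
risky = [2.0, 0.5] * 5  # alternating wins and losses, EV = 1.25

print(compound(safe))        # ~9.31: steady compounding growth
print(compound(risky))       # 1.0: volatility wipes out compounding
print(direct_impact(safe))   # 2.5
print(direct_impact(risky))  # 2.5: only EV matters without compounding
```

So diversifying to reduce correlation helps the compounding investor, but for the direct-impact "bets" above it leaves the total unchanged.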

Comment by Isaac Dunn (Isaac_Dunn) on Military Service as an Option to Build Career Capital · 2022-08-09T23:58:49.500Z · EA · GW

Even though this post has no relevance for me, I really enjoyed reading it - mainly because it was well written, and partly because I was curious to hear about your experiences and takeaways. Thanks for writing it!

Comment by Isaac Dunn (Isaac_Dunn) on The repugnant conclusion is not a problem for the total view · 2022-08-06T18:05:22.782Z · EA · GW

Thank you for sharing this post - it's well written, well structured, relevant and concise. (And I agree with the conclusion, which I'm sure makes me like it more!)

Comment by Isaac Dunn (Isaac_Dunn) on Interactively Visualizing X-Risk · 2022-07-30T03:21:55.256Z · EA · GW

Another thought is that the title "x-risk tree" is slightly misleading:

  • The two things I think it visualises are drops in global population of 10% or 95% before 2100
  • So it doesn't visualise the risk of extinction (although it does provide an upper bound)
  • It also doesn't visualise existential risk (x-risk), which could be much higher than extinction risk, so the upper bound doesn't hold

How about replacing the title with something like "How likely is a global catastrophe in our lifetimes?"

Comment by Isaac Dunn (Isaac_Dunn) on Interactively Visualizing X-Risk · 2022-07-30T03:21:14.683Z · EA · GW

I like the idea of visualising important things to make them feel more salient, and it's fun that this is linked to predictions on Metaculus! I also liked the visualisation of other predictions once I found them. Thanks for making it.

You mention that the purpose is to give doomy people a sense that there is hope and we can take action to survive. I would be very interested for you to find some of these people and do user interviews or similar to understand whether it has the effect you hope! And you might learn how to improve it for that goal. Have you done anything like this yet?

Comment by Isaac Dunn (Isaac_Dunn) on Wanting to dye my hair a little more because Buck dyes his hair · 2022-07-22T07:13:56.376Z · EA · GW

I agree that it's well worth acknowledging that many of us have parts of ourselves that want social validation, and that the action that gets you the most social approval in the EA community is often not the same as the action that is best for the world.

I also think it's very possible to believe that your main motivation for doing something is impact, when your true motivation is actually that people in the community will think more highly of you. [1]

Here are some quick ideas about how we might try to prevent our desire for social validation from reducing our impact:

  • We could acknowledge our need for social validation, and try to meet it in other areas of our lives, so that we care less about getting it from people in the EA community through appearing to have an impact, freeing us up to focus on actually having an impact.
  • We could strike a compromise between the part of us that wants social validation from the EA community, and the part of us that wants to have an impact. For example, we might allow ourselves to spend some effort trying to get validation (e.g. writing forum posts, building our networks, achieving positions of status in the community), in the full knowledge that they're mainly useful in satisfying our need for validation so that our main efforts (e.g. our full-time work) can be focused on what we think is directly most impactful.
  • We could spend time emotionally connecting with whatever drives us to help others, reflecting on the reality that others' wellbeing really does depend on our actions, proactively noticing situations where we have a choice between more validation or more impact, and being intentional in choosing what we think is overall best.
  • We might try to align our parts by noticing that although people might be somewhat impressed with our short-term efforts to impress them, in the long run we will probably get even more social status in the EA community if we skill up and achieve something that is genuinely valuable.
  • Relatedly, we might think about who we would ideally like to impress. Perhaps we could impress some people by simply doing whatever is cool in the EA movement right now. But the people whose approval we might value most will be less impressed by that, and more impressed by us actually being strategic about how best to have an impact. In other words, we might stop considering people pursuing today's hot topic as necessarily cool, and start thinking about people who genuinely pursue impact as the cool group we aspire to join. 


  1. ^

    I haven't read it, but I think the premise of The Elephant in the Brain is that self deception like this is in our own interests, because we can truthfully claim to have a virtuous motivation even if that's not the case.

Comment by Isaac Dunn (Isaac_Dunn) on “Should you use EA in your group name?” An update on PISE’s naming experiment · 2022-04-04T19:33:30.588Z · EA · GW

I want to add to the chorus of commenters praising this post - thank you for writing it! I think it's really cool that you tried something you thought could have a high upside, actually checked if it worked, and shared your findings.

Comment by Isaac Dunn (Isaac_Dunn) on A model for engagement growth in universities · 2021-12-16T13:22:03.300Z · EA · GW

Thanks for writing this! I hadn't thought about high engagement levels being more stable than medium or low ones, and that seems right to me. I agree that having people spend time with highly-engaged people is likely to be a good way to make them more engaged. And I definitely agree with your points about fidelity and epistemics being particularly important.

I'm uncertain about some of your suggestions, though. You suggest inviting a few "promising" people to socials where most people are highly engaged. I worry that doing this could create a "cool kids club" in-group vibe, where people who haven't been invited might not feel welcome in, or good enough for, EA. There are benefits to this approach - it might make people more strongly desire to join the highly-engaged group - but it's not obvious to me that they're worth the cost of exclusivity.

Besides the "why am I not invited" cost, there's another cost that you point out: only adding a few new people limits how quickly the group can grow. I agree that your approach would fairly reliably create new HEAs, but my guess is we're early enough in working out how to grow EA that it's worth looking for a more scalable approach that (a) isn't exclusive and (b) has a better HEA to new person ratio. For example, 1:1 mentorship is somewhere in between your suggestion (several HEAs to each new person) and so-called "fellowships" (several new people to each HEA).

Comment by Isaac Dunn (Isaac_Dunn) on Apply now | EA Global: London (29-31 Oct) | EAGxPrague (3-5 Dec) · 2021-09-09T16:08:25.384Z · EA · GW

I think this is a great question to ask.

As it happens, I think it probably is an effective use of money, in short because it's an investment in the human capital of the community, which is probably one of the main bottlenecks to impact at the moment. That's because there's a large amount of money committed to EA compared to the number of people in the community working out the best ways to spend it. It's true that there are global health charities that could absorb a lot more money, but there's interest in finding even more impactful ways to spend money!

ETA: looks like Stefan got there slightly quicker with a very similar answer!

Comment by Isaac Dunn (Isaac_Dunn) on University EA Groups Should Form Regional Groups · 2021-09-09T11:16:35.943Z · EA · GW

Here's a similar but slightly different suggestion: rather than there being one definitive regional organisation for each area, we just encourage the creation of more organisations that are in between local groups and large funders.

Some examples of these organisations:

  • A team that runs and is responsible for a few local groups (e.g. a successful local group expands locally)
  • An organisation that centralises certain specific group functions (e.g. marketing, organising talks, introductory programs), so that local groups can outsource
  • A team that specialises in seeding new groups, and provides significant help and support to new organisers
  • A national organisation that tries to coordinate and encourage collaboration across local groups, including keeping an eye on groups that are at risk of disappearing

Some reasons this could be good:

  • Local groups wouldn't automatically be reliant on the one designated regional organisation
  • A designated regional organisation that is doing poorly is a problem because it's more difficult to replace
  • Teams can specialise in the thing that they are good at, rather than having responsibility for every aspect of all local groups in their area
  • Teams can work at whatever regional scope makes most sense (could be just a few groups, could be global) - overlap in geographical scope between different kinds of these organisations is often fine, whereas every geographical location would need exactly one designated regional organisation

Some reasons that designated regional organisations could be better:

  • Who has responsibility for what would be clearer, so fewer things would fall through the cracks
  • The role of a regional organisation would be clearly defined and the same across regions, making evaluation easier, making knowledge sharing between regions easier, and making it more straightforward to start or join an organisation (i.e. a clearer career progression pipeline)
  • A regional organisation may be well placed to decide what kinds of functions to prioritise to best support its local region 

These are quick thoughts, and I'm likely missing important considerations. I'm not sure which approach seems better, and they aren't mutually exclusive, but I thought I'd share the thought.

A related example is multi-academy trusts in the UK school system, which are essentially organisations that run multiple schools. Schools can choose to join an existing trust, and trusts can start new schools. Rather than the central government funding each school individually, it funds trusts, which have responsibility for the schools in their control.

Thanks for the brilliant post, by the way, I'm really glad you wrote it!

Comment by Isaac Dunn (Isaac_Dunn) on Frank Feedback Given To Very Junior Researchers · 2021-09-03T01:06:25.372Z · EA · GW

I agree that it's valuable to give honest feedback if you think that someone should consider trying something else, rather than just giving blithely positive feedback that might cause them to continue pursuing something that's a bad fit.

It's probably worth being especially thoughtful about the way that such feedback is framed. For example, if feedback of type a) can be made constructive, it might make it seem more sincerely encouraging: rather than "it's probably bad for you to do this kind of work", saying "I actually think that you might not be as well suited to this kind of work as others in the EA community because others are better at [specific thing], but from [strength X] and [strength Y] that I've noticed, I wonder if you've considered [type of work T] or [type of work S]?" (I know that you were paraphrasing and wouldn't say those actual phrases to people)

For feedback of type b), my gut reaction is that basically no one should be given feedback of that type because of the risk if you're wrong as you say, but also because of the risk of exacerbating feelings that only sufficiently impressive people are welcome in EA. I guess it depends whether you mean "you're a valued member of this community, but not competitive for a job in the community" or "you're not good enough to be a member of this community". I agree that some people should be given the first type of feedback if you're sure enough, but I don't think anyone should be told they're not good enough to join the community.

Comment by Isaac Dunn (Isaac_Dunn) on Frank Feedback Given To Very Junior Researchers · 2021-09-01T14:05:42.558Z · EA · GW

Thanks for sharing this! I enjoyed the comments about picking the right scope for a project. I also liked the general nudge towards being transparent about reasoning and uncertainty rather than overstating how much evidence supports particular conclusions.

I think that it probably is worth the trouble to be more encouraging. I'd consider being specific about some things that have been done well, beginning and ending the feedback with encouraging words, and taking a final pass to word things in a way that implies that you're glad they've done this work and you're rooting for them. That said, it definitely seems much better to give unpolished feedback rather than no feedback, so if it'd be too high a burden then I'd go ahead with potentially discouraging feedback.

I agree that the EA community does try to be welcoming to new members, but I suspect that doing it even more would probably be good to counteract the shame and guilt I think many people might have about not being good enough for a community that places high value on success.

Comment by Isaac Dunn (Isaac_Dunn) on EA Forum feature suggestion thread · 2021-06-18T13:05:21.193Z · EA · GW

I suspect that many people don't post on the forum because they're worried about their post being poorly received and damaging their reputation in the EA community.

I believe this because I feel this way myself, because I've heard other people around me worrying a lot about posting to the forum, because Will MacAskill spoke on the 80,000 Hours podcast of being anxious about his reputation being damaged after posting on the forum, and because of the existence of Aaron Gertler's talk "Why you (yes, you) should post on the EA Forum".

Perhaps, by default, new posts could be anonymous until a certain karma threshold (say 30 karma) is met. After that post meets the karma threshold, the true author of the post could become visible.

That way, authors could post knowing that their reputation wouldn't be damaged if their post wasn't well received, but that they would get the credit if the post was well received.

I'd expect this to increase the number of posts (both good and bad) from hesitant new users, and I think that the increase in the number of mediocre new posts would be a cost worth paying. It's good for people to contribute and feel valued for their contribution, especially if it encourages them to make more valuable contributions in the future.

I think it'd be important that the anonymous-until-threshold was the default (i.e. opt out), so that people didn't feel embarrassed about using it.
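To spell out the display logic I have in mind, here's a minimal sketch (the threshold value, names, and placeholder text are all hypothetical, not real Forum internals):

```python
from dataclasses import dataclass

ANONYMITY_THRESHOLD = 30  # hypothetical karma threshold


@dataclass
class Post:
    author: str
    karma: int
    opted_out: bool = False  # anonymity is the default, but opt-out

    def displayed_author(self) -> str:
        """Reveal the true author only once the post has proven itself."""
        if self.opted_out or self.karma >= ANONYMITY_THRESHOLD:
            return self.author
        return "Anonymous (pending)"


print(Post("isaac", karma=12).displayed_author())  # Anonymous (pending)
print(Post("isaac", karma=45).displayed_author())  # isaac
```

The key design choice is that anonymity lifts automatically at the threshold, so authors still capture the reputational upside of a well-received post.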

Comment by Isaac Dunn (Isaac_Dunn) on New? Start here! (Useful links) · 2021-06-17T09:53:36.922Z · EA · GW

I agree. How about just a right arrow? (🡲)

Comment by Isaac Dunn (Isaac_Dunn) on In defense of a "statistical" life · 2021-02-17T16:59:50.341Z · EA · GW

Thanks for writing this! I especially enjoyed the part where you described how donating has given you a sense of purpose and self-worth when things have been difficult for you - I can relate.

I think I have to disagree with your last point, though, because it seems to me that whenever we make a decision to spend resources, we are making a trade off. A donation to an effective global health charity could in fact have gone to a different cause.

I don't think that diminishes how worthwhile any donation is, but I think that the spirit of effective altruism is to keep asking ourselves whether there's something else we could do that would be even better. What do you think?

Comment by Isaac Dunn (Isaac_Dunn) on Complex cluelessness as credal fragility · 2021-02-09T16:12:21.133Z · EA · GW

I agree that there may be cases of "complex" (i.e. non-symmetric) cluelessness that are nevertheless resiliently uncertain, as you point out.

My interpretation of @Gregory_Lewis' view was that rather than looking mainly at whether the cluelessness is "simple" or "complex", we should look for the important cases of cluelessness where we can make some progress. These will all be "complex", but not all "complex" cases are tractable.

I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of "simple" cluelessness isn't the symmetry in itself, but the intractability of making progress. I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.

Comment by Isaac Dunn (Isaac_Dunn) on A framework for discussing EA with people outside the community · 2021-02-03T16:18:46.693Z · EA · GW

I enjoyed this article and found it useful, thanks for writing it! I think it could be interesting to think about how these ideas might apply to situations like running a local EA group, where it's not just discussing EA when it comes up organically.

Comment by Isaac Dunn (Isaac_Dunn) on A case against strong longtermism · 2020-12-18T12:59:15.203Z · EA · GW

I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes. (Of course, there might be other arguments for infinite possible futures.)

More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it follows that there are only finite possible universes, so we can in principle compute expected value.
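Under those assumptions, the counting argument can be written out explicitly (this is just my sketch of Owen's point, not anything from the original post):

```latex
% If time is quantised into at most $T$ steps and the universe can occupy
% at most $S$ distinct states at each step, then the number of possible
% histories is bounded:
\[
  \#\{\text{possible futures}\} \;\le\; S^{T} \;<\; \infty ,
\]
% so expected value is, in principle, a finite sum over futures:
\[
  \mathbb{E}[V] \;=\; \sum_{i=1}^{N} p_i \, v_i , \qquad N \le S^{T} .
\]
```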

So I'd be especially interested if you have any thoughts on whether expected value is in practice an inappropriate tool to use (e.g. with subjective probabilities) even assuming in principle it is computable. For example, I'd love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn't.