How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs 2020-09-05T12:51:01.844Z · score: 33 (42 votes)
The case of the missing cause prioritisation research 2020-08-16T00:21:02.126Z · score: 225 (122 votes)
APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament 2020-08-12T14:24:04.861Z · score: 86 (30 votes)
Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z · score: 51 (22 votes)
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z · score: 69 (24 votes)
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z · score: 67 (33 votes)
UK policy and politics careers 2019-09-28T16:18:43.776Z · score: 28 (14 votes)
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z · score: 23 (11 votes)
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z · score: 14 (10 votes)
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z · score: 11 (11 votes)
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z · score: 14 (12 votes)
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z · score: 14 (14 votes)
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z · score: 3 (3 votes)
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z · score: 13 (19 votes)
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z · score: 3 (3 votes)
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z · score: 5 (9 votes)
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z · score: 14 (14 votes)
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z · score: 11 (13 votes)
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z · score: 0 (0 votes)
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z · score: 5 (3 votes)
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z · score: 10 (10 votes)
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z · score: 0 (0 votes)


Comment by weeatquince_duplicate0-37104097316182916 on Longtermist reasons to work for innovative governments · 2020-10-25T11:16:03.603Z · score: 7 (4 votes) · EA · GW

Hi Alexis, thank you for the post. I roughly agree with the case made here. 



I thought I would share some of my thoughts on the "diffusion of institutional innovations":

* I worked in government for a while. Where there is an incentive and motivation to make genuine policy improvements, this matters: one of the key questions asked about any major new policy is, what do other countries do? (Of course a lot of policy making is not politically salient, so the motivation to actually make good policy may be lacking.)

* Global shocks also force governments to learn. There was work done in the UK after Fukushima to make sure our nuclear plants are safe. I expect that after the Beirut explosion countries are learning about fertiliser storage.

* On the other hand I have also worked outside government trying to get new policies adopted, including policies other countries have already adopted, and it is hard – so this does not happen easily.

* I would tentatively speculate that it is easier for innovations to diffuse when the evidence for the usefulness of the policy is concrete. This might be a factor against some of the longtermist institutional reforms that Tyler and I have written about. For example "policing style x helped cut crime significantly" is more likely to diffuse than "longtermist policy y looks like it might lead to a better future in 100 years". That said, I could imagine diffusion happening where there are large public movements and very minimal costs, for example tokenistic policies like "declare a climate emergency". This could work in favour of longtermist ideas, as making a policy now to have an effect in many years' time, if the cost now is low enough, might match this pattern.



I also think that senior government positions, even in smaller countries, can have a long-term impact on the world in other ways:

* Technological innovation. A new technological development in one country can spread globally.

* Politics. Countries can have a big impact on each other. A simple example: the EU is made up of many member states who influence each other.

* Spending. Rich countries especially, like those in Scandinavia, can impact others through their spending, e.g. climate financing.

* Preparation for disasters. Firstly, building global resilience -- e.g. Norway has the seed bank -- innovations like that don't need to spread to make the world more resilient to shocks, they just need to exist. Secondly, countries copy each other a lot in disaster response -- e.g. look at how uniform the response to COVID has been -- so having good disaster plans can help everyone else when a disaster actually hits.



I also think it is important not to forget the direct impact on the citizens of that country. Even a small country will have $10-$100m annual budgets. Having a small effect on that can have a truly large scale positive direct impact.

Comment by weeatquince_duplicate0-37104097316182916 on Hiring engineers and researchers to help align GPT-3 · 2020-10-04T16:08:42.738Z · score: 3 (2 votes) · EA · GW

Hi, quick question, not sure this is the best place for it but curious:

Does work to "align GPT-3" include work to identify the most egregious uses for GPT-3 and develop countermeasures?


Comment by weeatquince_duplicate0-37104097316182916 on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-21T11:26:10.151Z · score: 0 (2 votes) · EA · GW

This is a fascinating question – thank you.

Let us think through the range of options for addressing Pascal's mugging. There are basically 3 options:

  • A: Bite the bullet – if anyone threatens to cause infinite suffering then do whatever they say.
  • B: Try to fix your expected value calculations to remove the problem.
  • C: Take an alternative approach to decision making that does not rely on expected value.

It is also possible that all of A and B and C fail for different reasons.*

Let's run through.



I think that in practice no one does A. If I emailed everyone in the EA/longtermism community saying "I am an evil wizard, please give me $100 or I will cause infinite suffering!", I doubt I would get any takers.



You made three suggestions for addressing Pascal's mugging. I think I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternate decision making tool).

I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight.

You could maybe make this work by applying a heavy discount, based on "optimiser's curse"-type factors, to the expected value of high-uncertainty high-value decisions. I am not sure.

(The GPI paper on cluelessness basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)



I do think you could make your third option, the common sense version, work. You just say: if I follow this decision rule it will lead to very perverse outcomes, such as me having to give everything I own to anyone who claims they will otherwise cause me infinite suffering. It seems so counter-intuitive that I should do this that I will decide not to. I think this is roughly the approach most people follow in practice. It is similar to how you might dismiss a proof that 1+1=3 even if you cannot see the error. It is however a somewhat dissatisfying answer, as it is not very rigorous: it is unclear when a conclusion is so absurd as to require outright rejection.

It does seem hard to apply most of the DMDU approaches to this problem. An assumption-based modelling approach would lead to you writing out all of your assumptions and looking for flaws – I am not sure where that would lead.

If looking for a more rigorous approach, the flexible risk planning approach might be useful. Basically, make the assumption that when uncertainty goes up, the ability to pinpoint the exact nature of the risk goes down. (I think you could investigate this empirically.) So placing a reasonable expected value on a highly uncertain event means that, in reality, events vaguely of that type are more likely but events specifically as predicted are themselves unlikely. For example you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look exactly like your explorations. This might allow you to avoid the Pascal mugger and invest appropriate time into more general, more flexible evil-wizard protection.
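For concreteness, here is a tiny sketch of that intuition in Python. All numbers are hypothetical, chosen only for illustration: if a fixed probability for a broad risk class is spread over many possible specific scenarios, any one exact prediction – such as the mugger's exact story – ends up very unlikely, even while the class itself stays worth preparing for.

```python
def specific_scenario_probability(p_class: float, n_variants: int) -> float:
    """Probability of one specific scenario, assuming the class
    probability p_class is spread evenly over n_variants distinct
    ways the risk could actually play out."""
    return p_class / n_variants

# Suppose "some dangerous future weapons technology" gets a 1% chance,
# but deep uncertainty means it could play out in ~1000 different ways.
p_class = 0.01
p_exact = specific_scenario_probability(p_class, 1000)
print(p_exact)  # roughly 1e-05: the exact predicted scenario is very
                # unlikely, even though the broad class of risk is not
```

The even spread over variants is of course a simplification; the point is only the direction of the effect, not the numbers.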


Does that help?


 * I worry that I have made this work by defining C as everything else, so that the above just says: paradox -> no clear solution -> everything else must be the solution.

Comment by weeatquince_duplicate0-37104097316182916 on How can good generalist judgment be differentiated from skill at forecasting? · 2020-09-17T05:17:33.728Z · score: 2 (1 votes) · EA · GW

Thanks Ben, super useful.

@Linch I was taking a very very broad view of judgment.
Ben's post is much better and breaks things down in a much nicer way.

I also made a (not particularly successful) stab at explaining some aspects of not-foresight driven judgement here:

Comment by weeatquince_duplicate0-37104097316182916 on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-16T13:49:33.502Z · score: 0 (2 votes) · EA · GW

Hi Andreas, Excited you are doing this. As you can maybe tell I really liked your paper on Heuristics for Clueless Agents (although not sure my post above has sold it particularly well). Excited to see what you produce on RDM.

Firstly, measures of robustness may seem to smuggle probabilities 

This seems true to me (although I am not sure I would consider it to be "by the backdoor").
Insofar as any option selected through a decision process will in a sense be the one with the highest expected value, any decision tool will have probabilities inherent in it, either implicitly or explicitly. For example you could see a basic Scenario Planning exercise as implicitly stating that all the scenarios are of reasonable (maybe equal) likelihood.

I don't think the idea of RDM is to avoid probabilities; it is to avoid the traps of decisions based on expected value calculations. For example, by avoiding explicit predictions it prevents users from making important shifts to plans based on highly speculative estimates. I'd be interested to see if you think it works well in this regard.


Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to adoption of a satisficing criterion of choice

Honestly I don’t know (or fully understand this), so good luck finding out. Some thoughts:

In engineering you design your lift or bridge to hold many times the capacity you think it needs, even after calculating all the things you can think of that could go wrong – this helps protect against the things you didn’t think of.
I could imagine a similar principle applying to DMDU decision making – aiming for the option that is satisfactorily robust to everything you can think of might give a better outcome than aiming elsewhere, as it may be the option that is most robust to the things you cannot think of.

But I am not sure, and not sure how much empirical evidence there is on this. It also occurs to me that some of the anti-optimising sentiment could be driven by rhetoric and a desire to be different.

Comment by weeatquince_duplicate0-37104097316182916 on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-08T18:39:09.953Z · score: 4 (2 votes) · EA · GW

Dear MichaelStJules and rohinmshah

Thank you very much for all of these thoughts. It is very interesting and I will have to read all of these links when I have the time.

I admit I took the view that the EA community relies a lot on EV calculations somewhat based on vague experience, without doing a full assessment of the level of reliance (which would have been ideal), so the posted examples are very useful.


To clarify one point:

If the post is against the use of quantitative models in general, then I do in fact disagree with the post.

I was not at all against quantitative models. Most of the DMDU stuff is quantitative models. I was arguing against the overuse of quantitative models of a particular type.


To answer one question

would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done?

Yes. I would have been happy to say that, in general, I expect work of this type to be less likely to be useful than other research work that does not try to predict the long-run future of humanity. (This is in a general sense, not considering factors like the researchers' background and skills and so forth.)

Comment by weeatquince_duplicate0-37104097316182916 on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-05T21:19:54.305Z · score: 12 (7 votes) · EA · GW

I find this hard to engage with -- you point out lots of problems that a straw longtermist might have, but it's hard for me to tell whether actual longtermists fall prey to these problems.

Thank you ever so much, this is really helpful feedback. I took the liberty of making some minor changes to the tone and approach of the post (not the content) to hopefully make it make more sense. Will try to proof read more in future.

I tried to make the crux of the argument more obvious and less storylike here:

Does that help?

The aim was not to create a strawman but rather to see what conclusions would be reached if the reader accepts a need for more uncertainty focused decision making tools for thinking about the future.


On your points:

I'm not sure which of GPI's and CLR's research you're referring to (and there's a good chance I haven't read it)

Examples: and (the latter of which I have not read)

the Open Phil research you link to seems obviously relevant to cause prioritization. If it's very unlikely that there's explosive growth this century, then transformative AI is quite unlikely and we would want to place correspondingly more weight on other areas like biosecurity -- this would presumably directly change Open Phil's funding decisions.

I don’t see the OpenPhil article as that useful – it is interesting but I would not think it has a big impact on how we should approach AI risk. For example, on the point you raise about prioritising AI over bio: who is to say, based on this article, that we will not get extreme growth from progress in biotech and human enhancement rather than AI?

I assume from the phrasing of this sentence that you believe longtermists have concrete plans more than 30 years ahead, which I find confusing. I would be thrilled to have a concrete plan for 5 years in the future (currently I'm at ~2 years). I'd be pretty surprised if Open Phil had a >30 year concrete plan (unless you count reasoning about the "last dollar").

Sorry, my bad writing. The point I was trying to make was that it would be nice to have some plans for a few years ahead, maybe 3, maybe 5, maybe (but not more than) 30 years, about what we want the world to look like.

Comment by weeatquince_duplicate0-37104097316182916 on Cause/charity selection tradeoffs · 2020-08-24T17:14:03.812Z · score: 9 (5 votes) · EA · GW

Hi, I love that you are doing this.


One little bit of feedback: I really dislike the idea that being cause neutral is in some way "less emotional". I see myself as cause neutral because I am emotional and I care about how much impact I can have and I want to create as much change as possible. And I know many others who are highly passionate and highly emotional and cause neutral. I think this framing perpetuates the unhelpful stereotype that EA is all about being a cold hard calculation machine (rather than a calculation machine driven by love and concern for others).


Here is my breakdown that I did for myself:

As you can see it's quite different. Hope it helps.

Comment by weeatquince_duplicate0-37104097316182916 on How can good generalist judgment be differentiated from skill at forecasting? · 2020-08-22T17:50:47.296Z · score: 7 (9 votes) · EA · GW

Maybe I've misunderstood, but in my humble opinion and limited experience, forecasting is just a tiny tiny fraction of good judgement (maybe about 1%, depending on how broadly you define forecasting). It can be useful, but is somewhat overrated by the EA community.

Other aspects of good judgment may include things like:

  • Direction setting
  • Agenda setting
  • Being conscious of when you change direction part way through a judgement
  • Understanding the range of factors that are important to a judgement
  • Knowing how long to spend and how much effort to invest in a judgement
  • Brainstorming
  • Creative thinking
  • Solutions design
  • Research skills
  • Information processing
  • Group dynamics for consensus building or finding challenge
  • Knowing who to trust
  • Drawing analogies to other similar situations
  • Knowing when analogies are likely to be valid
  • Good intuition
  • Methods for digging into intuitions
  • Ability to test and moderate your intuition
  • Scenario planning (distinct from foresight?)
  • Horizon scanning (distinct from foresight?)
  • Foresight and predictions
  • Robust decision making
  • A range of models of the world which can inform the judgment
  • Good heuristics
  • Systems thinking
  • Self-awareness
  • Ability to adjust for unknown unknowns
  • Seeking evidence that contradicts the way you may want to go
  • Understanding and counteracting other biases
  • Understanding statistics
  • Accounting for statistical issues like regression to mean or optimisers curse
  • Making quantitative comparisons
  • Weighing up pros and cons
  • Other generic decision making tools that can be applied, of which there are lots
  • Specific decision making tools applicable to specific situations
  • Knowing which of the above is most relevant to a judgement
  • Ability to bring all of the above together
  • Speed at bringing all the above together
  • Preparing for and understanding the consequences of having made the wrong judgement
  • Ability to relearn and update judgement later with new evidence
  • Etc
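One item on the list, the optimiser's curse, can be demonstrated with a short simulation (my own toy illustration, all parameters invented): if you select the option with the highest noisy estimate, that estimate systematically overstates the winner's true value.

```python
# Toy simulation of the optimiser's curse: among options with equal true
# value distributions, the one that *looks* best was partly lucky, so its
# estimate is inflated on average.
import random

random.seed(0)  # reproducible toy run
N_OPTIONS, N_TRIALS, NOISE = 10, 20000, 1.0

total = 0.0
for _ in range(N_TRIALS):
    true_values = [random.gauss(0, 1) for _ in range(N_OPTIONS)]
    # Each estimate is the true value plus independent noise.
    estimates = [v + random.gauss(0, NOISE) for v in true_values]
    best = max(range(N_OPTIONS), key=estimates.__getitem__)
    total += estimates[best] - true_values[best]

bias = total / N_TRIALS  # average overestimate of the selected option
print(round(bias, 2))    # a clearly positive number
```

Correcting for this (e.g. shrinking estimates towards a prior before choosing) is one concrete piece of the "accounting for statistical issues" skill above.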
Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-18T09:38:59.150Z · score: 9 (5 votes) · EA · GW

Hi evelynciara, Thank you so much for your positivity and for complimenting my writing.

Also, do not feel discouraged. It is super unclear exactly what the community needs, and I think we should each be doing what we can with the skills we have and seeing what form that takes.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-18T09:29:34.047Z · score: 5 (3 votes) · EA · GW

Hi Tobias, Thank you for the comment. Yes, very glad for CLR etc. and all the s-risk research.

An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing on AI risk right now. They suggest that to date the community has perhaps moved too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continuing to explore.

I don’t really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening, or evidence that we as a community under-invest in exploration?

Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-18T09:16:04.095Z · score: 2 (1 votes) · EA · GW

This comment below is also relevant:

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-18T09:15:05.282Z · score: 9 (5 votes) · EA · GW

Hi Michael. Thank you for your points. It is good to hear opposing views. I have never worked in pure research so find it hard to judge and somewhat parroted Paul's post. You may well be correct about the difficulty of research.

Let me try to draw from my own experience to elucidate why I may be jumping to different intuitive conclusions on this question.

My experience of research is from policy development. I think 2/3 of policy development is super easy and 1/3 is super difficult. The super easy stuff is just looking at the world and seeing if there are answers already out there and implementing them. For example on US police reform or UK tax policy or technology regulatory policy. We mostly know how to do these things well, we just need some incentive to implement best practice. The super difficult stuff is the foundational work, where a new problem emerges and no existing solutions abound, eg financial stability policy.

Now when I look at a question such as the one you quote of "much better research into how to make complex decisions despite high uncertainty" it seems to me to be a mix, but with definite areas that fall more towards the easy side. There appear to be a number of fields and domains with best practice that would be highly relevant to EAs making best decisions despite high uncertainty, that rarely seem to make it into EA circles. For example Enterprise Risk Management, economic models of Knightian uncertainty, organisational design, policy development toolkits, Robust Decision Making.

Maybe these have all been used and/or considered not relevant (I don’t work at GPI etc, I don’t know). But my life experience to date leaves me with an intuition that there is still low hanging research fruit just around the next corner. This is not a well-reasoned argument or a strong case simply me sharing where I come from and how I see the challenges and the path forward.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-17T20:20:11.578Z · score: 9 (3 votes) · EA · GW

Hi, Thank you for this really helpful comment. It was really interesting to read about how you work on cause prioritisation research and use IAMs. Glad that GPI will be expanding.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-17T20:10:13.955Z · score: 9 (6 votes) · EA · GW

Super great to hear that 10% of 80000 Hours team time will go into underlying research. (Also apologies for getting things wrong, was generalising from what I could find online about what 80K plans to work on – have edited the post). If you have more info on what this research might look into do let me know.

– – 

That there is an explore/exploit tradeoff: continuing to do cause prioritisation research needs to be weighed against focusing on specific cause areas.

I imply in my post that EA organisations have jumped too quickly into exploit. (I mention 80K and FHI, but I am judging from an outside view so might be wrong.) I think this is a hard case to make, especially to anyone who is more certain than me about which causes matter (which may be most EA folk). That said there are other reasons for continuing to explore: to create a diverse community, epistemic humility, game theoretic reasons (better if everyone explores a bit more), to counter optimism bias, etc.

Not sure I am explaining this well. I guess I am saying that I still think the high level point I was making stands: that EA organisations seem to move towards exploit quicker than I would like. But do let me know if you disagree.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-17T19:11:52.893Z · score: 5 (4 votes) · EA · GW

Hi Ben,

Thank you for flagging – it is super amazing to hear and very excited by that.

I looked at a lot of organisations and tried to extrapolate what they will be doing in this space from the public information rather than reaching out, so it is great to see comments saying that research along these lines will be happening, and sorry for any thing mischaracterised.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-16T16:02:09.262Z · score: 22 (13 votes) · EA · GW

Thank you Ozzie. Very very helpful. To respond:

1. EA researchers are doing a great job. Much kudos to them. Fully agree with you on that. I think this is mostly a coordination issue. 

3. Agree a messy funding situation is a problem. Not so sure there is that big a gap between groups funded by EA Funds and groups funded by OpenPhil.

4. Maybe we should worry less about "groups doing a bad job at these topics could be net negative". I am not a big donor so find this hard to judge well. Also I am all for funding well evidenced projects (see my skepticism below about funding "smart young people"). But I am not convinced that we should be that worried that research on this will lead to harm, except in a few very specific cases. Poor research will likely just be ignored. Also most Foundations vet staff more carefully than they vet projects they fund.

5-6. Agree research leaders are rare (hopefully this inspires them). Disagree that junior researchers are rare. You said: "We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding" + "It seems really difficult to convince committed researchers to change fields". Very good points. That said I think Rethink Priorities have been positively surprised at how many very high quality applicants they have had for research roles. So maybe the junior researchers are there. My hope is that this post inspires some people to set up more organisations working in this space.

7. Not so sure about "more bets on smart young people" – not sure I agree. I tend to prefer giving to or hiring people with experience or evidence of traction. But I don’t have a strong view and would change my mind if there was good evidence on this. There might also be ways to test less experienced people before funding them, like through a "Charity Entrepreneurship" type fellowship scheme.

8. I'd love to have more of your views on what an "EA researcher/funding coordination" looks like as I could maybe make it happen. I am a Trustee of EA London. EA London is already doing a lot of global coordination of EA work (especially under COVID). I have been thinking and talking to David (EA London staff) about scaling this up, hiring a second person etc. If you have a clear vision of what this might look like or what it could add I would consider pushing more on this.

9. Rethink Priorities is OK. I have donated to them in the past but might stop, as I am not sure they are making much headway on the issues listed here. Peter said: "I think we definitely do 'Beyond speculation (practical longtermism)' ... So far we've mainly been favoring within-cause intervention prioritization".

10. Good luck with your work on forecasting efforts. 

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-16T15:19:02.519Z · score: 5 (4 votes) · EA · GW

Hi Ben. Thank you for this. This is exactly what I like, people replying with their impressions of the post, even if rough, so that I get some idea of how people feel and if this resonates. So thank you.

- -

That said I disagree with your claim. 

You say "I think it's just very hard and that this explains a lot of what you're describing".

I think it may well be difficult but it is mostly not happening due to underinvestment and lack of coordination in this space. Hence raising a flag.

I make this case above by comparing what I would see as a good coverage of the space with what is actually happening, so don’t have much to add here except that it is interesting that others see it differently.

I note a few counterexamples to the idea that this work is not done because it is hard (even in the "longtermist" area): 80K's stated reason for doing less in this space is that they have reached a conclusion (priority paths) that they are happy with; GPI was only created recently (its research agenda is from 2019); Rethink Priorities is following funding; AI strategy is also difficult but is progressing much quicker; etc.

- -

Overall, I don’t have a strong view on this, and maybe you are correct. But this is something that could be looked into more. In particular I have mostly dug into research on websites, but if I (or anyone) had more time it would be great to talk to people who have worked on this and see if it is difficult or underinvested in (or both). I also think you could, with a bit of time, somewhat address this question by writing a research agenda and looking for potential low hanging research fruit in this domain.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-16T14:35:14.020Z · score: 25 (9 votes) · EA · GW

Thank you for this comment. I fully agree with this and would say that my experience of the EA community is a very positive one: the EA community and EA organisations work very well together and are very willing to share ideas, talk and support one another. I am sure there would be much support for anyone trying to fill these gaps.

Comment by weeatquince_duplicate0-37104097316182916 on The case of the missing cause prioritisation research · 2020-08-16T07:56:16.334Z · score: 2 (1 votes) · EA · GW

Yes that is correct. I have made some edits to clarify.

Comment by weeatquince_duplicate0-37104097316182916 on Why I've come to think global priorities research is even more important than I thought · 2020-08-16T00:27:02.424Z · score: 6 (5 votes) · EA · GW

Thank you for writing this Ben. I strongly agree with this.

I have also been thinking and writing about this for the past few weeks. And so, in a fit of self-promotion and/or pointing readers to similar work, I direct anyone interested to my post here. I suggest that the EA movement has not done enough in this space, lay out some areas I would like to see researched, make the case that new organisations (or significant growth of existing organisations) are needed, and look at some of the challenges to making that happen.

Comment by weeatquince_duplicate0-37104097316182916 on APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament · 2020-08-14T18:31:41.244Z · score: 15 (8 votes) · EA · GW

I think more thoughts along this line would be super useful

Forthcoming post, hopefully within the next week.


Do you have more thoughts on what personal traits would indicate a great fit for pulling something like this off in another country? Besides a solid understanding of the political arena 

I didn’t found the APPG, so I find it hard to judge what is needed to get something like this started. I think you do want sociability, charm and good networking skills to get a project off the ground, but just having an existing good network or a few good allies might be sufficient.

Once it is off the ground it is not really that sociable a job. It is more like being a PA but with a direction-setting function. You are mostly building allies by emailing them or putting on events they would find interesting (inviting high-quality speakers etc). Then you have some allies who want to achieve roughly aligned goals, fix a system they see as broken, deal with environmental issues, etc. And you organise them, set up meetings, write things for them, find opportunities for them, arrange events where they all meet, etc. They do the actual meetings / TV appearances / etc.

So it is mostly organising and admin-type work. You need to be good at emails: inviting speakers and important people to come to things, booking rooms, fundraising, and so on.

But there is also that direction-setting side. For this you do need the ability to be self-directed, to think and strategise, and a super solid understanding of the political arena, politics, policy making, etc. That said, to some degree a good advisory board can help if you don’t have a great understanding of all of those things, and you do learn by doing.

Oh, and you need to be good at knowing how to influence people: not necessarily in person, but by doing a bit of research on someone and knowing what to email them or their staff to, for example, get them to come and talk at or attend an event or sign up to a campaign.

I have also done a fair amount of policy research but that is a different type of skill and mostly needs brains and experience in policy and writing ability.


Comment by weeatquince_duplicate0-37104097316182916 on Against the Social Discount Rate (Cowen & Parfit) - Weak refutations · 2020-08-13T08:11:15.147Z · score: 3 (2 votes) · EA · GW

In my humble opinion, you are totally correct about the first argument and Cowen and Parfit are totally correct about the second argument. (Note: I haven't read the paper just your post).



The first argument is philosophy. If a person genuinely believes that a state has a greater duty to its current citizens than to future citizens, then that person should probably apply a social discount. All that can be done to counter a philosophical intuition is to point out that there are intuitions that would suggest otherwise, which is clearly unpersuasive to someone who does not share your intuitions.

That said, I think the Cowen and Parfit argument could be stronger by pointing out the mutual benefits of intergenerational trade, our place in history and how much we benefit from forward-thinking ancestors, and the benefits to us of planning long term.



Cowen and Parfit are correct that this is not a case of double counting. The whole point of a social discount rate is to allow your economic models to map to your goals. So the fact that they align with your goals is not double counting, it is just counting. It would be like claiming that if I intuitively buy tasty food, then decide to build a model to map out my preferences, I should discount the value of nice tastes because I already consider nice tastes. Which is rubbish, as I would just end up using the model (rather than my intuition) to choose what to buy and then having less nice-tasting food than I would ideally like.

Now you can use discounts to adjust for biases. For example, if you know you always overestimate the value of the tastiness of food compared to other factors, even after applying your model, then you could apply a factor to try to counter this intuition. (Real-world example: even after applying models, people underestimate construction costs due to optimism bias etc, so a factor is added to increase estimated construction costs.) But this applies only if you feel you have a bias that does not match your goals even after using a model to make decisions, and making the case for it would require some empirical evidence that such a bias exists. The evidence does not point this way, which leads to the second point that Cowen and Parfit raise, one you do not discuss but which is a key part of the argument.

All the evidence suggests that humans do not value the future as much as they would ideally like to. If anything, an empirical examination comparing what we do for the future with what we want for the future should suggest (as Cowen and Parfit highlight) a negative discount rate, to push back against availability bias and political short-termism etc. (eg see:


Hope that helps.

Comment by weeatquince_duplicate0-37104097316182916 on Reducing long-term risks from malevolent actors · 2020-08-06T11:30:30.424Z · score: 4 (2 votes) · EA · GW

Relevant policy report from the UK Parliament on enforcing the Ministerial Code of good behaviour, (from 2006):

(I wasn't sure what to do with this when I found it, I might add other policy reports I find to this thread too until I have the capacity to actually work on this in any detail)

Less directly relevant but somewhat interesting too:

Comment by weeatquince_duplicate0-37104097316182916 on Objections to Value-Alignment between Effective Altruists · 2020-07-31T17:38:57.139Z · score: 14 (7 votes) · EA · GW

I just want to say that this is one of the best things I have read on this forum. Thank you for such a thoughtful and eloquent piece. I fully agree with you.

To add to the constructive actions, I think those working on EA community building (CEA, local community builders, 80K, etc) should read this and take note. Recommended actions for anyone in that position are to:

  • Create the right kind of space so that people can reach their own decision about what causes are most important.
  • Champion cause prioritisation and uncertainty.
  • Learn from the people who do this well. I would call out Amy from CEA for work on EAG in 2018 and David for EA London, who I think manage this well.

(Some notes I made in the past on this are here: )

Comment by weeatquince_duplicate0-37104097316182916 on Objections to Value-Alignment between Effective Altruists · 2020-07-31T17:30:56.666Z · score: 8 (5 votes) · EA · GW
I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation".

To attempt an answer on behalf of the author: the author says "an increasingly narrow definition of value-alignment", and I think the idea is that seeking "value-alignment" has got narrower and narrower over time and further from the goal of wanting to do good.

In my time in EA, value alignment has, among some folk, gone from the tame meaning you provide (really wanting to figure out how to do good) to a narrower meaning such as: you also think human extinction is the most important thing.

Comment by weeatquince_duplicate0-37104097316182916 on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-24T07:02:54.454Z · score: 4 (2 votes) · EA · GW

Hi, let me try and give some feedback on your career plan:

Your career plan sounds great!

• I think the thing the world is missing right now is a good understanding of how to create sustainable systemic change. I think an econ qualification with the aim of producing really high value research on issues that are pertinent to how to bring countries out of poverty would be a really high value action, and doing this kind of work is near the top of my to do list too.

However I would caution:

  • Explore whilst you are young. The path sounds like it would be good for impact, but I think it is important to work in an area that matches your strengths, and you should think about ways to explore what those strengths are. This could be by getting a job and doing a Masters course part-time, doing internships over the summer, or doing something else for a year, etc.

  • Similarly, many of the academics I think are best at creating really useful research have some experience outside of academia in creating change, e.g. they have taken time out from academia to run for a political position or to work in politics. If you get experience in a field that is pertinent to how change is created, you might be better able to address the problems. Also, at some point in the future when you have a clearer idea of solutions, you might want to pivot from academia to starting a campaign or a social enterprise etc, so other experience is useful.

Hope that helps

Comment by weeatquince_duplicate0-37104097316182916 on Call for feedback and input on longterm policy book proposal · 2020-07-10T15:51:57.222Z · score: 2 (1 votes) · EA · GW

I also want to clarify my statement that this was "low-medium value" was based on the current plan – I think there is valuable stuff here that could be teased out to make this useful to people in policy.

A good book summarising the academic work on how policy is made, how change happens, how external influences work, mapping out the whole space and giving an overview and different perspectives could be really really useful.

I wouldn’t give up on this idea – just maybe develop it further – can talk more if useful.

Comment by weeatquince_duplicate0-37104097316182916 on Call for feedback and input on longterm policy book proposal · 2020-07-10T08:00:17.103Z · score: 18 (6 votes) · EA · GW

Hi Maxime and Konrad,

The target audience consists of policy practitioners, inside and outside of government, and scholars of the policy process.

I am going to give a reply from the point of view of a "policy practitioner", one of the intended audiences for this book. I'm not familiar with "scholars of the policy process" so can't comment on the usefulness for them. I work very much in this space – promoting long-term policy making in the UK parliament.

Let us know your thoughts, questions and feedback in the comments

In short, my immediate intuition is that this is of medium-low value to policymakers and to me. Although I would likely read this, I doubt I would find it very useful.

As others have mentioned, chapters 1 & 4-5 and chapters 2-3 seem like different topics, to be read for different reasons.

Chapters 2-3

I think to someone in policy the content of chapters 2-3 seems quite basic. It is stuff that I know (or at least like to think I know). This matches my experience of the EA Geneva research I have read to date: of accurate descriptions of the policy process but quite basic and not very insightful to someone who has worked in policy for a while.

I personally think I would find it interesting to explore an academic's take on policy and see how it compares to my own knowledge. However, I wouldn't expect to gain much, if anything, from reading this. It might be more useful to policymakers more junior in their careers, as introductory material.

Chapters 1 & 4-5

Chapters 1 & 4-5 seem of mixed usefulness. Chapter 1 and the beginning of chapter 4 seem useful, but the rest of chapter 4 and chapter 5 seem to be written very much for academics trying to study the field.

  • Chapter 1. Seems good and interesting and I think policy makers would find this useful. That said this is all content covered elsewhere that I have read already (eg here: or
  • Chapter 4. Parts 1 and 2. Good. If done well, an analysis, literature review and exploration of these 4 diverse strategies would be very interesting.
  • Chapter 4. Parts 3 and 4. I understand this would be details of an experiment trying to compare these four strategies. These are not like for like things and decisions between them would be rare and based on many factors. I would be interested in maybe 1-2 pages of a book summarising this work but a detailed description of how someone has tried to compare them in this way seems like an intellectual academic exercise I would not be interested in. Somewhat judging this on your EA global talk.
  • Chapter 5. This looks like suggestions for academic research. This is not at all the research agenda I would take if I were trying to develop policy in this space within the next few years as a policy maker or think tank etc. It is very theoretical (computational models, fundamentals of policy making).

I hope that breakdown helps you refine this work. Just some initial thoughts. Happy to chat through and be constructive, especially if August works.

EDIT. Also if for a wider audience worth remembering that there are popular books on this or tangential to this. Like "The Precipice", "The Good Ancestor", "FutureGen", Will's next book, and a few others.

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-07T08:00:02.334Z · score: 2 (1 votes) · EA · GW
That doesn't seem like a good system. The bidding process and the actualisation of losses (tied to real social interests) keep the prisons in check.

I strongly disagree. Additional checks and balances that prevent serious problems occurring are good. You have already said your system could go wrong (you said "more realistic assumptions might show my proposed system is fundamentally mistaken") and maybe it could go wrong in subtle ways that take years to manifest as companies learn how they can twist the rules.

You should be in favour of checks and balances, and might want to explore what additional systems of checks would work best for your proposal. Options include: a few prisons running on a different system (eg state-run); a regulator for your auction-based prisons; transparency; the prisons being on 10-year loans from the state, with contracts that need regular renewal so they would default to state ownership; human rights laws; etc. Maybe all of the above are things to have.

As an example, one thing that could go wrong (although it looks like you have touched on this elsewhere in the comments) is prisons may not have a strong incentive to care about the welfare of the prisoners whilst they are in the prison.

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-07T07:50:12.514Z · score: 2 (1 votes) · EA · GW
I'm interested to hear what you think.

Unfortunately I don’t have much useful to contribute on this. I don’t have experience running trials and pilots. I would think through the various scenarios by which a pilot could get started and then adapt to that. Eg what if you had the senior management of one prison that was keen. What about a single state. What about a few prisons. Also worth recognising that data might take years.

I used to know someone who worked on prison data collection and assessing success of prisons, if I see her at some point I could raise this and message you.

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-06T21:20:08.390Z · score: 1 (2 votes) · EA · GW


Looks like we are mostly on the same page. We both recognise the need for theoretical data and empirical data to play a role and we both think that you have a good idea for prison reform.

I still get the impression that you undervalue empirical evidence from existing systems compared to theoretical evidence, and may under-invest in understanding evidence that goes against the theory or could improve the model. (Or maybe I am being too harsh and we agree here too – it is hard to judge from a short exchange like this.) I am not sure I can persuade you to change much on this, but I go into detail on a few points below.

Anyway, even if you are not persuaded, I expect (well, hope) that you would need to gather the empirical evidence before any senior policy makers look to implement this, so either way that seems like a good next step. Good luck :-)


Good theoretical evidence is "actual evidence"

Firstly, apologies – I am not sure I explained things very well; it was late and I minced my words a bit. By "actual evidence" I was trying to encompass the case of a similar policy already being in place and working. Eg we know tobacco tax works well at achieving the policy aim of reducing smoking because we can see it working. Sorry for any confusion caused.

Can you show me a theoretical model of school building that would convince me that it would work when it would, in fact, fail?

A better example from development is microcredit (microfinance). Basically everyone was convinced by the theory of small loans to those too poor to receive finance. The guy who came up with the idea got a freaking Nobel Prize. Super-sceptics GiveWell used to have a page on the best microcredit charity. But it turns out (from multiple meta-analyses) that there was basically no way to make it work in practice (not for the world's poorest).

Any prison system that does so [works] will look similar to mine, e.g. prisons would need to get paid when convicts pay tax.

Blanket statements like this – suggesting your idea or something similar is the ONLY way prisons can work – still concern me and make me think that you value theoretical data too highly compared to empirical data. I don't know much about prison systems, but I would be shocked if there was NO other good way to have a well-managed prison system.

A pilot prison wouldn't work because it wouldn't have competitive bidding.

I still think it could help your case to think about how a pilot prison could be made to produce useful data. Could the prison bid against the state somehow? Could it work with two prisons? Or one prison and two price streams?

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-06T05:32:26.056Z · score: 4 (2 votes) · EA · GW

I do not have a lot of information on it. Maybe start with:

I think it mostly comes down to having good contracts in place between the prisons and the state so that the prisons have the correct incentives. I do not have a good knowledge of how the contracts work.

I think the contracts in place are temporary and need regular renewal. If a contract for a private prison is not renewed after x years then the building and management will revert to state ownership.

I believe there are both state-run and privately run prisons in the UK. They are compared, and in some sense this acts as a check and balance, because if one system is working much worse than the other it drives change.

I note that prisons are private but that parole services are state-run (or at least reverting to being state-run, as they did not work privatised).

Hope that helps

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-06T05:25:47.018Z · score: 2 (3 votes) · EA · GW



Correct me if I am wrong but you seem to be implying that the "theoretical reasons" why a policy idea will work are necessary and more important than empirical evidence that a system has worked in some case (which may be misleading due to confounding factors like good people).

If so I strongly disagree:

  • My 7 years' experience working in UK policy leads me to say the opposite. Theoretical reasons are great, but actual evidence that a particular system has worked is super great, and in most cases more important.
  • Of course both can be useful. The world is complicated and policy is complicated, and both evidence and theory can lead you down the wrong path. Good theoretical policy ideas can turn out to be wrong, and well-evidenced policy ideas may not replicate as expected.
  • Consider international development. The effective altruism community has been saying for years (and backing up these claims) that in development you cannot just do things that theoretically sound like they will work (like building schools) but you need to do things that have empirical evidence of working well.
  • People are very very good at persuading themselves of what they believe (eg confirmation bias). A risk with policies driven by theoretical reasoning is that their adherents have ideological baggage and motivated reasoning and do not shift in line with new evidence. This is less of a risk for policy based on what works.


I have not considered all the details but I do think you have a decent policy idea here. I would be interested to see it tried. I would make the following, hopefully constructive, suggestions to you.


Focus on countries where the prison system is actually broken

There are a lot of failings in policy and limited capacity to address them all. I do think "if it ain't broke, don't fix it" is often a good maxim in policy, and countries with working systems should not be the first to shift to the system you describe.


Be wary of the risk of motivated reasoning

If the UK system currently works well, I suspect that you have good regulators who are manually handling the shortcomings of the underlying system.

Nothing you said substantiates this claim, and from what I know about the UK system (which is admittedly minimal) I don't think this is the case. Now, this claim might be true and you might have good evidence for it that you didn't state, but it did raise a red flag in my mind when I read it.


Don’t under-value evidence, you might miss things. An underplayed strength of your case for fixing private prisons is that the solution you suggest is testable. A single pilot prison could be run, data collected and lessons learned. To some degree this could even be done by a committed entrepreneur with minimal government support.


Look at what can be learned from systems that work elsewhere. Eg a feature of the UK system is that there are both state-run and private prisons. These can be, and have been, compared, and they can act as a check on each other: if one is clearly failing, it motivates change in the other. This learning can make your case for trialling auction-based prisons stronger, as you can highlight how different systems running in parallel act as a check on each other. Yet at the same time it also makes the case for running 100% private auction-based prisons weaker, as maybe some amount of state-run prisons can provide a useful check on the system.

Hope that helps.

Comment by weeatquince_duplicate0-37104097316182916 on How to Fix Private Prisons and Immigration · 2020-07-03T14:05:03.201Z · score: 8 (4 votes) · EA · GW

A good post with interesting ideas.

I think it is worth flagging to readers, however, that this is presumably a US-centric post. My understanding is that in the UK at least our private prison system performs well (and out-performs the public prisons).

I expect a lot could be learned by looking at countries that do private prisons well.

Comment by weeatquince_duplicate0-37104097316182916 on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-12T06:05:06.184Z · score: 7 (4 votes) · EA · GW

I would be interested in this.

Hi Rachel, I have been researching this topic for a while – although mostly in the UK context.

Would be up for:

  • Inputting into a session on this. Could talk through with you or talk for a few minutes on my own findings and thoughts.
  • Separate from this having a catch-up to hear about your experiences.

Send me a DM or email to: policy [at]

– Sam

Comment by weeatquince_duplicate0-37104097316182916 on Reducing long-term risks from malevolent actors · 2020-05-06T07:19:38.305Z · score: 5 (4 votes) · EA · GW

Thank you for the insight. I really have no strong view on how useful each / any of the ideas I suggested were. They were just ideas.

I would add on this point that narcissistic politicians I have encountered worried about appearance and bad press. I am pretty sure that transparency and fact checking etc discouraged them from making harmful decisions. Not every narcissistic leader is like Trump.

Comment by weeatquince_duplicate0-37104097316182916 on Update from the Happier Lives Institute · 2020-05-03T09:26:32.711Z · score: 7 (4 votes) · EA · GW

Amazing job Clare and Michael and everyone else involved. Keep up the good work.

As mentioned previously, I would be interested, further down the line, to see a broad cause prioritisation assessment that looked at how SWB metrics might shed insight on how we compare global health to global economic growth, to improving decisions, to farmed animal well-being, to existential risk prevention, etc.

Comment by weeatquince_duplicate0-37104097316182916 on Reducing long-term risks from malevolent actors · 2020-05-03T08:50:45.687Z · score: 43 (20 votes) · EA · GW

Hi, interesting article. Thank you for writing.

I felt that this article could have said more about possible policy interventions and that it dismisses policy and political interventions as crowded too quickly. Having thought a bit about this area in the past I thought I would chip in.


Even within established democracies, we could try to identify measures that avoid excessive polarization and instead reward cross-party cooperation and compromise. ... (For example, effective altruists have discussed electoral reform as a possible lever that could help achieve this.)

There are many things that could be done to prevent malevolent leaders within established democracies. Reducing excessive polarization and electoral reform are two minor ones. Other ideas you do not discuss include:

  • Better mechanisms for judging individuals. Eg ensuring 360 feedback mechanisms are used routinely to guide hiring and promotion decisions as people climb political ladders. (I may do work on this in the not too distant future)
  • Less power to individuals. Eg having elections for parties rather than leaders. (The Conservative MPs in the UK could at any time decide that Boris Johnson is no longer fit to be a leader and replace him with someone else, Republicans cannot do this with Trump, Labour MPs in the UK cannot do this with a Labour leader to the same extent).
  • Reduce the extent to which corruption / malevolence is beneficial for success. There are many ways to do this. In particular removing the extent to which individuals raising money is a key factor for their political success (in the UK most political fundraising is for parties not for individuals). Also removing the extent to which dishonesty pays, for example with better fact-checking services.
  • More checks and balances on power. A second house. A constitution. More independent government institutions (central banks, regulators, etc – I may do some work in this space soon too). More transparency of political decision making. Better complaint and whistle-blowing mechanisms. Limits on use of emergency powers. Etc.


Alternatively, we could influence political background factors that make malevolent leaders more or less likely... interventions to promote democracy and reduce political instability seem valuable—though this area seems rather crowded.

You might be correct, but this feels a bit like saying the AI safety space is crowded because lots of groups are trying to develop AI. However it may not be the case that those groups are focusing as much on safety as you would like. Although there are many groups (especially nation states) that want to promote democracy there may be very specific interventions that prevent malevolent leaders that are significantly under-discussed, such as elections for parties rather than leaders, or other points listed above. It seems plausible that academics and practitioners in this space may be able to make valuable shifts in the way fledgling democracies are developing that are not otherwise being considered.

And as someone working in the improving-government-institutions space in the UK, it is not evident to me that there is much focus on the kinds of interventions that would limit malevolent leaders.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T17:26:38.213Z · score: 7 (5 votes) · EA · GW

Hi Ben, I think you are correct that the main difference in our views is likely to be the trade-off between breadth/inclusivity versus expected impact in key areas. I think you are also correct that this is not a topic that either of us could do justice to in this thread (I am not sure I could truly do it justice in any context without a lot of work, although I am always happy to try). And ultimately my initial disappointment may just be from this angle.

I do think historically 80K has struggled more in communicating its priorities to the EA community than others (CEA / GiveWell / etc), and it seems like you recognise this has been a challenge. I think perhaps it was overly harsh of me to say that 80K was "clearly doing something wrong"; I was focusing only on the communications front. Maybe the problems were unavoidable, or the past decisions made were the net best decisions given various trade-offs. For example, maybe the issues I pointed to were just artifacts of 80K at the time transitioning its messaging from more of a "general source of EA careers advice" to more of a cause-focused approach. (It is still unclear to me if this is a messaging shift or a strategy shift.) Getting messaging spot on is always super difficult and time consuming.

Unfortunately, I am not sure my thoughts here have led to much that is concretely useful (but thank you for engaging). I guess if I had to summarise some key points I would say: I am super in favour of transparency about priorities (and in that regard this whole post is great); if you are focusing more on your effect on the effective altruism movement then local community organisers might have useful insights (and CEA etc have useful expertise); if 80K gets broader over time that would be exciting to me; and I know I have been critical, but I am really impressed by how successful you have made 80K.

Comment by weeatquince_duplicate0-37104097316182916 on Coronavirus and long term policy [UK focus] · 2020-04-26T13:12:04.635Z · score: 3 (2 votes) · EA · GW

Hi, thank you – some super useful points here. I will look at some of the BBSRC reports. I know about NC3Rs and think it is a good approach.

Only point I disagree with:

In terms of having a minister for dual use research this seems quite high cost to ask for, and low worth think Piers Millet suggestion of liaison officer more useful.

To clarify, this is not a new Minister but the addition of this area of responsibility to an existing Ministerial portfolio, so it is not at all a high-cost ask (although ideally it would be done in legislation, which would be a higher-cost ask).

I think this is needed because, however capable the civil service is at coordination, there needs to be a Minister who is interested in, and held accountable for, this area in order to drive change and maintain momentum.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T12:17:02.445Z · score: 36 (16 votes) · EA · GW

Hi Ben, Thank you for the thoughtful reply. Super great to see a greater focus on community culture in your plans for 2020. You are always 2 steps ahead :-)

That said I disagree with most of what you wrote.

Most of your reply talks about communications hurdles. I don't think these pose the barrier you think they pose. In fact, the opposite: I think the current approach makes communications and mistrust issues worse.

You talk about the challenge of being open about your prioritisation while also being open to giving advice across causes, the risks of appearing to bait and switch, and transparency vs demoralising. All of these issues can be overcome, and have been overcome by others in the effective altruism community and elsewhere. Most local community organisers and CEA staff have a view on which cause they care the most about, yet still manage an impartial community and impartial events. Most civil servants have political views but still provide impartial advice to Ministers. Solutions involve separating your prioritisation from your impartial advice, having a strong internal culture of impartiality, being open about your aims and views, being guided by community interests, etc. This is certainly not always easy (hence why I had so many conversations about how to do this well) but it can be done.

I say the current approach makes these problems worse. Firstly, thinking back to my time focused on local community building (see examples above), it appeared to me that 80000 Hours had broken some of the bonds of trust that should exist between 80000 Hours and its readership. It seemed clear that 80000 Hours was doing something wrong and that more impartiality would be useful. (Although take this with a pinch of salt as I have been less in this space for a few years now.) Secondly, it seems surprising to me that you think the best communications approach for the effective altruism community is to have multiple organisations in this space for different causes, with 80000 Hours being an odd mix of everything and future-focused. A single central organisation with a broader remit would be much clearer. (Maybe something like franchising out the 80000 Hours brand to these other organisations, if you trust them, could solve this.)

I fully recognise there are some very difficult trade-offs here: there is huge value in doing one thing really well, costs of growing a team too quickly to delve into more areas, costs of having lower impact on the causes you care about, costs of switching strategy, etc.

Separately to the above I expect that I would place a much stronger emphasis than you on epistemic humility and have more uncertainty than you about the value of different causes and I imagine this pushes me towards a more inclusive approach.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T09:56:38.707Z · score: 14 (10 votes) · EA · GW

Hi Michelle, Firstly I want to stress that no one at 80,000 Hours needs to feel bad because I was unimpressed with some coaching a few years ago. I honestly think you are all doing a really difficult job, doing it super well, and I am super grateful for all the coaching I (and others) have received. I was not upset, just concerned, and I am sure any concerns would have been dealt with at the time.

(Also worth bearing in mind that this may have been an odd case as I know the 80K staff and in some ways it is often harder to coach people you know as there is a temptation to take shortcuts, and I think people assume I am perhaps more certain about far future stuff than I am.)

I have a few potentially constructive thoughts about how to do coaching well. I have included them in case they are helpful, although I am slightly wary of writing these up because they are a bit basic and you are a more experienced career coach than me, so do take this with a pinch of salt:

  • I have found it works best for me to break the sessions into sections where I am only doing traditional coaching (mostly asking questions) and a section, normally at the end, where I step back from the coach role to an adviser role and give an opinion. I clearly demarcate the difference, tend to ask permission before giving my opinion, and tend to caveat how they should take my advice.
  • Recording and listening back to sessions has been useful for me.
  • I do coaching for people who have different views from me about which beneficiaries count. I do exercises like asking them how much they care about 1 human or 100 pigs or humans in 100 years, and work up plans from there. (This approach could be useful to you, but I expect it is less relevant as I would expect much more ethical alignment among the people you coach.)
  • I often feel that personally being highly uncertain about which causes and paths are most important helps me keep an open mind when coaching. This may be a consideration when hiring new coaches.

Always happy to chat if helpful. :-)

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-21T08:44:46.200Z · score: 66 (37 votes) · EA · GW

In many ways this post leaves me feeling disappointed that 80,000 Hours has turned out the way it did and is so focused on long-term future career paths.

- -

Over the last 5 years I have spent a fair amount of time in conversation with staff at CEA and with other community builders about creating communities and events that are cause-impartial.

This approach is needed for making a community that is welcoming to and supportive of people with different backgrounds, interests and priorities; for making a cohesive community where people with varying cause areas feel they can work together; and where each individual is open-minded and willing to switch causes based on new evidence about what has the most impact.

I feel a lot of local community builders and CEA have put a lot of effort into this aspect of community building.

- -
Meanwhile it seems that 80000 Hours has taken a different tack. They have been more willing, as part of trying to do the most good, to focus on the causes that the staff at 80000 Hours think are most valuable.

Don’t get me wrong, I love 80000 Hours; I am super impressed by their content and glad to see them doing well. And I think there is a good case to be made for the cause-focused approach they have taken.

However, in my time as a community builder (admittedly a few years ago now) I saw the downsides of this. I saw:

  • People drifting away from EA. Eg: someone telling me they were no longer engaging with the EA community because they felt it was now all long-term future focused, pointing to 80000 Hours as the evidence.
  • People feeling that they needed to pretend to be focused on long-termism to get support from the EA community. Eg: someone telling me that when they wanted career coaching they "read between the lines and pretended to be super interested in AI".
  • Personally feeling uncomfortable because it seemed to me that my 80000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else (including paths that progressed my career yet kept my options more open to different causes).
  • Concerns that the EA community is doing a bait-and-switch tactic of “come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.”

- -

“80,000 Hours’ online content is also serving as one of the most common ways that people get introduced to the effective altruism community”

So, Ben, my advice to you would firstly be to be super proud of what you have achieved. But also be aware of the challenges that 80000 Hours’ approach creates for building a welcoming and cohesive community. I am really glad that 20% of the content on the podcast and the job board goes into broader areas than your priority paths, and I would encourage you to find ways for 80000 Hours to put more effort into these areas, produce some more online content on them, and think carefully about how to avoid the risks of damaging the EA brand or the EA community.

And best of luck with the future.

Comment by weeatquince_duplicate0-37104097316182916 on What posts you are planning on writing? · 2020-02-03T10:04:59.828Z · score: 2 (1 votes) · EA · GW

Hi, I’d be interested and have been thinking about similar stuff (measuring the impact of lobbying, etc) from a UK policy perspective.

If helpful happy to chat and share thoughts. Feel free to get in touch to: sam [at]

Comment by weeatquince_duplicate0-37104097316182916 on Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' · 2020-01-29T13:02:13.351Z · score: 5 (3 votes) · EA · GW

This is excellent. Very well done.

It crossed my mind to ponder whether much can be said about where different categories* of risk prevention are under-resourced. For example, it may be that the globe spends enough resources on preventing natural risks, as we have seen them in the past and so understand them. It may be that the militarisation of states means that we are prepared for malicious risk. It may be that we under-prepare for large risks as they have fewer small-scale analogues.

Not sure how useful following that kind of thinking is, but it could potentially help with prioritisation. Would be interested to hear if the authors have thought through this.

*(The authors break down risks into different categories: Natural Risk / Accident Risk / Malicious Risk / Latent Risk / Commons Risk, and Leverage Risk / Cascading Risk / Large Risk, and capability risk / habitat risk / ubiquity risk / vector risk / agency risk).

Comment by weeatquince_duplicate0-37104097316182916 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T18:34:43.953Z · score: 9 (5 votes) · EA · GW

Optimiser’s curse / Regression to the mean

On how trying to optimise can lead you to make mistakes: the option that looks best under noisy estimates will, on average, have been overestimated.
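A toy simulation can make this concrete (my own sketch, not part of the original comment; all numbers are made up): when you pick whichever of several equally good options has the highest noisy estimate, that winning estimate systematically overstates the chosen option's true value.

```python
import random

random.seed(0)

# 10 options, all with the same true value, each estimated with noise.
TRUE_VALUE = 1.0
NOISE_SD = 0.5
N_TRIALS = 10_000

gap_total = 0.0
for _ in range(N_TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(10)]
    best_estimate = max(estimates)
    # The chosen option's true value is still TRUE_VALUE, so the
    # selection step systematically overstates its value.
    gap_total += best_estimate - TRUE_VALUE

print(f"average overestimate of the chosen option: {gap_total / N_TRIALS:.2f}")
```

Even though every option is identical, the selected one looks roughly three-quarters of a point better than it really is, purely because selection favours lucky noise.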

Comment by weeatquince_duplicate0-37104097316182916 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T18:30:03.941Z · score: 7 (4 votes) · EA · GW

Knightian uncertainty / deep uncertainty

a lack of any quantifiable knowledge about some possible occurrence

This means any situation where uncertainty is so high that it is very hard / impossible / foolish to quantify the outcomes.

To understand this it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).

The process for making decisions under uncertainty may be very different from the process for making decisions under risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.

Why this matters

This could drastically change the causes EAs care about and the approaches they take.

This could alter how we judge the value of taking action that affects the future.

This could mean that the "rationalist"/LessWrong approach of "shut up and multiply" for making decisions might not be correct.

For example, this could shift decisions away from naive expected value based on outcomes and probabilities and towards favouring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.
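The contrast can be sketched with a hypothetical example (my own illustration with made-up payoffs, not a claim about any real decision): an expected-value rule and a robustness-oriented maximin rule can pick different options from the same table.

```python
# Toy payoffs per (action, scenario); all numbers are invented for illustration.
payoffs = {
    "gamble": {"good": 100, "ok": 10, "bad": -90},
    "robust": {"good": 20,  "ok": 15, "bad": 5},
}

# If we felt able to assign probabilities, expected value favours the gamble...
probs = {"good": 0.5, "ok": 0.3, "bad": 0.2}
ev = {a: sum(probs[s] * v for s, v in by_s.items()) for a, by_s in payoffs.items()}

# ...but under deep uncertainty we may not trust those probabilities, and a
# maximin ("best worst case") rule favours the robust option instead.
maximin = {a: min(by_s.values()) for a, by_s in payoffs.items()}

print(max(ev, key=ev.get))            # gamble (EV 35.0 vs 15.5)
print(max(maximin, key=maximin.get))  # robust (worst case 5 vs -90)
```

The point is not that maximin is right, only that the choice of decision rule, not just the numbers, can drive the answer when uncertainty is deep.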

(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty but I think it is a thing I would like to understand better.)

See also

The difference between "risk" and "uncertainty". "Black swan events". Etc

Comment by weeatquince_duplicate0-37104097316182916 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T14:04:56.526Z · score: 1 (7 votes) · EA · GW

Section 9.3 here:

(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people's current positions on these views.)

Comment by weeatquince_duplicate0-37104097316182916 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T09:56:10.497Z · score: 21 (18 votes) · EA · GW


I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.

To try and make this feedback more useful and help the debate here are some very quick attempts to steelman some of the original arguments:

  • Historically, arguments that justify horrendous activities have a high frequency of being utopia-based (appealing to possible but uncertain future utopias). The long-termist astronomical waste argument has this feature and so we should be wary of it.
  • If an argument leads to some ridiculous/repugnant conclusions that most people would object to, then it is worth being wary of that argument. The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (eg pre-emptive nuclear strikes). We should be wary of following and promoting such arguments and philosophers.
  • There are problems with taking a simple expected value approach to decision making under uncertainty, eg Pascal's mugging problems. [For more on this look up robust decision making under deep uncertainty or Knightian uncertainty.]
  • The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks and (given ethical uncertainty) this makes them not great arguments
  • Etc
  • The above are not arguments against working on x-risks etc (and the original poster does himself work on x-risk issues) but are against overly relying on, using and promoting the astronomical waste type arguments for long-termism.