Posts

Which EA organisations' research has been useful to you? 2020-11-11T09:39:13.329Z
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs 2020-09-05T12:51:01.844Z
The case of the missing cause prioritisation research 2020-08-16T00:21:02.126Z
APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament 2020-08-12T14:24:04.861Z
Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z
UK policy and politics careers 2019-09-28T16:18:43.776Z
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z

Comments

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-04-21T09:14:42.177Z · EA · GW

tl;dr – The case for giving to GiveWell top charities is based on much more than just expected value calculations.

The case for longtermism (CL) is not based on much more than expected value calculations; in fact many non-expected-value arguments currently seem to point the other way. This has led to a situation where there are many weak arguments against longtermism and one very strong argument for longtermism. This is hard to evaluate.

We (longtermists) should recognise that we are new and there is still work to be done to build a good theoretical base for longtermism.

 

Hi Max,

Good question. Thank you for asking.

– – 

The more I have read by GiveWell (and to a lesser degree by groups such as Charity Entrepreneurship and Open Philanthropy) the more apparent it is to me that the case for giving to the global poor is not based solely on expected value but on a very broad variety of arguments.

For example I recommend reading:

  1. https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking
  2. https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
  3. https://www.givewell.org/modeling-extreme-model-uncertainty
  4. https://forum.effectivealtruism.org/posts/h6uXkwFzqqr2JdZ4e/joey-savoie-tools-for-decision-making

The rough pattern of these posts is that taking a broad variety of decision-making tools and approaches, and seeing where they all converge and point to, is better than just looking at expected value (or using any other single tool). Expected value calculations are not the only way to make decisions, and the authors make clear that they would not be convinced by the arguments for giving to the global poor if these were based solely on expected value calculations and not on historical evidence, good feedback loops, expert views, strategic considerations, etc.

For example in [1.] Holden describes how he was initially sceptical that:
"donations can do more good when targeting the developing-world poor rather than the developed-world poor "
but he goes on to say that:
"many (including myself) take these arguments more seriously on learning things like “people I respect mostly agree with this conclusion”; “developing-world charities’ activities are generally more robustly evidence-supported, in addition to cheaper”; “thorough, skeptical versions of ‘cost per life saved’ estimates are worse than the figures touted by charities, but still impressive”; “differences in wealth are so pronounced that “hunger” is defined completely differently for the U.S. vs. developing countries“; “aid agencies were behind undisputed major achievements such as the eradication of smallpox”; etc."

– –

Now I am actually somewhat sceptical of some of this writing. I think much of it is a pushback against longtermism. Remember that global development EAs have had to weather the transition from "give to global health, it has the highest expected value" to "give to global health, it doesn't have the highest expected value (longtermism has that) but it is good for many other reasons". So it is not surprising that they have gone on to express that there are many other reasons to care about global health that are not based on expected value calculations.

– –  

But that possible "status quo bias" does not mean they are wrong. It is still the case that GiveWell have made a host of arguments for global health beyond expected value and that the longtermism community has not done so. The longtermism community has not produced historical evidence or highlighted successful feedback loops or demonstrated that their reasoning is robust to a broad variety of possible worldviews or built strong expert consensus. (Although the case has been made that preventing extreme risks is robust to very many possible futures, so that at least is a good longtermist argument that is not based on expected value.)

In fact to some degree the opposite is the case. People who argue against longtermism have pointed to cases where long-term planning historically led to totalitarianism, or to the common-sense weirdness of longtermist conclusions, etc. My own work on risk management suggests that, especially when planning for disasters, it is good not to put too much weight on expected value but to assume that something unexpected will happen.

The fact is that the longtermist community has much weirder conclusions than the global health community, yet has put much less effort into justifying those conclusions.

– – 

To me it looks like all this has led to a situation where there are many weak arguments against longtermism (CL) and one very strong argument for longtermism (AL->CL). This is problematic, as it is very hard to compare one strong argument against many weak arguments, and which side you fall on will depend largely on your empirical views and how you weigh up evidence. This ultimately leads to unconstructive debate.

– – 

I think the longtermist view is likely roughly correct. But I think that the case for longtermism has not been made rigorously or even particularly well (certainly it does not stand up well to Holden's "cluster thinking" ideals). I don’t see this as a criticism of the longtermist community, as the community is super new and the paper arguing the case even just from the point of view of expected value is still in draft! I just think the idea that the community has finished making the case for longtermism is a misconception worth adding to the list – we should recognise our newness and that there is still work to be done, and not pretend we have all the answers. The EA global health community has built this broad theoretical base beyond expected value, and so can we, or we can at least try.

– – 

I would be curious to know the extent to which you agree with this.

Also, I think this way of mapping the situation is a bit more nuanced here than in my previous comment, so I want to acknowledge a subtle change of views between my earlier comment and this one. If you respond, please respond to the views as set out here rather than above. And of course thank you for your insightful comment that led to my views evolving – thank you Max!


– –
– – 

(PS. On the other topic you mention. [Edited: I am not yet sure of the extent to which I think] the 'beware suspicious convergence' counter-argument [applies] in this context. Is it suspicious that if you make a plan for 1000 years it looks very similar to a plan for 10000 years? Is it suspicious that if I plan for 100000 years or for 100 years, what I do in the next 10 years looks the same? Is it suspicious that if I want to go from my house in the UK to Oslo the initial steps are very similar to if I want to go from my house to Australia – ie. book a ticket, get the bus to the train station, get the train to the airport? Etc? [Would need to give this more thought but it is not obvious])



 

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-03-14T20:17:16.241Z · EA · GW

Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.

"I agree with your overall point that the case isn’t as airtight as it could be"

I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe soon. Glad you agree.

I would also expect (although can't say for sure) that if you go hang out with GPI academics and ask how certain they are about x, y and z about longtermism, you would perhaps find less certainty than comes across from the outside or on this forum, and it is useful for people to realise that.

Hence I thought it might be one for your list.

 

– – 

The specific points 1. and 2. were mostly to serve as examples for the above (the "etc" was entirely in that vein, just to imply that there may be things that a truly rigorous attempt to prove CL would throw up).

Main point made, and even roughly agreed on :-), so happy to opine a few thoughts on the truth of 1. and 2. anyway:

 

– – 

1. The actions that are best in the short run are the same as the ones that are best in the long run

Please assume that by short-term I mean within 100 years, not within 10 years.

A few reasons you might think this is true:

  • Convergence: See your section on "Longtermists won't reduce suffering today". Consider some of the examples in the paper: speeding up progress, preventing climate change, etc. are quite possibly the best things you could do to maximise benefit over the next 100 years. AllFed justify working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence it goes both ways: why are so many of the examples in the paper so suspiciously close to what is short-run best?)
  • Try it: Try making the best plan you can accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan but only take into account the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100000 years? Does that plan look different? What about 1000 years or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess is that it would start to differ significantly after about 50-100 years. (Although it might well be the case that this number is higher for philanthropists than for policy makers.)
  • Neglectedness: Note that the last two thirds of the next century (everything after 33 years) basically does not feature in almost any planning today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).

On:

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects ... the value of these actions will in fact be coming from the long-run effects

I think I agree with this (at least intuitively agree, not given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors as it is not relevant to the truth of AL.

 

– – 

2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.

I agree that AL leads to 'deontic strong longtermism'.

I don’t think the expected value approach (which is the dominant approach used in their paper) or the other approaches they discuss fully engage with how to make complex decisions about the far future. I don’t think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).

I would need to know more about your proposed alternative to comment.

Unfortunately, I am running out of time and weekend to go into this in much depth, so I hope you don’t mind if, instead of a lengthy answer here, I just link you to some reading.

I have recently been reading the following, which you might find an interesting introduction to how one might think about these topics; it is fairly close to my views:

https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

https://www.givewell.org/modeling-extreme-model-uncertainty

 

– –

Always happy to hear your views. Have a great week

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-03-14T16:27:21.061Z · EA · GW

Thank you for this Jack.

Floating an additional idea here, in the form of another misconception that I sometimes see. Very interested in your feedback:

 

Possible misconception: Someone has made a thorough case for "strong longtermism"

Possible misconception: “Greaves and MacAskill at GPI have set out a detailed argument for strong longtermism.”

My response: “Greaves and MacAskill argue for 'axiological strong longtermism' but this is not sufficient to make the case that what we ought to do is mainly determined by focusing on far future effects”

Axiological strong longtermism (AL) is the idea that: “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

The colloquial use of strong longtermism on this forum (CL) is something like: “In most of the ethical choices we face today we can focus primarily on the far-future effects of our actions”.

Now there are a few reasons why this might not follow (why CL might not follow from AL):

  1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
  2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
  3. Etc

Whether or not you agree with these reasons, it should at least be acknowledged that the Case for Strong Longtermism paper focuses on making a case for AL – it does not actually try to make a case for CL. This does not mean there is no way to make a case for CL, but I have not seen anyone try to, and I expect it would be very difficult to do, especially if aiming for philosophical-level rigour.

 

– – 

This misconception can be used in discussions for or against longtermism. If you happen to be a super strong believer that we should focus mainly on the far future, it would whisper caution; and if you think that Greaves and MacAskill's arguments are poor, it would suggest being careful not to overstate their claims.



(PS. Both 1 and 2 seem likely to be true to me)
 

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-08T20:49:30.543Z · EA · GW

Yes sorry my misunderstanding. You are correct that this would still be non-ideal. 

I don’t think in most cases it would be a big problem, but yes, it would be a problem.

 

Also, another very clear problem with all of this is that humans do not naturally plan in their own long-term self-interest. So, for example, enfranchising the young would not necessarily lead to less short-termism just because they have longer to live. The policies would have to be more nuanced and complex than that.

 

Either way, I am drawing a lesson to lean a bit more towards strategies that focus on policies that create a good world for the next generation rather than for all future generations, although of course both matter.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T20:28:52.203Z · EA · GW

Makes sense. Thanks Jack.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T10:30:13.405Z · EA · GW

Thank you Jack, very useful. Thank you for the reading suggestion too. Some more thoughts from me:

"Discounting for the catastrophe rate" should also include discounting for sudden positive windfalls or other successes that make current actions less useful. Eg if we find out that the universe is populated by benevolent intelligent non-human life anyway, or if a future unexpected invention suddenly solves societal problems, etc.

There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value – capturing internal project risks or windfalls, as opposed to catastrophic risk or windfalls.
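To make the layering concrete, here is a minimal sketch of how these discounts might combine, assuming constant annual rates; the function and the numbers are purely illustrative, not taken from any source:

```python
# Illustrative sketch: a background catastrophe/windfall rate plus an
# internal project rate, compounded independently each year.

def surviving_value(years, background_rate=0.001, project_rate=0.02):
    """Fraction of a project's value that survives after `years` if each
    year there is a `background_rate` chance the wider world changes so
    the work is moot, and a `project_rate` chance the project itself
    fails or is superseded."""
    return (1 - background_rate) ** years * (1 - project_rate) ** years

for t in (10, 50, 100):
    print(t, round(surviving_value(t), 3))  # 10 -> ~0.81, 50 -> ~0.35, 100 -> ~0.12
```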

I am not sure I get the point about "discount the longterm future as if we were in the safest world among those we find plausible".

I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T01:46:05.137Z · EA · GW

Yes that is true and a good point. I think I would expect a very small non-zero discount rate to be reasonable, although I am still not sure what relevance this has to longtermism arguments.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T01:33:07.006Z · EA · GW

I guess some of the "AI will be transformative, therefore deserves attention" arguments are among the oldest and most generally accepted within this space.

For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, what x-risks to focus on, etc, is all still new and somewhat uncertain.

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-07T12:28:51.871Z · EA · GW

Hi, the point isn't about what is "predictable", it is about what is "plannable". Predictions are only useful insofar as they let us decide how to act. What we want is to be able to robustly positively affect the world in the future.

So the adapted version of your version of my argument would be:

  • We can sometimes take actions that robustly positively affect the world on a timescale of 30+ years, BUT as the future is so uncertain, most such long-term plans involve being flexible and adaptable to changes, and in practice they look very similar to if you had planned for <30 year effects (with the additional caveat that in 30 years you want to be in as good a position as possible to keep making long-term positive changes going forward).
    • E.g. the global technologies and trends that could lead to brutal totalitarianism over the very long-term future are so uncertain that addressing the trends and technologies that might lead to totalitarianism in the next 30 years, whilst keeping an ongoing watchful eye on emerging trends and technologies and adapting to concerning changes and/or to opportunities to strengthen democracy, is likely the best plan you can make.
  • Therefore there's a lot of overlap between the policies that (as best we can tell) have the best effects on the world in 30+ years (or 60+ years) and those that have the best effects on the world in 0-30 years (and also leave the world in 30 years' time ready for the next 30 years).
  • Therefore we can achieve most longtermist policy goals by getting policies that are best for the world over the next 0-30 years (and that also leave the world in 30 years' time ready for the next 30 years).

 

I think this mostly (although not quite 100%) addresses the two concerns that you raise.


NOTES:

I would note that 30 years is not some magic number. Much of policy, including some long-term policy, is time-independent. Where plans are made they might be over the next 1, 3, 5, 10, 20, 25, 30 or 50 years, as appropriate given the topic at hand. Over each length of time the aim should not be solely to maximise benefit over the planned time period but to leave the world in a good end state so that it can continue to maximise benefit going forward (eg your 1-year budgeting shouldn't say let's spend all the money this year).

There are plans that go beyond 30 years, but according to Roman Krznaric's book The Good Ancestor, plans for more than 30 years are very rare. And my own experience suggests 25 years is the maximum in most (UK) government work, and even at that length of time it is often poorly done. Hence I tend to settle on 30 years as a reasonable maximum. There are of course some plans that go beyond 30 years. They tend to be on issues where long-term thinking is both necessary and simple (eg tree planting) or to adopt adaptive planning techniques to allow for various changes in circumstances (eg the Thames Estuary 2100 flood planning).

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-07T12:02:08.044Z · EA · GW

Any Future Generations institution should be explicitly mandated to consider long-term prosperity, in addition to existential risks arising from technological development and environmental sustainability

Yes I fully agree with this. 

 [...] advocates of future generations can lastingly diminish the opposition of business interests—or turn it into support—by designing pro-future institutions so that they visibly contribute to areas where future generations and far-sighted businesses have common interests, such as long-term trends in infrastructure, research and development, education, and political/economic stability.

I also agree with this – although I would take my agreement with a pinch of salt – I don’t feel I have specific expertise on how farsighted businesses can be in order to take a strong view on this. 

Comment by weeatquince on Introduction to Longtermism · 2021-02-07T11:51:12.180Z · EA · GW

To add a more opinionated, less factual point: as someone who researches and advises policymakers on how to think about and make long-term decisions, I tend to be somewhat disappointed by the extent to which the longtermist community lacks discussion and understanding of how long-term decision making is done in practice. I guess, if put strongly, this could be worded as an additional community-level objection to longtermism along the lines of:

Objection: The longtermist idea makes quite strong, somewhat counterintuitive claims about how to do good, but the longtermist community has not yet demonstrated appropriately strong intellectual rigour (other than in the field of philosophy) about these claims and what they mean in practice. Individuals should therefore be sceptical of the claims of longtermists about how to do good.

If worded more politely the objection would basically be that the ideas of longtermism are very new and somewhat untested and may still change significantly so we should be super cautious about adopting the conclusions of longtermists for a while longer.

Comment by weeatquince on Introduction to Longtermism · 2021-02-07T11:38:35.711Z · EA · GW

Great introduction. Strongly upvoted.  It is really good to see stuff written up clearly. Well done!!!

 

To add some points on discounting. This is not to disagree with you but to add some nuance to a topic it is useful for people to understand. Governments (or at least the UK government) apply discount rates for three reasons:

  1. Firstly, pure-time discounting of roughly 0.5%, because people want things now rather than in the future. This is what you seem to be talking about here when you talk about discounting. Interestingly this is not set by electoral politics (the discount rate is not a big political issue); rather, the literature on the topic has numbers ranging from 0% to 1%, so the government (which listens to experts) goes for 0.5%.
  2. Secondly, catastrophic risk discounting of 1%, to account for the fact that a major catastrophic risk could make a project's value worthless, eg earthquakes could destroy the new hospital, social unrest could ruin a social program's success, etc.
  3. Thirdly, wealth discounting of 2%, to account for the fact that the future will be richer, so transferring wealth from now to the future has a cost. This does not apply to harms such as loss of life.

Ultimately it is only the first of these that longtermists and philosophers tend to disagree with. The others may still be valid to longtermists.

For example, if you were to estimate there is a background, basically unavoidable, existential risk rate of 0.1% per year (as the Stern Review's discount rate suggests) then basically all the value of your actions (over 99%) is eroded after 5000 years, and arguably it is not worth thinking beyond that timeframe. There are good counter-considerations to this; I am not trying to start a debate here, just explaining how folk outside the longtermist community apply discounting and how it might reasonably apply to longtermists' decisions.
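To make that arithmetic concrete, a minimal sketch (assuming a constant annual rate, which is itself a big assumption):

```python
# Fraction of value remaining after t years under a constant annual
# catastrophe/windfall rate r: (1 - r) ** t.

def remaining(t, r=0.001):
    return (1 - r) ** t

print(round(remaining(5000), 4))  # ~0.0067: over 99% of value eroded
print(round(remaining(100), 4))   # ~0.9048: the next century still mostly counts
```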

Comment by weeatquince on Money Can't (Easily) Buy Talent · 2021-01-26T17:26:32.269Z · EA · GW

Yes I agree with you.

That said, the original post appears in a few places to be specifically talking about talent at EA organisations, so the example felt apt.

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-01-23T08:36:26.562Z · EA · GW

This is quite possibly the best thing I have ever read on this forum. Thank you so much for writing this. I strongly upvoted.

I have been working full time as an advocate for future generations within politics/policy. As one of the very few people actually working in this space I think I can offer some insight and improvements to your  "implications for political strategy for future generations".

One larger point:

  • You seem to suggest a few times that political inclusion of future generations is difficult because they do not yet exist, cannot engage in politics, etc, etc. It feels at times like you see a significant gulf between representation of future generations that do not exist and inclusion of the current generation's future needs. I think this distinction is much smaller than you imply.
    • Democratic political institutions currently work on roughly a 2 year timeline, at a cost to the future experiences of current generations.
    • 30+ year planning is impractically difficult in most cases, so that puts a limit on how far forward we could expect our institutions to be looking.
    • So, if political systems were making the best decisions over a 30+ year time horizon rather than a 2 year timeline (ie only caring about current generations but caring about their futures) then I think this would cover roughly 95%+ of the policies that a strong longtermist would want to see in the world that are not already happening.
    • As such I think there is huge scope for getting almost all the policies that longtermists might want simply by getting the best long-run policies for current generations.

To add nuance to a few other points:

  • You suggest "When opportunities for change are limited, bide your time". I somewhat agree but I think it is important to flag that bidding time does not look like doing nothing.  I think it is useful for advocates to be working on other aligned goals during this time in order to continue to build traction and connections in the politics space.
  • You suggest "political trades ... advocating for policies, such as committees or funds for future generations, that will not be implemented for a decade or more". I think this is a valid point to raise but my instinct is that it  would be so practically difficult to pull of getting those future commitments to be binding or to happen at all that I am not sure it is that useful a tactic. Also decade is likely too long a period of time to be making these trades over,  a few years (2-5) years might be the limit here.
  • You suggest "advocates of the representation of future generations should concentrate their resources on a few nations". Once again I think this is a very valid point but in practice it works a bit differently form how you describe it. I would break down representation of future generations into a few deferent topics, such as: democratic representation for the next generation, long-run financial prudence, sustainability and preservation, preventing extreme risks, global security, preventing s-risk type scenarios, etc. Each of these topics could be championed on the world stage by a different nation.

 

Overall this post pushes me more towards thinking that future generations advocates should focus more on the long-term benefits to current generations.

The post has also inspired me to add to the ideas list a plan for a webpage where we list all UK MPs who have expressed public support for future generations. This would make it easier for future advocates to hold them to account for their commitment to the future further down the line.

Comment by weeatquince on Money Can't (Easily) Buy Talent · 2021-01-23T07:31:17.005Z · EA · GW

Interesting take on money and talent, thank you for writing it up. I thought I would share some of my experience, which might suggest an opposing view.

I do direct work and I don’t think I could do earning-to-give very well. Also, when I look at my EA friends, I see one who has gone from direct work to earning-to-give, disliked it and gone back to direct work, and one who has gone from earning-to-give to direct work, disliked that and gone back to earning-to-give. Ultimately all this suggests to me that personal fit is likely to be by far the most important factor here, and that these arguments are mostly only useful to the folk who could do really well at either path.

I also think there are a lot of people in the EA community who have really struggled to get into doing direct work (example), and I have struggled to find funding at times and relied on earning-to-give folk to fund me. I wonder if there is maybe a grass-is-greener-on-the-other-side effect going on.
 

Comment by weeatquince on Thoughts on whether we're living at the most influential time in history · 2021-01-12T11:01:29.499Z · EA · GW

My thanks to Will and Buck for such an interesting, thoughtful debate. However, to me there seems to be one key difference that is worth drawing out:

 

Will's updated article (here, p14) asks the action-relevant question (rephrased by me) of:

Are we [today] among the very most influential people, out of the very large number of people who will live that we could reasonably pass resources to [over the coming thousand years]

 

Buck's post (this post) seems to focus on the non-action-relevant question (my phrasing) of:

Are we [in this century] at the very most influential time, out of the very large span of all time into the distant future [over the coming trillion years]

 

It seems plausible to me that Buck's criticisms are valid when considering Will's older work; however, I do not think the criticisms that Buck raises here about Will's original HoH still apply to the action-relevant restricted HoH in Will's new paper. (And I see little value in debating the non-action-relevant HoH hypothesis.)

Comment by weeatquince on Web of virtue thesis [research note] · 2021-01-03T08:09:16.138Z · EA · GW

Hi Owen, I think this paper (and the other stuff you have posted recently) is very good. It is good to see breakdowns of longtermism that are more practicably applicable to life and to solving some of the problems in the world today.

 

I would like to draw your attention to the COM-B (Capability, Opportunity, Motivation – Behaviour) model of behaviour change, in case you are not already aware of it. The model is, as I understand it, fairly standard practice in governance reform in international development. It works as follows:

  • The idea is that government decisions are dependent on the Behaviour (B) of key actors. This matches very closely the idea that critical junctures are dependent on the Virtues embodied by key actors.
  • An outside actor can ensure Behaviour goes well (behaviour changed for the better) by addressing key actors' Capabilities (C), Opportunities (O) and Motivations (M). This matches very closely your three points that the key actors must be 3) competent, must 1) know about the problem and must 2) care enough to solve it.
  • The model then offers a range of tools and ways to break down COM into smaller challenges and influence COM to achieve B, and can be worked into a Theory of Change.

 

I think it is useful for you to be aware of this (if you are not already) as:

  • It shows you are on the right track. It sounds from your post like you are uncertain about the Key Virtue assumption. If you just thought up this assumption, it could be good to know that it matches very closely an existing approach to changing the actions of key actors in positions of governance (or elsewhere).
  • It provides a more standard language to use if you want to move from speaking to philosophers to speaking to actors in the institutional reform space.
  • COM-B may be a better model. By virtue of being tried and tested, it likely has empirical evidence and academic papers behind it. Of course it is not perfect and there are criticisms of it (similar to how there are criticisms of QALYs in global health, but they are still useful).
  • It provides a whole range of useful tools for thinking through the next steps of influencing key behaviours / key virtues. As mentioned there are various ways of breaking down the problems, tools to use to drive change, and even criticisms that highlight what the model misses.

 

I hope this is useful for resolving some of the uncertainty you expressed about the Key Virtue assumption and for refining next steps when you come to work on that.

I would caveat that I find COM-B useful to think through but I am not a practitioner (I'm like an EA thinking in QALYs but not having to actually work with them).

I think there is a meta point here: I keep reading papers from FHI/GPI and getting the impression (rightly or wrongly) that stuff that is basic from a policy perspective is being derived from scratch, often worse than the original. I would be keen to see FHI/GPI engage more with existing best practice at driving change.

Comment by weeatquince on Improving Institutional Decision-Making: a new working group · 2021-01-02T08:00:13.335Z · EA · GW

Thank you Ian. Grateful for the thoughtful reply. Good to hear the background on the name, and I agree it makes sense to think of scope in a more fuzzy way (eg in scope, on the edge of scope like CFAR, useful meta projects like career advice, etc).

Just to clarify, my point here was not one of "whether to emphasize institutions or decision-making more" (sorry if my initial comment was confusing) but kind of the opposite point: it would make sense to ensure both topics are roughly equally emphasised (and I'm not sure your post does that).

Depending on which you emphasise and which questions you ask, you will likely get different answers, different interventions, etc. At an early scoping stage, when you don't want to rule out much, maintaining a broad scope for what to look into is important.

Also, to flag, I don't find the "everything is decision making" framing as intuitive or useful as you do.

Totally off topic from my original point, but it is interesting to note that my experience is the polar opposite of yours. Working in government there was a fair amount of thought and advice and tools for effective decision making, but the institutional incentives were not there. Analysts would do vast amounts of work to assess decisions and options simply to have the final decision made by a leader just looking to enrich themselves / a politician's friend / a party donor / etc.

I'd still focus on finding answers from both angles for now, but, given my experience and given that governments are likely to be among the most important institutions, if I had to call it one way or the other, I'd expect the focus on improving decision making to be less fruitful than the focus on improving institutions.

Keep up the great work!

Comment by weeatquince on Effective charities for improving institutional decision making and improving global coordination · 2021-01-02T07:30:11.928Z · EA · GW

I work for the APPG for Future Generations (https://www.appgfuturegenerations.com) in this space. Our impact report is here: https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1 If you wish to donate please get in touch.

The APPG is affiliated with the Centre for the Study of Existential Risk (https://www.cser.ac.uk/), which I believe is the best research organisation with content related to longtermism and improving institutional decision making.

More generally I think Transparency International (https://www.transparency.org/en/) and Global Witness (https://www.globalwitness.org/en/) are the dominant charities in the space of reducing government corruption, a key feature of improving institutional decision making. I have not seen any evaluations of them but I'd expect they'd do well.

See also some of the institutions listed in this article (https://forum.effectivealtruism.org/posts/94QtuT4ss3RzrfH8A/improving-institutional-decision-making-a-new-working-group) under the section on "IIDM within and outside of EA"

Comment by weeatquince on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-12-31T15:42:46.200Z · EA · GW

Upon reflection I think that in my initial response to this post I was applying a UK lens to a US author.

I think the culture war dynamics (such as cancel culture) in the USA are not conducive to constructive political dialogue (I agree with Larks on that). Luckily this has not seeped into UK politics very much, at least so far, but it is something I worry about. I see articles in the UK (on the right) making out that cancel culture (etc) is a problem, often with examples from the States. I expect (although this is not a topic I think much about) that articles of that type are unhelpfully fanning the culture war flames more than quelling them. As such I had a knee-jerk reaction to this post and put it in the same bucket as such articles. I think I was applying a UK lens to a US author, without thinking about whether it applied.

That said I still think that Larks is (similarly) unfairly applying a US lens and US examples to a German situation, without making a good case that what they say applies in the German cultural context. As such I think he may well be being too harsh on EA Munich.


Comment by weeatquince on Health and happiness research topics—Part 1: Background on QALYs and DALYs · 2020-12-31T15:20:28.517Z · EA · GW

Hi Derek, just a note to say that my experience of reading the article was that I also found the welfare and wellbeing definitions confusing. Also, doesn’t "welfare economics" look to maximise "wellbeing" by your definition? Or maybe I am still confused. Might be worth clearly defining these at the start of future work.

Comment by weeatquince on Health and happiness research topics—Part 1: Background on QALYs and DALYs · 2020-12-31T15:14:14.106Z · EA · GW

Hi Derek.

Fantastic work. Very excited to see Rethink Priorities branch out into more meta questions on how to measure what value is and so on. Excited to read the next few posts when I have time.

A few thoughts:

 

1. Have you done much stakeholder engagement? One thing that was not here (although maybe I have to wait for post 9 for this) that I would love to see is some idea of how this work feeds through to change. Have you met with staff at NICE or Gates or DCP or other policy professionals, and talked to them about why they are not improving these metrics and how excited they would be to have someone work on improving them? (This feels like the kind of step that should be taken before the project goes too far.)

 

2. Problem 4 – neglect of spillover effects – probably cannot be solved by changing the metric. It feels more like an issue with the way the metric is used. You sort of cover this when you say "The appropriate response is unclear." I expect making the metric include all spillover effects is the wrong approach, as the spillover effects are often quite uncertain, and quantifying highly uncertain effects within the main metric seems problematic. That said, I am not sure about this, so am just chipping in my two cents.

(For example, when I worked at Treasury we refused to consider spillover effects at all, I think because there was a view that any policy could be justified by someone claiming it had spillover effects. Then again, the National Audit Office did say our own spending measures were not leading to long-term value for money, so maybe that was the wrong approach.)

 

3. Who do you recommend funding if I want to see more work like this, or a project to improve and change these metrics? You personally? Rethink Priorities? Happier Lives Institute? Someone else? Nobody at present?

 

4. How is the E-QALY project going? I clicked the link for the E-QALY project (https://scharr.dept.shef.ac.uk/e-qaly/about-the-project/). It says it was due to finish in 2019. Any idea what happened to it?

 

Best of luck with the rest of the project.

Comment by weeatquince on Improving Institutional Decision-Making: a new working group · 2020-12-30T17:46:33.855Z · EA · GW

This is really good and I am really excited by this project. Well done on such an excellent post and all the community building work and so on.

(Some of this I put in my earlier comments on a draft but repeating here publicly, hope that is OK)

 

A few thoughts, questions and ideas come to mind.

 

Did you ever consider changing the name? Maybe the name doesn’t really matter much, but if "Improving Institutional Decision Making" has been hard for people to understand then there could be better names, like 'good governance' or 'institutional reform', etc.

 

Is it useful to try to narrow, broaden or otherwise define the scope of IIDM? The borders of what exactly IIDM is will always be fuzzy, and may change with time. But it could still be somewhat helpful to try to set the scope of what you are interested in. Although maybe such an exercise is futile, will lead to unnecessary arguments and will just exclude people or ideas, and we want to be as broad as possible for now. The kinds of things I am thinking about are:

  • Helping progress the careers of EA aligned folk (eg 80K) – you already rule this out in your post. [I'd agree]
  • Improving individual decision making (eg CFAR, LessWrong, etc). [My view is that this is not IIDM but maybe you think it is]
  • Improving organisations in ways that are not directly decision making related, such as improving their efficiency, communications, reputation, representativeness to a population, etc? [I am not sure about this one]
  • Creating new institutions? [I think most people I know would consider this IIDM. I think the aim is not to improve a specific institution but the collective decision making of institutions]

 

In order to disambiguate, it could be worth trying to better define IIDM – and doing this in ways that draw out more of the questions that might be asked. I feel that your current definition of IIDM overly focuses on decision making. Asking how we improve decision making and then applying this to institutions might give a different answer to asking how we improve institutions and then seeing how that can be applied to their decision making. The way you describe IIDM seems to do more of the former than the latter. I think there could be an advantage to ensuring the question is approached from both angles.

That said, later in the post you skew the other way and ask "what are the most important institutions in the world", not "what are the most important decisions made by institutions"; as above, approaching the question both ways could be better.

 

These are difficult questions to tease out. I think some sort of consultative, community-based approach to this could be useful: working as broadly as possible to include people who want to be involved, and getting their views on names, on questions of wording, on definition and on scope.

 

Thank you for all the good work and best of luck.

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T07:36:03.063Z · EA · GW

Hi Ben. I agree with you. Yes I think roulette is a good analogy. And yes I think the "perfect information on expected value" is a strange claim to make.

But I do think it is useful to think about what could be said and justified. I do think a claim along these lines could be made and it would not be wholly unfalsifiable and it would not require completely preferencing Bayesian expected value calculations.

 

To give another analogy I think there is a reasonable long-termist equivalent of statements like:

Because of differences in wealth and purchasing power we expect that a donor in the developed west can have a much bigger impact overseas than in their home country. So in practice looking towards those kinds of international development options is a useful tool to apply when we are deciding what to do. 

This does not completely exclude the probability that we can have impact locally with donations, but it does direct our searching.

 

Being charitable to Will+Hilary, maybe that is all they are saying. And maybe it is so confusing because they have dressed it up in philosophical language – but this is because, as per GPI's goals, this paper is about engaging philosophy academics rather than producing any novel insight.

(If being more critical, I am not convinced that Will+Hilary successfully give sufficient evidence to make such a claim in this paper; also see my list of things their paper could improve above.)

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T17:43:37.621Z · EA · GW

Yeah that is a good way of putting it. Thank you.

It is of course a feature of trying to prioritise between causes in order to do the most good that some groups will be effectively ignored.

Luckily in this case, if done in a sensible manner, I would expect a strong correlation between short-term welfare and long-run welfare. Managing high uncertainty should involve some amount of ensuring good feedback loops and iterating: taking action that changes things for the better (for the long run, but in a way that affects the world now), learning, and improving. Building the EA community, developing clean meat, improving policy making, etc.

(Unfortunately I am not sure to what extent this is a key part of the EA longtermist paradigm at present.)

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T16:05:59.247Z · EA · GW

@ Ben_Chugg

Curious how much you would agree with a statement like:

If we had perfect information [edit: on expected value] the options that would be best to do would be those that positively affect the far future. So in practice looking towards those kinds of options is a useful tool to apply when we are deciding what to do. 

(This is my very charitable, weak interpretation of what the Case for Strong Longtermism paper is attempting to argue)

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T15:53:29.406Z · EA · GW

OK Jack, I have some time today so let's dive in:

 

So, my initial reading of 4.5 was that they get it very very wrong.

Eg: "we assumed that the correct way to evaluate options in ex ante axiological terms, under conditions of uncertainty, is in terms of expected value". Any of the points above would disagree with this.

Eg: "[Knightian uncertainty] supports, rather than undermining, axiological strong longtermism". This is just not true. Some Knightian uncertainty methods would support (eg robust decision making) and some would not support (eg plan-and-adapt).

 

So why does it look like they get this so wrong?

Maybe they are trying to achieve something different from what we in this thread think they are trying to achieve.

My analysis of their analysis of Knightian uncertainty can shed some light here.

The point of Knightian (or deep) uncertainty tools is that an expected value calculation is the wrong tool for humans to use when making decisions under Knightian uncertainty: an expected value calculation, as a decision tool, will not lead to the best outcome, the outcome with the highest true expected value. [Note: I use true expected value to mean the expected value if there was no uncertainty, which can be different from the calculated expected value.] The aim is still the same (to maximise true expected value) but the approach is different. Why the different approach? Because in practice expected value calculations do not work well – they lead to anchoring, lead to unknown unknowns being ignored, are super sensitive to speculation, etc. The tools used are varied but include tactics such as encouraging decision makers to aim for an option that is satisficing (least bad) on a variety of domains rather than maximising (this specific tool is to minimise the risk of unknown unknowns being ignored).
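(A toy sketch, with entirely made-up numbers, of why these two decision rules can come apart: an option can win on calculated expected value while another is "least bad" across every scenario considered.)

```python
# Toy illustration (made-up payoffs): three options across three scenarios,
# the last scenario standing in for an unknown unknown.
options = {
    "A": [100, -20, -20],  # great in scenario 1, bad elsewhere
    "B": [10, 10, 10],     # unspectacular but robust
    "C": [30, 20, -40],
}

# Naive expected value with equal weights picks the fragile option A...
ev_pick = max(options, key=lambda o: sum(options[o]) / 3)

# ...whereas a satisficing/maximin rule (best worst case) picks robust B.
maximin_pick = max(options, key=lambda o: min(options[o]))

print(ev_pick, maximin_pick)  # A B
```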

But when Will+Hilary explain Knightian uncertainty they explain it as if it poses a fundamental axiological difference. As if aiming for the least bad option is done because the least bad option is the true best option (as opposed to the calculated best option, if that makes sense). This is not at all what anyone I know who uses these tools believes.

Let's pause and note that, as Knightian uncertainty tools are still aiming at guiding actors towards the true highest expected value, they could theoretically be explained in terms of expected value. They don’t challenge the expected value axiology.

Clearly Will+Hilary are not, in this paper, interested in whether it poses an alternative methodology for reaching the true expected value; they are only interested in whether it could be used to justify a different axiology. This would explain why the paper ignores all the other tools (like predict-then-act tools), focuses on this one tool, and explains it in a strange way.

The case they are making (by my charitable reading) is that, if we are aiming for true expected value, then because the future is so, so big we should expect to be able to find at least some options that influence it, and the thing that does the most good is likely to be among those options.

They chose expected value calculations as a way to illustrate this.

As Owen says here, they are "talking about how an ideal rational actor should behave – which I think is informative but not something to directly emulate".

They do not seem to be aiming to say anything on how to make decisions about what to focus on.

 

So I stand by my claim that the most charitable reading is that they  are deliberately not addressing how to make decisions. 

 

--

As far as I can tell, in layman's terms, this paper tries to make the case that: if we had perfect information [edit: on expected value], the options that would be best to do would be those that positively affect the far future. So in practice looking towards those kinds of options is a useful tool to apply when we are deciding what to do.

 

FWIW I expect this paper is largely correct (if the conclusion is as above). However I think it could be improved in some ways:

  • It is opaque. Maybe it is clearer to fellow philosophers, but I reached my view of what the paper was trying to achieve by looking at how they manage to mis-explain a core decision making concept two-thirds of the way through, and then extrapolating the ways they could rationally be making their apparent errors. It is not easy to understand what they are doing, and I think most people on this thread would have a different view to me about this paper. It would be good to have a bit more text for us layfolk.
  • It could be misconstrued. Work like this leads people to think that Will+Hilary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes; if I am incorrect then their paper is either poor or I really don’t get it.)
  • It leaves unanswered questions, but does not make it clear what those questions are. I do think it is useful to know that we should expect the most high impact actions to be those that have long run positive consequences. But how the hell should anyone actually make a decision and compare short term and long term? This paper does not help on this. It could maybe highlight the need to research this.
  • It is a weak argument. It is plausible to me that alternative decision making tools might confuse their conclusions so much that, when applied in practice by a philanthropist etc, the result largely does not apply.
  • For example, one could believe that economic growth is good for the future, that most people who try to impact the world positively without RCT-level evidence fail, and that situations of high uncertainty are best resolved through engineering short feedback loops, and quite rationally conclude that AMF (bednets) is currently the charity that has the biggest positive long-run effect on the future. I don’t think this contradicts anything in the paper and I don’t think it would be unreasonable.
  • There are other flaws with the paper too, in the more empirical part with all the examples. Eg even a very, very low discount rate to account for things like extinction risk or sudden windfalls really quickly reduces how much the future matters. (Note this is different from pure time preference discounting.)
  • In my view they overstate (or are misleading about) what they have achieved. Eg I do not think, for the reasons given, that they have at all shown that "plausible deviations from [an expected utility treatment of decision-making under uncertainty] do not undermine the core argument". (This is only true insofar as decision-making approaches are, as far as I can tell, not at all relevant to their core argument.) They have maybe shown something like: "plausible deviations from expected utility theory do not undermine the core argument".

 

Let me know what you think.

Catch ya about :-)

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T12:45:53.157Z · EA · GW

Ah yes thank you Owen. That helps me construct a sensible positive charitable reading of their paper.

There is of course a risk that people take their paper / views of longtermism and the expected value approach to be more decision-guiding than perhaps they ought to be.

(I think it might be an overly charitable reading – the paper does briefly mention and then dismiss concerns about decision making under uncertainty, etc. – although it is only a draft, so it is reasonable to be charitable.)

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T11:17:00.627Z · EA · GW

the word choice doesn't really matter here

I think it is worth trying to judge the paper / case for longtermism charitably. I do not honestly think that Will means that we can literally ignore everything in the first 100 years – for a start, because the short term affects the long term. If you want to evaluate interventions, even those designed for long-term impact, you need to look at the short-term impacts.

But that is where I get stuck trying to work out what Will + Hilary mean. I think they are saying more than just that you should look at the long- and short-term effects of interventions (trivially true under most ethical views).

They seem to be making empirical, not philosophical, claims about the current state of the world.

They appear to argue that if you use expected value calculations for decision making then you will arrive at the conclusion that you should care about highly speculative long-term effects over clear short-term effects. They combine this with an assumption that expected value calculations are the correct decision-making tool to conclude that long-term interventions are most likely to be the best interventions.
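To make that concrete, here is a stylised version of the sort of calculation involved (toy numbers of my own, purely for illustration):

```python
# Stylised expected value comparison (toy numbers, mine): a tiny probability
# of influencing a vast future can dominate a near-certain short-term benefit.
short_term_ev = 0.95 * 1          # ~one life saved, with high confidence
p_long_term = 1e-10               # highly speculative chance of success
future_lives = 1e16               # scale sometimes quoted for future populations
long_term_ev = p_long_term * future_lives

print(short_term_ev)              # 0.95
print(long_term_ev)               # 1,000,000.0 - the speculative bet "wins"
```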

I think 

  • the logic of the argument is roughly correct.
  • the empirical claims made are dubious and ideally need more than a few examples to justify, but it is plausible they are correct. I think there is at least a decent case for marginal extra resources being directed to x-risk prevention in the world today.
  • the assumption that expected value calculations are the correct decision-making tool is incorrect (as per others at GPI like Owen's work and Andreas' work, bounded rationality, the entire field of risk management, economists like Taleb, Knightian uncertainty, etc. etc.). A charitable reading would say that they recognise this is an assumption but choose not to address it.

 

Hmmm... I now feel I have a slightly better grasp of what the arguments are after having written that. (Ben, I think this counts as disentangling some of the claims made, and more such work could be useful.)

 

Vadmas – I think there can be grounds for refusing to follow arguments that you cannot disprove, based solely on the implausibility or repugnance of their conclusions, which appears to be your response to their paper. I am not sure it is needed, as I don’t think the case for strong longtermism is well made.

Comment by weeatquince on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T17:30:26.689Z · EA · GW

I think the ontology used by Greaves+MacAskill is poor. I skim-read their Case for Strong Longtermism paper honestly expecting it to be great (Will is generally pretty sensible) but I came away quite confused as to what case was being made.

 

Ben – maybe there needs to be more of an exercise to disentangle what is meant by longtermism before it can be critiqued fairly.

 

Owen – I am not sure if you would agree, but as far as I can tell the points you make about bounded rationality in the excellent post you link to above contradict the Case for Strong Longtermism paper. E.g.:

  • Greaves+MacAskill: "we assumed that the correct way to evaluate options ... is in terms of expected value" (as far as I can tell their entire point is that you can always do an expected value calculation and "ignore all the effects contained in the first 100" years).
  • You: "if we want to make decisions on longtermist grounds, we are going to end up using some heuristics"
Comment by weeatquince on Blueprints (& lenses) for longtermist decision-making · 2020-12-28T16:52:04.330Z · EA · GW

Thank you for the excellent post

 

I want to invite readers to attempt to describe their own implicit blueprints.

My primary blueprint is as follows:

I want the world in 30 years' time to be in as good a state as it can be, in order to face whatever challenges come next.

This is based on a few ideas.

  • Firstly, almost no business, government or policy-maker ever makes plans beyond a 25-30 year time horizon; this fact, together with my own understanding of how to manage situations of high uncertainty, is what put the 30-year time limit in place.
  • Secondly, there is an idea common across long-term policy making that setting out a clear vision of what you want to achieve in the long term is a useful step. The Welsh Future Generations Bill and the work on long-term policy making in Portugal from the School of International Futures are examples of this.

Maybe you could describe this as a lens of: What is current best practice in long-term policy thinking?

This is combined with a few alternative approaches (alternative blueprints) such as: what will the world look like in 1, 2, 5, 10, 20 years? What are the biggest risks the world will face in the next 30 years? Of the issues that really matter, what is politically of interest right now?

 

I think that longtermism poses deep problems of bounded rationality, and working out how to address those (in theory and in practice) is crucial if the longtermist project is to have the best chance of succeeding at its aims. I think this means we should have a lot of discussion about: ...

I strongly agree.

However I think very few in the longtermism community are actually in need of lenses and blueprints right now. In fact sometimes I feel like I am the only one thinking like this. Maybe it is useful to staff like you at FHI deciding what to research, and it is definitely useful to me as someone working on policy making from a longtermist perspective. But most folk are not making any such decisions.

For what it is worth, one of my main concerns with the longtermism community at present is that it feels very divorced from actual decisions about how to make the world better. Worse, it sometimes feels like folk in the longtermism community think that expected value calculations are the only valid decision-making tool. I plan to write more on this at some point and would be interested in talking it through if you fancy it.



 

Comment by weeatquince on [Feedback Request] The compound interest of saving lives · 2020-12-27T10:12:47.797Z · EA · GW

Economists routinely discount the future because they expect the future to be richer. This seems analogous and might be worth looking into; I expect there is a fair amount written on the topic, although I don't have good links.

(Note: this is different to pure time preference discounting which is what many folk in the EA community object to and what I assume you mean when you say "we dismiss discount rates")
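For what it's worth, the standard formalisation of this is the Ramsey discounting rule; a minimal sketch below (my implementation of the textbook formula):

```python
# Ramsey discounting rule (textbook formula, my sketch):
#   r = delta + eta * g
# delta = pure time preference (what many EAs object to)
# eta   = how quickly marginal utility falls as consumption rises
# g     = expected consumption growth rate
def ramsey_rate(delta: float, eta: float, g: float) -> float:
    return delta + eta * g

# Even with zero pure time preference you get a positive discount rate
# purely because the future is expected to be richer:
print(ramsey_rate(delta=0.0, eta=1.5, g=0.02))  # 0.03, i.e. 3% per year
```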

Comment by weeatquince on EA Meta Fund Grants – July 2020 · 2020-12-27T09:34:01.613Z · EA · GW

Thank you for the excellent write up. And thank you for all the good work you do.

My gut reaction to this post is that the Future of Humanity Foundation feels like the kind of project I'd expect to come under the Long Term Future Fund rather than the EA Meta Fund.

I would be curious to hear more about how the meta fund decides on projects that are meta in scope but cause area specific. (How do such grants align with donors' expectations? Do the grantmakers have expertise in domain specific issues? Is there an attempt to balance across cause areas? Etc)

Comment by weeatquince on Giving What We Can & EA Funds now operate independently of CEA · 2020-12-24T08:43:16.196Z · EA · GW

This is really exciting. Especially the success of Giving What We Can over the last year – it suggests there is a lot of scope for effective growth here.

On the EA Funds, are things like the 'guidelines for avoiding harmful grants' going to be visible to the public? I can see pros and cons to transparency about working documents like that, but I lean in favour (and I am curious to see).

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-25T10:26:28.652Z · EA · GW

Thank you all – super interesting reading.

FWIW as a donor I would be very wary of giving to a research organisation without a theory of change and/or strategic plan and an idea of how to measure impact (surveys or otherwise). Someone saying such work was not needed would be a massive red flag to me. It is like a global health charity saying "we don’t need to measure impact, we know we are doing good" – maybe that charity is the most effective global health charity in the world, but it is not going to be able to convince me of that fact.

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-17T13:32:20.585Z · EA · GW

Dear Brian, thank you for the really helpful reply. That's good info and really useful. (Also FWIW I suggest posting it as an answer to the main question above rather than in the comments as it would be more visible there).

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-17T13:28:16.247Z · EA · GW

Hi Peter, super keen to hear your thoughts and plans and evaluations and always happy to talk through. (FWIW I currently plan to donate £4-6k early Dec.)

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-13T17:04:55.345Z · EA · GW

Yet it would be an even stronger case if organisations produced research that is both peer reviewed, so as to build an academic field, AND has immediate real-world outcomes. This seems possible, and the two papers you cited would pass that bar. Hence my curiosity to try to see what EA research is being used.

(The lack of responses so far implies either that not much research from these organisations is currently having any real-world, medium-term measurable output, or that I am asking the wrong question, asking in the wrong place or asking in the wrong way.)
 

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-13T09:29:25.448Z · EA · GW

Thank you Jack. Maybe see my update comment below. I don’t know how to evaluate the impact of academic research where I cannot see any real world use of that research. That is not to say that I don’t think it has value but the feedback loops to creating value are really long and opaque, especially for the kind of philosophical work GPI appear to be focusing on. If you have a good way of evaluating that kind of research do say, I would love to hear. But at present I would be more excited to donate to organisations where there is use of their research in some tangible way, like Charity Entrepreneurship or Founders Pledge etc (or maybe like CSER as a longtermist one).

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-13T09:18:54.913Z · EA · GW

UPDATE COMMENT:

I am currently leaning towards donating to somewhere like Charity Entrepreneurship where there is a clear path from research to real-world output. I am sure the academic research has real-world implications but I find it hard to judge this mechanism, and there is a limit to how much capacity I have to investigate that topic.

Alternatively, given that I have such limits to my capacity for donation decisions I may just donate to the EA Infrastructure Fund.

I would be persuaded to donate elsewhere if this post, or other steps I take to investigate this topic, shows that the work of EA aligned research organisations was leading to real world outcomes.

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-13T09:05:23.360Z · EA · GW

Hi Michael.

No, I am super interested in what research has guided people's career decisions and donation decisions.

I just thought for simplicity that it was not worth having lots of people say "80K affected my career decisions", as I think there is already very good evidence of this, or having lots of people say "GiveWell affected my donation decisions", as there is similarly good evidence for this. But if GiveWell research (or any non-career org) affected your career decisions, or if say Open Philanthropy research (or any non-charity-evaluator) affected your donation decisions, then I am keen to hear it.

Added an edit for clarity. Thank you for the question.

Comment by weeatquince on Which EA organisations' research has been useful to you? · 2020-11-11T09:42:03.737Z · EA · GW

What I use research for: I advocate for Future Generations policy within the UK Parliament. This involves using cause prioritisation research to decide where to focus my time and attention, and using research on policy and governance to decide what to advocate for.

 

Most useful:

Next most useful:

  • FHI: Work relevant to biosecurity policy like this. Some stuff on risk prevention eg The Precipice book and this survey.

Honourable mentions:

  • Global Priorities Project: this summary paper on x-risks
  • AllFed: this paper on food risks
  • OpenPhil: this and this table on policy priorities
  • EA Forum: general useful tool for feedback and new ideas.
  • Useful background newsletters from CSET, on EuropeanAI and the x-risk.net newsletter (which led me to this interesting paper on Existential security)

Not as well known in EA, but shout outs are deserved to:

Comment by weeatquince on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-11-04T00:12:00.630Z · EA · GW

I edited over my original post – it was too unfair to Larks and did not convey the message I intended.

In short I felt that  this was good post well considered and well written. That said:

  • I didn’t find the case made in this post very convincing: I think it defines "cancel culture" as bad rather than proving it so (it could have used a more neutral term like "boycott"). I also think the German cultural context (for EA Munich) might be very different from the US cultural context (as per the examples), and I was not generally convinced the examples were analogous to the Munich case.
  • Perhaps because I found this post unconvincing, it made me worry about folks in EA focusing, without sufficient evidence, on issues such as cancel culture that are the focus of or used by one side of a political spectrum and not the other. (I am not saying this post is bad, just that it worries me, and noting a need for caution.) [Edit: I have been led to believe Larks put effort into making this post politically neutral and I think that is valuable and appreciated.]
Comment by weeatquince on Longtermist reasons to work for innovative governments · 2020-10-25T11:16:03.603Z · EA · GW

Hi Alexis, thank you for the post. I roughly agree with the case made here. 

-- 

1.

I thought I would share some of my thoughts on the "diffusion of institutional innovations":

* I worked in government for a while. Where there is an incentive to make genuine policy improvements and a motivation to do so, this matters. One of the key things that would be asked of a major new policy would be: what do other countries do? (Of course a lot of policy making is not political, so the motivation to actually make good policy may be lacking.)

* Global shocks also force governments to learn. There was stuff done in the UK after Fukushima to make sure our nuclear plants are safe. I expect after the Beirut explosion countries are learning about fertiliser storage.

* On the other hand, I have also worked outside government trying to get new policies adopted, such as policies other countries already have, and it is hard – so this does not happen easily.

* I would tentatively speculate that it is easier for innovations to diffuse when the evidence for the usefulness of the policy is concrete. This might be a factor against some of the longtermist institution reforms that Tyler and I have written about. For example, “policing style x helped cut crime significantly” is more likely to diffuse than “longtermist policy y looks like it might lead to a better future in 100 years”. That said, I could imagine diffusion also happening where there is a large public movement and very minimal costs, for example tokenistic policies like “declare a climate emergency”. This could work in favour of longtermist ideas, as making a policy now to have an effect in many years' time, if the cost now is low enough, might match this pattern.

--

2.

I also think that senior government positions, even in smaller countries, can have a long-term impact on the world in other ways:

* Technological innovation. A new technological development in one country can spread globally.

* Politics. Countries can have a big impact on each other. A simple example: the EU is made up of many member states who influence each other.

* Spending. Rich countries especially, like those in Scandinavia, can impact others with spending, e.g. climate financing.

* Preparation for disasters. Firstly, building global resilience -- eg Norway has the seed bank -- innovations like that don’t need to spread to make the world more resilient to shocks, they just need to exist. Secondly, countries copy each other a lot in disaster response -- eg look at how uniform the response to COVID has been -- so having good disaster plans can help everyone else when a disaster actually hits.

 --

3.

I think it matters not to forget the direct impact on the citizens of that country. Even a small country will have $10-$100m annual budgets. Having a small effect on that can have a truly large-scale positive direct impact.

Comment by weeatquince on Hiring engineers and researchers to help align GPT-3 · 2020-10-04T16:08:42.738Z · EA · GW

Hi, quick question, not sure this is the best place for it but curious:
 

Does work to "align GTP-3" include work to identify the most egregious uses for GTP-3 and develop countermeasures?

Cheers

Comment by weeatquince on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-21T11:26:10.151Z · EA · GW

This is a fascinating question – thank you.

Let us think through the range of options for addressing Pascal's mugging. There are basically 3 options:

  • A: Bite the bullet – if anyone threatens to cause infinite suffering then do whatever they say.
  • B: Try to fix your expected value calculations to remove the problem.
  • C: Take an alternative approach to decision making that does not rely on expected value.

It is also possible that all of A and B and C fail for different reasons.*

Let's run through.

 

A:

I think that in practice no one does A. If I email everyone in the EA/longtermism community and say "I am an evil wizard, please give me $100 or I will cause infinite suffering!", I doubt I will get any takers.

 

B:

You made three suggestions for addressing Pascal's mugging. I think I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternate decision making tool).

I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight.

You could maybe make this work using a heavy discount based on "optimiser's curse" type factors to reduce the expected value of high-uncertainty, high-value options. I am not sure.
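As a very rough sketch of what option B could look like (my own toy penalty function, not a worked-out proposal): if your credence in a mugger's claim falls faster than the claimed payoff grows, the expected value stays bounded.

```python
# Toy sketch (my assumption, not a worked-out theory): let credence in the
# mugger's claim shrink faster than the claimed payoff grows, so inflating
# the threat no longer inflates the expected value.
def credence(claimed_payoff: float) -> float:
    # Assumed penalty: credence falls with the square of the claim's size.
    return min(1e-4, 1.0 / claimed_payoff**2)

for payoff in (1e2, 1e9, 1e100):
    ev = credence(payoff) * payoff
    print(f"claimed payoff {payoff:.0e}: expected value {ev:.0e}")
# The EV now *falls* as the claim grows - though justifying the exact
# penalty function is the hard part.
```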

(The GPI paper on cluelessness basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)

 

C:

I do think you could make your third option, the common sense version, work. You just say: if I follow this decision rule it will lead to very perverse outcomes, such as me having to give everything I own to anyone who claims they will otherwise cause me infinite suffering; it seems so counter-intuitive that I should do this that I will decide not to. I think this is roughly the approach that most people follow in practice. It is similar to how you might dismiss a proof that 1+1=3 even if you cannot see the error. It is, however, a bit of a dissatisfying answer as it is not very rigorous: it is unclear when a conclusion is so absurd as to require outright rejection.

It does seem hard to apply most of the DMDU approaches to this problem. An assumption based modeling approach would lead to you writing out all of your assumptions and looking for flaws – I am not sure where it would lead.

If looking for a more rigorous approach, the flexible risk planning approach might be useful. Basically, make the assumption that when uncertainty goes up, the ability to pinpoint the exact nature of the risk goes down. (I think you can investigate this empirically.) So placing a reasonable expected value on a highly uncertain event means that, in reality, events vaguely of that type are more likely, but events specifically as predicted are themselves unlikely. For example, you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations. This might allow you to avoid the Pascal mugger and invest appropriate time into more general, more flexible evil wizard protection.

 

Does that help?

 

 * I worry that I have made this work by defining C as everything else, and that the above is just saying Paradox -> No clear solution -> Everything else must be the solution.

Comment by weeatquince on How can good generalist judgment be differentiated from skill at forecasting? · 2020-09-17T05:17:33.728Z · EA · GW

Thanks Ben, super useful.

@Linch I was taking a very, very broad view of judgment.
Ben's post is much better and breaks things down in a much nicer way.

I also made a (not particularly successful) stab at explaining some aspects of not-foresight driven judgement here: https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other#Story_1__RAND_and_the_US_military
 

Comment by weeatquince on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-16T13:49:33.502Z · EA · GW

Hi Andreas, Excited you are doing this. As you can maybe tell I really liked your paper on Heuristics for Clueless Agents (although not sure my post above has sold it particularly well). Excited to see what you produce on RDM.

Firstly, measures of robustness may seem to smuggle probabilities 

This seems true to me (although not sure I would consider it to be "by the backdoor").
Insofar as any option selected through a decision process will, in a sense, be the one with the highest expected value, any decision tool will have probabilities inherent in it, either implicitly or explicitly. For example, you could see a basic Scenario Planning exercise as implicitly stating that all the scenarios are of reasonable (maybe equal) likelihood – see the toy sketch below.
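A toy sketch of that point (my construction, not from the RDM literature): scoring options by "in how many scenarios do they clear a threshold" is just an expected value calculation with a uniform prior over scenarios.

```python
# Toy sketch (my construction): a "robustness" score that counts the
# scenarios in which an option clears a threshold is the same number as
# the expected value of "success" under equal scenario probabilities.
payoffs = {
    "option_A": [5, 5, 5, 5],    # steady across all four scenarios
    "option_B": [20, 0, 0, 0],   # great in one scenario, fails in the rest
}
THRESHOLD = 3

for name, outcomes in payoffs.items():
    share_ok = sum(o >= THRESHOLD for o in outcomes) / len(outcomes)
    print(f"{name}: clears the bar in {share_ok:.0%} of scenarios")
# option_A: 100%, option_B: 25% - the uniform weighting is the smuggled-in prior.
```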

I don't think the idea of RDM is to avoid probabilities; it is to avoid the traps of expected-value-calculation-based decisions. For example, by avoiding explicit predictions it prevents users making important shifts to plans based on highly speculative estimates. I'd be interested to see if you think it works well in this regard.

 

Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to adoption of a satisficing criterion of choice

Honestly I don’t know (or fully understand this), so good luck finding out.  Some thoughts:

In engineering you design your lift or bridge to hold many times the capacity you think it needs, even after calculating all the things you can think of that could go wrong – this helps protect against the things you didn’t think of going wrong.
I could imagine a similar principle applying to DMDU decision making – that aiming for the option that is satisfactorily robust to everything you can think of might give a better outcome than aiming elsewhere, as it may be the option that is most robust to the things you cannot think of. A rough sketch of this is below.
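Something like the following toy rule (mine, not from the DMDU literature specifically) captures the intuition:

```python
# Toy satisficing rule (my example): choose the option whose *worst-case*
# margin above a "good enough" bar is largest, rather than the option with
# the highest best-guess value - leaving headroom for unmodelled scenarios.
payoffs = {
    "optimised": [100, 2, 2],   # brilliant under the favoured forecast only
    "robust":    [30, 25, 20],  # a comfortable margin in every scenario
}
GOOD_ENOUGH = 10

def worst_case_margin(outcomes: list) -> float:
    return min(outcomes) - GOOD_ENOUGH

choice = max(payoffs, key=lambda name: worst_case_margin(payoffs[name]))
print(choice)  # "robust": even its worst case clears the bar by 10
```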

But not sure. Not sure how much empirical evidence there is on this. It also occurs to me that some of the anti-optimizing sentiment could be driven by rhetoric and a desire to be different.
 

Comment by weeatquince on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-08T18:39:09.953Z · EA · GW

Dear MichaelStJules and rohinmshah

Thank you very much for all of these thoughts. It is very interesting and I will have to read all of these links when I have the time.

I admit I took the view that the EA community relies a lot on EV calculations somewhat based on vague experience, without doing a full assessment of the level of reliance (which would have been ideal), so the posted examples are very useful.

*

To clarify one points:

If the post is against the use of quantitative models in general, then I do in fact disagree with the post.

I was not arguing against quantitative models at all. Most of the DMDU stuff is quantitative models. I was arguing against the overuse of quantitative models of a particular type.

*

To answer one question

would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done?

Yes. I would have been happy to say that, in general, I expect work of this type to be less likely to be useful than other research work that does not try to predict the long-run future of humanity. (This is in a general sense, not considering factors like the researchers' background and skills and so forth.)