Posts

Which EA organisations' research has been useful to you? 2020-11-11T09:39:13.329Z
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs 2020-09-05T12:51:01.844Z
The case of the missing cause prioritisation research 2020-08-16T00:21:02.126Z
APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament 2020-08-12T14:24:04.861Z
Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z
UK policy and politics careers 2019-09-28T16:18:43.776Z
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z

Comments

Comment by weeatquince on A personal take on longtermist AI governance · 2021-07-20T07:30:19.194Z · EA · GW

Thank you Luke – great to hear this work is happening but still surprised by the lack of progress and would be keen to see more such work out in public!

(FWIW, a minor point, but I am not sure I would phrase a goal as "make government generically smarter about AI policy" – just being "smart" is not enough. Ideally you want a combination of smart + good incentives + space to take action. To be more precise, when planning I often use COM-B models, as used in international development governance reform work, to ensure all three factors are captured and balanced.)

 

Comment by weeatquince on EA for Jews - Proposal and Request for Comment · 2021-07-19T17:04:34.222Z · EA · GW

Also, Ben, is there a Jews and EA Facebook group, or any plans to set one up? Or if I set one up, do you think you could email it out / share it?

Comment by weeatquince on A personal take on longtermist AI governance · 2021-07-17T20:37:46.589Z · EA · GW

Thank you Luke for sharing your views. I just want to pick up one thing you said where your experience of the longtermist space seems sharply contrary to mine.

You said: "We lack the strategic clarity ... [about] intermediate goals". Which is a great point and I fully agree. Also I am super pleased to hear you have been working on this. You then said:

I caution that several people have tried this ... such work is very hard

This surprised me when I read it. In fact my intuition is that such work is highly neglected, that almost no one has done any of this, and I expect it is reasonably tractable. Upon reflection I came up with three reasons for my intuition on this.


1. Reading longtermist research and not seeing much work of this type.

I have seen some really impressive forecasting and trend analysis work, but if anyone had worked on setting intermediate goals I would expect to see some evidence of basic steps, such as listing out a range of plausible intermediate goals or consensus-building exercises to set viable short- and mid-term visions of what AI governance progress looks like (maybe it's there and I've just not seen it). If anyone had made a serious stab at this I would expect to have seen thorough exploration exercises to map out and describe possible near-term futures, assumption-based planning, scenario-based planning, strategic analysis of a variety of options, tabletop exercises, etc. I have seen very little of this.


2. Talking to key people in the longtermist space and being told this research is not happening.

For a policy research project I was considering recently I went and talked to a bunch of longtermists about research gaps (eg at GovAI, CSET, FLI, CSER, etc). I was told time and time again that policy research (which I would see as a combination of setting intermediate goals and working out what policies are needed to get there) was not happening, was a task for another organisation, was a key bottleneck that no-one was working on, etc. 
 

3. I have found it fairly easy to make progress on identifying intermediate goals and short-term policy goals that seem net-positive for long-run AI governance.

I have an intermediate goal of: key actors in positions of influence over AI governance are well equipped to make good decisions if needed (at an AI crunch time). This leads to specific policies such as: ensuring clear lines of responsibility exist in military procurement of software/AI, or that if regulation happens it should be expert-driven, outcome-based regulation, or some of the ideas here. I would be surprised if longtermists looking into this (or other intermediate goals I routinely use) would disagree with the above intermediate goal, or deny that the policy suggestions move us towards that goal. I would say this work has not been difficult.

– – 

So why is our experience of the longtermist space so different? One hunch I have is that we are thinking of different things when we consider "strategic clarity on intermediate goals".

My work supporting governments to make long-term decisions has given me a sense of what long-term decision making and "intermediate goal setting" involve. This colours the things I would expect to see if the longtermist community were really trying to do this kind of work, and I compare longtermists' work to what I understand to be best practice in other long-term fields (from forestry to tech policy to risk management). This approach leaves me thinking that there is almost no longtermist "intermediate goal setting" happening. Yet maybe you have a very different idea of what "intermediate goal setting" involves, based on other fields you have worked in.

It might also be that we read different materials and talk to different people. It might be that this work has happened and I've just missed it or not read the right stuff.

– –
Does this matter? I guess I would be much more encouraging than you are about someone doing this work, and much more positive about how tractable such work is. I would advise that anyone doing this work should have a really good grasp of how wicked problems are addressed, how long-term decision making works in a range of non-EA fields, and the various tools that can be used.

Comment by weeatquince on EA for Jews - Proposal and Request for Comment · 2021-07-10T19:18:01.351Z · EA · GW

I have an idea and thought a comment here would be a good place to put it:
I wonder if there should be a Jewish run EA charity or Charitable Fund that directs funds to good places (such as assorted EA organisations).


I think lots of Jews want to give to a Jewish-run organisation or give within the Jewish community. If a Jewish-run EA charity existed it could be helpful for making the case for more global effective giving.

It could be run with Jewish grant managers who ensure that funds are used well and in line with Jewish principles (there could be a Pikuach nefesh fund for saving the most lives, or a Maimonides ladder sustainable growth fund, etc).

To argue against this idea: one of the nice things about EA is that it is not us asking for your money, it is us advising on where you should give your money, which feels nicer and is maybe an easier pitch. So maybe if there was an EA-run Jewish charity or fund it might detract from that, or it should be kept separate from the outreach efforts.

Happy to help a bit with this if it happens.

 

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-11T07:26:53.858Z · EA · GW

Another slightly tangential but very similar question that came up in a conversation I had recently is:

"How well have EA-funded orgs built on the momentum created by the COVID-motivated global interest in GCRs (global catastrophic risks) to drive policy change or other changes to help prevent GCRs and x-risks"

I could have imagined a world where the entire longtermist community pivoted towards this goal, at least for a year or two, and focused all available time, skill and money on driving GCR-related policy change – but this doesn't seem to have happened much. I could imagine the community looking back at this year and regretting the collective lack of action.

The organisation where I work, the APPG for Future Generations, pivoted significantly: we kickstarted a new Parliamentary committee on risks, and I wrote a paper on lessons learned from COVID which attracted significant government interest and seems to have driven policy change (write-up forthcoming).

But beyond that there has definitely been some exciting stuff happening. I know:

  • CSER are starting a lessons learned from COVID project, although this is only just getting started.
  • FHI staff have submitted some evidence to parliamentary inquiries (example).
  • The CLTR (funded by the EAIF) has launched a report on risk (I'm unsure if this was a change in direction or always the plan).
  • No more pandemics (not funded) was started.

This stuff is all great and I am sure there is more happening – but my general sense is that it is much less and much slower than I would have expected.

I also loosely get the impression (from my own experience and that of 2-3 other orgs that I have talked to) that various EA funders have been uninterested in pivoting to support policy work focused on lessons learned from COVID, some of which could scale up quite significantly, and that maybe funding is the main bottleneck for some of this (I think funding for more policy work is a bottleneck for all of the orgs listed above except FHI).

[Disclaimer – I will be biased, given that I pivoted my work to focus on COVID lessons learned and policy influencing and looked for funding for this.]

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T06:59:05.364Z · EA · GW

Hello, thank you for the interesting thoughts. The comments on the GHS index are useful and insightful.

Your analysis of COVID preparation on Twitter is really really interesting. Well done for doing that. I have not yet looked at your analysis spreadsheet but will try to do that soon.

To touch on a point you made about preparation, I think we can take a slightly more nuanced approach to thinking about when preparation works, rather than just saying "effective pandemic response is not about preparation". Some thoughts from me on this (not just focused on pandemics):

  • Prevention definitely helps. (It is a semantic question whether you count prevention as a type of preparation or not.) The world is awash with very clear examples of disaster prevention, whether it is engineering safe bridges, or flood prevention, or nuclear safety, or preventing pathogens escaping labs, etc.
  • The idea that preparation (henceforth excluding prevention) helps is conventional wisdom, and I would want to see good evidence against this to stop believing it.
  • Obviously preparation helps in small cases – talk to a paramedic rushing to treat someone, or a fireman. I have not looked into it, but I get the impression that it helps in medium cases, e.g. rapid response teams responding to terror attacks in the UK/France seem useful, although I am not an expert. On pandemics specifically, the quick containment of SARS seems to be a success story (although I have not looked at how much preparation played a role, it does seem to be part of the story). There are not that many extreme COVID-level cases to look at, but it would be odd if preparation didn't help in extreme cases too.
  • The specific wording of the linked article's headline feels clickbait-y. When you actually read the article, it says that competence matters more (I agree) and that we should focus more on designing resilient, anti-fragile systems rather than on event-specific preparation. I agree, but I think that designing systems that can make good decisions in a risk scenario is itself a form of preparation.
  • I do agree that your analysis provides some evidence that preparation did not help with COVID. I am cautious about the usefulness of this evidence because of the problems with the GHS index – e.g. the UK came near the top but, as far as I can identify, basically had no plan to deal with any non-influenza pandemic.
  • A confounding factor that might make it hard to tell if preparation helped is that, based on the UK experience (e.g. discussed here), it appears that having bad plans in place may actually be worse than having no plans.
  • Evidence from COVID does suggest to me that specific preparation helps. Notably, countries (in East Asia and Australasia) that had SARS and prepared for future SARS-type outbreaks managed COVID better.

So maybe we can say something like:
Prevention definitely helps. Both event-specific preparation and generally building robust, anti-fragile decision systems are useful approaches, but the latter is more underinvested in. However, good leadership is necessary as well as preparation, and without good leadership (which may be rare) preparation can turn out to be useless. Furthermore, bad preparation, such as poor planning, can potentially hinder a response more than no preparation.

Does that seem like a good summary, and does it sufficiently explain your findings?

I am thinking about doing more work to promote preparation, so it would be useful to hear if you disagree.

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T00:01:50.809Z · EA · GW

[Edit – moved comment to answer above at suggestion of kbog] 

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T23:00:41.181Z · EA · GW

Thank you :-)

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-08T10:35:44.980Z · EA · GW

I think a significant issue is that both of these cost time

I am always amazed at how much you fund managers all do given this isn't your paid job!
 

I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas

Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.
 

... it could be that I'm just bad at getting value out of discussions, or updating my views, or something like that.

That is possible. But also possible that you are particularly smart and have well thought-out views and people learn more from talking to you than you do from talking to them!
(And/or just that everyone is different and different ways of learning work for different people)

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T20:30:41.276Z · EA · GW

Thank you so much for your thoughtful and considered reply.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers and because within the EAIF committee large disagreements are not uncommon (and I have no reasons to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

 

Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).

Why would there be so much disagreement (so much that you would routinely want to veto each other's decisions if you had the option)? It seems plausible that if there is such a level of disagreement then maybe:

  1. One fund is making quite poor decisions AND/OR
  2. There is significant potential to use consensus decision-making tools as a large group to improve decision quality AND/OR
  3. There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.

Just curious and typing up my thoughts. Not expecting good answers to this.

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T20:06:23.262Z · EA · GW

[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I definitely see why among the grants we've made they are among the ones that seem 'closest' to the LTFF's scope; but I don't personally view them as clearly being more in scope for the LTFF than for the EAIF.]

Thank you Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could communicate better?

One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: "if the man on the Clapham omnibus read only this information and the description of these funds, where would he think these grants should sit?"
 

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-07T14:19:41.743Z · EA · GW

I don’t think I can help much with answering these questions.

I was thinking of countries like Australia, New Zealand and Taiwan. But whether or not the strategies adopted in these places were actually optimal, or best given the available information, or applicable to most countries that are not islands, or had a high chance of failure – I cannot say!

All I can say is that there is at least one plausible strategy that seems to have worked well in at least some countries and I personally don’t really remember it being discussed within the EA space a year ago.

Feel free to draw what conclusions or analysis you will from that.


 

the enormous costs

Just to add, I expect (but I might be wrong) that these countries have had lower welfare and economic costs than most other places.

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T12:53:49.014Z · EA · GW

Nothing I have seen makes me think the EAIF should change its decision criteria. It seems to be working very well and good stuff is getting funded. So don't change that to address a comparatively very minor issue like this – that would be throwing the baby out with the bathwater!!
 

--
If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like very good grants, and I am glad they are getting funded) in the longtermist fund. Based on your comments above you might have made the same call as well.

From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone look over the end decisions to see if any feel like they belong in a different fund, then quickly double-checking with the other fund's grantmakers that they have no strong objections, and then granting the money from the different pot. (You could even do that after the decision to grant has been communicated to applicants – no reason to hold things up; if the second fund objects, the money can still be given by the first fund.)

And then all those dogmatic donors to the EAIF who don't like longtermist stuff can go to bed happy, and all those dogmatic donors to the LTFF who don't like meta stuff can go to bed happy, and everyone feels like their money is going where they expect it to go, etc. Which does matter a little bit, because as a donor you really need to be able to trust that the money is going where it says on the tin and not to something else.

(But sure, if the admin costs here are actually really high or something, then it's not a big deal – it matters a little bit to some donors but is not the most important thing to get right.)

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T12:27:03.856Z · EA · GW

Basically it's not really possible to clearly define the scope in a mutually exclusive way.


Maybe we are talking past each other, but I was imagining something easy, like just defining the scopes as mutually exclusive. You could write: "we aim for the funds to be mutually exclusive. If multiple funds would fund the same project, we make the grant from whichever of the Funds seems most appropriate to the project in question."

Then, before you grant money, you look over the grants and see if any stuff passed by one fund looks to you like it is more for another fund. If so (unless the fund managers of the second fund veto the switch), you fund the project with money from the second fund.

Sure, it might be a very minor admin hassle, but it helps make sure donors' wishes are met and avoids the confusion of donors saying: hold on a minute, why am I funding this, I didn't expect that.

This is not a huge issue, so maybe not the top of your to-do list. And you are the expert on how much of an admin burden something like this is and whether it is worth it, but from the outside it seems very easy and the kind of action I would just naturally expect of a fund / charity.

[minor edits]

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T08:16:05.179Z · EA · GW

FWIW I had assumed the former was the case. Thank you for clarifying.

I had assumed the former as

  • it felt like the logical reading of the phrasing of the above
  • my read of the things funded in this round was that some of them don't appear to be b OR c (unless b and c are interpreted very broadly).
     
Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-07T08:00:22.913Z · EA · GW

Hi, yes good point – maybe I am being too generous.

FWIW I don't remember anyone in the EA / rationalist community calling for the strategy that post hoc seems to have worked best: a long lockdown to get to zero cases, followed by border closures etc. to keep cases at zero. (I remember a lot of people, for example, sharing this note, which gets much right but some stuff wrong: e.g. a short lockdown, and that it is comparatively easy to keep R below 1 with social distancing.)

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T22:10:14.003Z · EA · GW

It also makes it easier for applicants to know what fund to apply to (or apply to first).

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T01:10:07.899Z · EA · GW

the importance of the funds being mutually exclusive in terms of remit.


I lean (as you might guess) towards the funds being mutually exclusive. The basic principle is that, in general, the narrower the scope of each fund, the more control donors have over where their funds go.

If the Fund that seems most appropriate pays out for anything where there is overlap, then you would expect:

  • More satisfied donors. You would expect the number of grants that donors strongly approve of to go up.
  • More donations. As well as the above satisfaction point, if donors know more precisely how their money will be spent then they will have more confidence that giving to the fund makes sense compared to some other option.
  • Theoretically better donations? If you think donors' wishes are a good measure of expected impact, it can arguably improve the targeting of funds to ensure amounts moved are closer to donors' wishes (although maybe it makes the relationship between donors and specific fund managers weaker, as there might be crossover, with fund managers moving money across multiple of the Funds).
     

None of these are big improvements, so maybe not a priority, but the cost is also small. (I cannot speak for CEA, but as a charity trustee we regularly go out of our way to make sure we are meeting donors' wishes, regranting money hither and thither, and it has not been a big time cost.)

 

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T00:46:18.795Z · EA · GW

Retracted:

Upon reflection and reading the replies I think perhaps I was underestimating how broad this Fund's scope is (and perhaps I was too keen to find fault).

I do think there could be advantages for donors in narrowing the scope of this Fund / limiting overlap between Funds (see other comments), but I recognise there are costs to doing that.

All my positive comments remain, and it is great to see so much good stuff get funded.

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T23:36:53.064Z · EA · GW

Tl;dr: To date I was judging the funds by their cause areas rather than the fund managers' tastes, and this has left me a bit surprised. I think in future I will judge more based on the fund managers' tastes.
 

Thank you Ben – I agree with all of this

Maybe I was just confused by the fund scope.

The fund scope is broad and that is good. The webpage says the scope includes: "Raise funds or otherwise support other highly-effective projects" which basically means everything! And I do think it needs to be broad – for example to support EAs bringing EA ideas into new cause areas.

But maybe in my mind I had classed it as something like "EA meta" or as "everything that is EA aligned that would not be better covered by one of the other 3 funds" or similar. But maybe that was me reading too much into things and the scope is just "anything and everything that is EA aligned". 

It is not bad that it has a broader scope than I had realised, and maybe the fault is mine, but I guess my reaction to seeing that the scope is different to what I had realised is to take a step back and reconsider whether my giving to date is going where I expect.

To date I have been treating the EAIF as the easy option when I am not sure where to give, and have been judging the fund mostly by the cause areas it gives to.

I think taking a step back will likely involve spending an hour or two going through all of the things funded in recent rounds and thinking about how much I agree with each one, then deciding whether I think the EAIF is the best place for me to give, or whether I can do better by giving to one of the existing EA meta orgs that take donations. (Probably I should have been doing this already, so maybe this is a good nudge.)

Does that make sense / answer your query?

– – 

If the EAIF had a slightly more well-defined, narrower scope, that could make givers slightly more confident about where their funds will go, but it would have a cost in terms of admin time and flexibility for the Funds. So there is a trade-off.

My gut feeling is that in the long run the trade-off is worth it, but maybe feedback from other donors would say otherwise.

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T23:21:58.560Z · EA · GW

Maybe I am just confused by the fund scope.

The webpage says: "Raise funds or otherwise support other highly-effective projects", which basically means everything. Which is great – I think it needs to be broad, for example to support EAs bringing EA ideas into new cause areas.

So maybe in my mind I had classed it as something like "everything that is EA aligned that would not be covered by one of the other 3 funds". But maybe that was me reading too much into it and the scope is just "anything and everything that is EA aligned".  

It is not bad that it has a broader scope than I had realised, and maybe the fault is mine, but I guess my reaction to seeing that the scope is different to what I had realised is to take a step back and reconsider whether my giving is going where I expect.

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-04T22:56:05.154Z · EA · GW

One feature of the things that William and I have picked up on is that early on (say in Feb 2020 or earlier) the advice coming from very respectable organisations was relatively poor.

I don't think this should be seen as evidence that these organisations did badly (maybe a bit that they were over-confident), but rather that this was a very difficult situation in which to do things well.

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T22:08:50.478Z · EA · GW

Thank you Michelle.

Really useful to hear. I agree with all of this.

It seems, from what you and Jonas are saying, that the fund scopes currently overlap, so there might be some grants that could be covered by multiple funds; and even if such grants are arguably more appropriate to one fund than another, they tend to get funded by whichever fund gets to them first, as currently the admin burden of shifting a grant to another fund is large.

That all seems pretty reasonable.

I guess my suggestion would be that I would be excited to see these kinks minimised over time, with funding coming from whichever pool seems most appropriate – that is, for overlap to be seen as a bug to be ironed out, not a feature.

FWIW I think you and all the other fund managers made really really good decisions. I am not just saying that to counteract saying something negative but I am genuinely very excited by how much great stuff is getting funded by the EAIF. Well done. 

(EDIT: PS. My reply to Ben below might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E

Basically, a more tightly defined fund scope could be nice and would make things easier for donors but harder for the Funds, so there is a trade-off.)



 

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T21:45:41.374Z · EA · GW

You are correct – sorry I missed that.

I agree with Michael above that a) is a legit administrative hassle, but it seems like the kind of thing I would be excited to see resolved when you have capacity to think about it. Maybe each fund could have some discretionary money from the other fund.
 
An explanation per grant would be super too, as and where such a thing is possible!

(EDIT: PS. My reply to Ben above might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E)

Comment by weeatquince on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-03T08:48:31.621Z · EA · GW

Thank you for the write-up – super helpful. Amazing to see so much good stuff get funding.

Some feedback and personal reflections as a donor to the fund:

  • The inclusion of things on this list that might be better suited to other funds (e.g. the LTFF), without an explanation of why they are being funded from the Infrastructure Fund, makes me slightly less likely in future to give directly to the Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).

    It basically creates some uncertainty and worry in my mind about whether the funds will give my donation to the areas I expect them to give to.

    (I recognise it is possible that there are equal amounts going across established EA cause areas or something like that from this fund, but if that is the case it is not very clear.)

 

This should not take away from the fact that I think the fund has genuinely done a great job here. For example, saying that I would lean towards directly following the fund's recommendations is recognition that I trust the fund and the work you have done to evaluate these projects – so well done!

Also, I do support innovative longtermist projects (I especially love CLTR – mega-super to see them funded!!), it is just not what I expect to see this fund doing, so it leaves me a bit confused / tempted to give elsewhere.

 

Comment by weeatquince on How well did EA-funded biorisk organisations do on Covid? · 2021-06-03T07:36:00.558Z · EA · GW

A few opinions:

I think Johns Hopkins' advice has generally been well respected, but that is just rumour on the grapevine – I cannot say exactly why I think that.

Nuclear Threat Initiative's Global Health Security Index said "international preparedness for epidemics and pandemics remains very weak", which seems correct. But the index also put the UK and the USA as the most prepared, which seems incorrect (or at least gives a reason not to trust very shallow global indices). E.g. yes, the UK had a plan – tick box – but it turns out it was not very good. Yes, the UK did emergency exercises – tick box – but it turns out it did not update plans based on the exercises. Etc.

Not on your list but in this interview from Feb 2020, 80,000 Hours and the Future of Humanity Institute get a lot correct (eg need for social distancing) but they do both seem to disagree with the case for most travel bans, which seems incorrect in hindsight. See the intro where this is discussed.

Also not a bio org, but the EA-funded Our World in Data has done a good job on COVID data gathering and presentation.

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-04-26T07:36:21.656Z · EA · GW

Not sure this "many weak arguments" way of looking at it is quite correct either – I had a quick look at the arguments given against longtermism and there are not that many of them. Maybe a better point is that there are many avenues and approaches that remain unexplored.

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-04-21T09:14:42.177Z · EA · GW

tl;dr – The case for giving to GiveWell top charities is based on much more than just expected value calculations.

The case for longtermism (CL) is not based on much more than expected value calculations; in fact many non-expected-value arguments currently seem to point the other way. This has led to a situation where there are many weak arguments against longtermism and one very strong argument for longtermism. This is hard to evaluate.

We (longtermists) should recognise that we are new and there is still work to be done to build a good theoretical base for longtermism.

 

Hi Max,

Good question. Thank you for asking.

– – 

The more I have read by GiveWell (and to a lesser degree by groups such as Charity Entrepreneurship and Open Philanthropy) the more it is apparent to me that the case for giving to the global poor is not based solely on expected value but is based on a very broad variety of arguments. 

For example I recommend reading:

  1. https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking
  2. https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
  3. https://www.givewell.org/modeling-extreme-model-uncertainty
  4. https://forum.effectivealtruism.org/posts/h6uXkwFzqqr2JdZ4e/joey-savoie-tools-for-decision-making

The rough pattern of these posts is that taking a broad variety of different decision-making tools and approaches, and seeing where they all converge and point to, is better than just looking at expected value (or using any other single tool). Expected value calculations are not the only way to make decisions, and the authors would not be convinced by the arguments for giving to the global poor if those arguments were based solely on expected value calculations and not on historical evidence, good feedback loops, expert views, strategic considerations, etc.

For example in [1.] Holden describes how he was initially sceptical that:
"donations can do more good when targeting the developing-world poor rather than the developed-world poor "
but he goes on to say that:
"many (including myself) take these arguments more seriously on learning things like “people I respect mostly agree with this conclusion”; “developing-world charities’ activities are generally more robustly evidence-supported, in addition to cheaper”; “thorough, skeptical versions of ‘cost per life saved’ estimates are worse than the figures touted by charities, but still impressive”; “differences in wealth are so pronounced that “hunger” is defined completely differently for the U.S. vs. developing countries“; “aid agencies were behind undisputed major achievements such as the eradication of smallpox”; etc."

– –

Now I am actually somewhat sceptical of some of this writing. I think much of it is a pushback against longtermism. Remember the global development EAs have had to weather the transition from "give to global health, it has the highest expected value" to "give to global health, it doesn't have the highest expected value (longtermism has that) but is good for many other reasons". So it is not surprising that they have gone on to express that there are many other reasons to care about global health that are not based in expected value calculations.

– –  

But that possible "status quo bias" does not mean they are wrong. It is still the case that GiveWell have made a host of arguments for global health beyond expected value, and that the longtermism community has not done so. The longtermism community has not produced historical evidence or highlighted successful feedback loops or demonstrated that their reasoning is robust to a broad variety of possible worldviews or built strong expert consensus. (Although the case has been made that preventing extreme risks is robust to very many possible futures, so that at least is a good longtermist argument that is not based on expected value.)

In fact, to some degree the opposite is the case. People who argue against longtermism have pointed to cases where long-term-type planning historically led to totalitarianism, or to the common-sense weirdness of longtermist conclusions, etc. My own work on risk management suggests that, especially when planning for disasters, it is good not to put too much weight on expected value but to assume that something unexpected will happen.

The fact is that the longtermist community has much weirder conclusions than the global health community, yet has put much less effort into justifying those conclusions.

– – 

To me it looks like all this has led to a situation where there are many weak arguments against longtermism (CL) and one very strong argument for longtermism (AL->CL). This is problematic, as it is very hard to compare one strong argument against many weak arguments, and which side you fall on will depend largely on your empirical views and how you weigh up evidence. This ultimately leads to unconstructive debate.

– – 

I think the longtermist view is likely roughly correct. But I think that the case for longtermism has not been made rigorously, or even particularly well (certainly it does not stand up well to Holden's "cluster thinking" ideals). I don't see this as a criticism of the longtermist community, as the community is super new and the paper arguing the case even just from the point of view of expected value is still in draft! I just think that the idea that the community has finished making the case for longtermism is a misconception worth adding to the list – we should recognise our newness, and that there is still work to be done, and not pretend we have all the answers. The EA global health community has built this broad theoretical base beyond expected value, and so can we, or we can at least try.

– – 

I would be curious to know the extent to which you agree with this?

Also, I think my way of mapping the situation is a bit more nuanced here than in my previous comment, so I want to acknowledge a subtle change of views between my earlier comment and this one, and ask that if you respond you respond to the views as set out here rather than above. And of course thank you for your insightful comment that led to my views evolving – thank you Max!


– –
– – 

(PS. On the other topic you mention. [Edited: I am not yet sure of the extent to which I think] the 'beware suspicious convergence' counter-argument [applies] in this context. Is it suspicious that if you make a plan for 1000 years it looks very similar to if you make a plan for 10000 years? Is it suspicious that if I plan for 100000 years or 100 years what I do in the next 10 years looks the same? Is it suspicious that if I want to go from my house in the UK to Oslo the initial steps are very similar to if I want to go from my house to Australia – i.e. book ticket, get bus to train station, get train to airport? Etc? [Would need to give this more thought but it is not obvious])



 

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-03-14T20:17:16.241Z · EA · GW

Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.

I agree with your overall point that the case isn’t as airtight as it could be

I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe soon. Glad you agree.

I would also expect (although I can't say for sure) that if you went and hung out with GPI academics and asked how certain they are about x, y and z about longtermism, you would perhaps find less certainty than comes across from the outside or than you might find on this forum – and it is useful for people to realise that.

Hence I thought it might be one for your list.

 

– – 

The specific points 1. and 2. were mostly to serve as examples for the above (the "etc" was entirely in that vein, just to imply that there may be things that a truly rigorous attempt to prove CL would throw up).

Main point made, and even roughly agreed on :-), so I am happy to opine a few thoughts on the truth of 1. and 2. anyway:

 

– – 

1. The actions that are best in the short run are the same as the ones that are best in the long run

Please assume that by short-term I mean within 100 years, not within 10 years.

A few reasons you might think this is true:

  • Convergence: See your section on "Longtermists won't reduce suffering today". Consider some of the examples in the paper: speeding up progress, preventing climate change, etc. are quite possibly the best things you could do to maximise benefit over the next 100 years. AllFed justify working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence, it goes both ways: why are many of the examples in the paper so suspiciously close to what is short-run best?)
  • Try it: Try making the best plan you can, accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan but only take into account the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100000 years? Does that plan look different? What about 1000 years or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess would be that it would start to differ significantly after about 50-100 years. (Although it might well be the case that this number is higher for philanthropists than for policy makers.)
  • Neglectedness: Note that the final two-thirds of the next century (after 33 years) basically does not feature in almost any planning today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).

On:

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects ... the value of these actions will in fact be coming from the long-run effects

I think I agree with this (at least intuitively agree – I have not given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.

 

– – 

2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.

I agree that AL leads to 'deontic strong longtermism'.

I don't think the expected value approach (which is the dominant approach used in their paper) or the other approaches they discuss fully engage with how to make complex decisions about the far future. I don't think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).

I would need to know more about your proposed alternative to comment.

Unfortunately, I am running out of time and weekend to go into this in much depth, so I hope you don't mind if, instead of a lengthy answer here, I just link you to some reading.

I have recently been reading the following, which you might find an interesting introduction to how one might go about thinking about these topics, and which is fairly close to my views:

https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

https://www.givewell.org/modeling-extreme-model-uncertainty

 

– –

Always happy to hear your views. Have a great week

Comment by weeatquince on Possible misconceptions about (strong) longtermism · 2021-03-14T16:27:21.061Z · EA · GW

Thank you for this Jack.

Floating an additional idea here, in the form of another misconception that I sometimes see. Very interested in your feedback:

 

Possible misconception: Someone has made a thorough case for "strong longtermism"

Possible misconception: “Greaves and MacAskill at GPI have set out a detailed argument for strong longtermism.”

My response: “Greaves and MacAskill argue for 'axiological strong longtermism' but this is not sufficient to make the case that what we ought to do is mainly determined by focusing on far future effects”

Axiological strong longtermism (AL) is the idea that: “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

The colloquial use of strong longtermism on this forum (CL) is something like  “In most of the ethical choices we face today we can focus primarily on the far-future effects of our actions".

Now there are a few reasons why this might not follow (why CL might not follow from AL):

  1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
  2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
  3. Etc

Whether or not you agree with these reasons, it should at least be acknowledged that the Case for Strong Longtermism paper focuses on making a case for AL – it does not actually try to make a case for CL. This does not mean there is no way to make a case for CL, but I have not seen anyone try to, and I expect it would be very difficult to do, especially if aiming for philosophical-level rigour.

 

– – 

This misconception can be used in discussions for or against longtermism. If you happen to be a super strong believer that we should focus mainly on the far future it would whisper caution and if you think that Greaves and MacAskill's arguments are poor it would suggest being careful not to overstate their claims.



(PS. Both 1 and 2 seem likely to be true to me)
 

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-08T20:49:30.543Z · EA · GW

Yes sorry my misunderstanding. You are correct that this would still be non-ideal. 

I don't think in most cases it would be a big problem, but yes, it would be a problem.

 

Also, another very clear problem with all of this is that humans do not naturally plan in their own long-term self-interest. So, for example, enfranchising the young would not necessarily lead to less short-termism just because they have longer to live. The policies would have to be more nuanced and complex than that.

 

Either way, I think the lesson I am drawing is to lean more towards strategies that focus on policies that empower creating a good world for the next generation rather than for all future generations, although of course both matter.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T20:28:52.203Z · EA · GW

Makes sense. Thanks Jack.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T10:30:13.405Z · EA · GW

Thank you Jack – very useful. Thank you for the reading suggestion too. Some more thoughts from me:

"Discounting for the catastrophe rate" should also include discounting for sudden positive windfalls or other successes that make current actions less useful. Eg if we find out that the universe is populated by benevolent intelligent non-human life anyway, or if a future unexpected invention suddenly solves societal problems, etc.

There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value – capturing internal project risks or windfalls, as opposed to catastrophic risk or windfalls.
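To illustrate how these two discounts might compose – a minimal sketch with made-up rates (the 1% general rate and 2% internal project rate below are purely illustrative assumptions, not figures from any guidance):

```python
# Illustrative sketch (made-up rates): the internal project discount and the
# general discount compound multiplicatively, year on year.

def discounted_value(value: float, years: int,
                     general_rate: float, project_rate: float) -> float:
    """Value remaining after applying both the general (catastrophe/windfall)
    and project-specific annual discounts."""
    survival_per_year = (1 - general_rate) * (1 - project_rate)
    return value * survival_per_year ** years

# E.g. a benefit worth 100 arriving in 20 years, with a 1% general rate and a
# 2% chance per year that the project itself comes to nothing:
print(discounted_value(100, 20, general_rate=0.01, project_rate=0.02))  # ~54.6
```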

I am not sure I get the point about "discount the longterm future as if we were in the safest world among those we find plausible".

I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T01:46:05.137Z · EA · GW

Yes that is true and a good point. I think I would expect a very small non-zero discount rate to be reasonable, although still not sure what relevance this has to longtermism arguments. 

Comment by weeatquince on Introduction to Longtermism · 2021-02-08T01:33:07.006Z · EA · GW

I guess some of the "AI will be transformative, therefore it deserves attention" arguments are among the oldest and most generally accepted within this space.

For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, what x-risks to focus on, etc, is all still new and somewhat uncertain.

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-07T12:28:51.871Z · EA · GW

Hi, the point isn't about what is "predictable", it is about what is "plannable". Predictions are only useful insofar as they let us decide how to act. What we want is to be able to robustly and positively affect the world in the future.

So the adapted version of your version of my argument would be:

  • We can sometimes take actions that robustly and positively affect the world on a timescale of 30+ years, BUT as the future is so uncertain, most such long-term plans involve being flexible and adaptable to changes, and in practice they look very similar to what you would do if you planned for <30-year effects (with the additional caveat that in 30 years you want to be in as good a position as possible to keep making long-term positive changes going forward).
    • E.g. The global technologies and trends that could lead to brutal totalitarianism over the very long-term future are so uncertain that addressing the trends and technologies that might lead to totalitarianism in the next 30 years, whilst also keeping an ongoing watchful eye on emerging trends and technologies and adapting to concerning changes and/or to opportunities to strengthen democracy, is likely the best plan you can make.
  • Therefore there's a lot of overlap between the policies that (as best we can tell) have the best effects on the world in 30+ years (or 60+ years) and those that have the best effects on the world in 0-30 years (and also leave the world in 30 years' time ready for the next 30 years).
  • Therefore we can achieve most longtermist policy goals by getting policies that are best for the world over the next 0-30 years (and that also leave the world in 30 years' time ready for the next 30 years).

 

I think this mostly (although not quite 100%) addresses the two concerns that you raise.


NOTES:

I would note that 30 years is not some magic number. Much of policy, including some long-term policy, is time-independent. Where plans are made, they might be over the next 1, 3, 5, 10, 20, 25, 30 or 50 years, as appropriate given the topic at hand. Over each length of time, the aim should not be solely to maximise benefit over the planned time period but also to leave the world in a good end state so that it can continue to maximise benefit going forward (e.g. your 1-year budgeting shouldn't say let's spend all the money this year).

There are plans that go beyond 30 years, but according to Roman Krznaric's book The Good Ancestor, plans for more than 30 years ahead are very rare. My own experience suggests 25 years is the maximum in most (UK) government work, and even at that length of time it is often poorly done. Hence I tend to settle on 30 years as a reasonable maximum. There are of course some plans that go beyond 30 years. They tend to be on issues where long-term thinking is both necessary and simple (e.g. tree planting), or they adopt adaptive planning techniques to allow for various changes in circumstances (e.g. the Thames Estuary 2100 flood planning).

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-02-07T12:02:08.044Z · EA · GW

Any Future Generations institution should be explicitly mandated to consider long-term prosperity, in addition to existential risks arising from technological development and environmental sustainability

Yes I fully agree with this. 

 [...] advocates of future generations can lastingly diminish the opposition of business interests—or turn it into support—by designing pro-future institutions so that they visibly contribute to areas where future generations and far-sighted businesses have common interests, such as long-term trends in infrastructure, research and development, education, and political/economic stability.

I also agree with this – although I would take my agreement with a pinch of salt – I don’t feel I have specific expertise on how farsighted businesses can be in order to take a strong view on this. 

Comment by weeatquince on Introduction to Longtermism · 2021-02-07T11:51:12.180Z · EA · GW

To add a more opinionated, less factual point: as someone who researches and advises policymakers on how to think about and make long-term decisions, I tend to be somewhat disappointed by the extent to which the longtermist community lacks discussion and understanding of how long-term decision making is done in practice. I guess, if worded strongly, this could be framed as an additional community-level objection to longtermism, along the lines of:

Objection: The longtermist idea makes quite strong, somewhat counterintuitive claims about how to do good, but the longtermist community has not yet demonstrated appropriately strong intellectual rigour (other than in the field of philosophy) about these claims and what they mean in practice. Individuals should therefore be sceptical of the claims of longtermists about how to do good.

If worded more politely, the objection would basically be that the ideas of longtermism are very new and somewhat untested and may still change significantly, so we should be super cautious about adopting the conclusions of longtermists for a while longer.

Comment by weeatquince on Introduction to Longtermism · 2021-02-07T11:38:35.711Z · EA · GW

Great introduction. Strongly upvoted.  It is really good to see stuff written up clearly. Well done!!!

 

To add some points on discounting. This is not to disagree with you but to add some nuance on a topic it is useful for people to understand. Governments (or at least the UK government) apply discount rates for three reasons:

  1. Firstly, pure-time discounting of roughly 0.5%, as people want things now rather than in the future. This is what you seem to be talking about here when you talk about discounting. Interestingly this is not set through electoral politics (the discount rate is not a big political issue); rather, because the literature on the topic has numbers ranging from 0% to 1%, the government (which listens to experts) goes for 0.5%.
  2. Secondly, catastrophic risk discounting of 1%, to account for the fact that a major catastrophic risk could make a project's value worthless, eg earthquakes could destroy the new hospital, social unrest could ruin a social programme's success, etc.
  3. Thirdly, wealth discounting of 2%, to account for the fact that the future will be richer, so transferring wealth from now to the future has a cost. This does not apply to harms such as loss of life.

Ultimately it is only the first of these that longtermists and philosophers tend to disagree with. The others may still be valid to longtermists.

For example, if you were to estimate there is a background, basically unavoidable, existential risk rate of 0.1% a year (as the Stern Review's discount rate for the UK government suggests) then basically all the value of your actions (over 99%) is eroded after 5000 years, and arguably it is not worth thinking beyond that timeframe. There are good counter-considerations to this – I am not trying to start a debate here, just explaining how folk outside the longtermist community apply discounting and how it might reasonably apply to longtermists' decisions.
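To make the compounding concrete, here is a minimal sketch (purely my own illustration – the rates are just the figures quoted above, used as assumptions):

```python
def value_remaining(annual_rate: float, years: int) -> float:
    """Fraction of value surviving after `years` of constant annual discounting."""
    return (1 - annual_rate) ** years

# Stern-Review-style background catastrophe rate of 0.1% per year:
print(value_remaining(0.001, 5000))  # ~0.0067, i.e. over 99% of value eroded

# Summing the three UK components above (0.5% + 1% + 2% = 3.5%):
print(value_remaining(0.035, 100))   # ~0.028 of value left after just one century
```

The point is simply that even very small annual rates compound to dominate over millennia.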

Comment by weeatquince on Money Can't (Easily) Buy Talent · 2021-01-26T17:26:32.269Z · EA · GW

Yes I agree with you.

That said, the original post appears in a few places to be specifically talking about talent at EA organisations, so the example felt apt.

Comment by weeatquince on What Helped the Voiceless? Historical Case Studies · 2021-01-23T08:36:26.562Z · EA · GW

This is quite possibly the best thing I have ever read on this forum. Thank you so much for writing this. I strongly upvoted.

I have been working full time as an advocate for future generations within politics/policy. As one of the very few people actually working in this space I think I can offer some insight and improvements to your "implications for political strategy for future generations".

One larger point:

  • You seem to suggest a few times that political inclusion of future generations is difficult because they do not yet exist, cannot engage in politics, etc, etc. It feels at times like you see a significant gulf between representation of future generations that do not exist and inclusion of the current generation's future needs. I think this distinction is much smaller than you imply.
    • Democratic political institutions currently work on roughly a 2-year timeline, at a cost to the future experiences of current generations.
    • 30+ year planning is impractically difficult in most cases, so that puts a limit on how far forward we could expect our institutions to be looking.
    • So, if political systems were making the best decisions over a 30+ year time horizon rather than a 2-year timeline (ie only caring about current generations but caring about their futures), then I think this would cover roughly 95%+ of the policies that a strong longtermist would want to see in the world that are not already happening.
    • As such I think there is huge scope for getting almost all the policies that longtermists might want simply by getting the best long-run policies for current generations.

To add nuance to a few other points:

  • You suggest "When opportunities for change are limited, bide your time". I somewhat agree but I think it is important to flag that bidding time does not look like doing nothing.  I think it is useful for advocates to be working on other aligned goals during this time in order to continue to build traction and connections in the politics space.
  • You suggest "political trades ... advocating for policies, such as committees or funds for future generations, that will not be implemented for a decade or more". I think this is a valid point to raise but my instinct is that it  would be so practically difficult to pull of getting those future commitments to be binding or to happen at all that I am not sure it is that useful a tactic. Also decade is likely too long a period of time to be making these trades over,  a few years (2-5) years might be the limit here.
  • You suggest "advocates of the representation of future generations should concentrate their resources on a few nations". Once again I think this is a very valid point but in practice it works a bit differently form how you describe it. I would break down representation of future generations into a few deferent topics, such as: democratic representation for the next generation, long-run financial prudence, sustainability and preservation, preventing extreme risks, global security, preventing s-risk type scenarios, etc. Each of these topics could be championed on the world stage by a different nation.

 

Overall this post pushes me more towards thinking that future generations advocates should focus more on the long-term benefits to current generations.

The post has also inspired me to add to the ideas list a plan for a webpage listing all UK MPs who have expressed public support for future generations. This would make it easier for future advocates to hold them to account for their commitment to the future further down the line.

Comment by weeatquince on Money Can't (Easily) Buy Talent · 2021-01-23T07:31:17.005Z · EA · GW

Interesting take on money and talent, thank you for writing it up. I thought I would share some of my experience that might suggest an opposing view.

I do direct work and I don't think I could do earning-to-give very well. Also, when I look at my EA friends, I see both someone who has gone from direct work to earning-to-give, disliked it and went back to direct work, and someone who has gone from earning-to-give to direct work, disliked that and gone back to earning-to-give. Ultimately all this suggests to me that personal fit is likely to be by far the most important factor here and that these arguments are mostly only useful to the folk who could do really well at either path.

I also think there are a lot of people in the EA community who have really struggled to get into doing direct work (example), and I have struggled to find funding at times and relied on earning-to-give folk to fund me. I wonder if there is maybe a grass-is-greener-on-the-other-side effect going on.
 

Comment by weeatquince on Thoughts on whether we're living at the most influential time in history · 2021-01-12T11:01:29.499Z · EA · GW

My thanks to Will and Buck for such an interesting, thoughtful debate. However, to me there seems to be one key difference that is worth drawing out:

 

Will's updated article (here, p14) asks the action-relevant question (rephrased by me) of:

Are we [today] among the very most influential people, out of the very large number of people who will live that we could reasonably pass resources to [over the coming thousand years]

 

Buck's post (this post) seems to focus on the not-action-relevant question (my phrasing) of:

Are we [in this century] at the very most influential time, out of the very large span of all time into the distant future [over the coming trillion years]

 

It seems plausible to me that Buck's criticisms are valid when considering Will's older work; however, I do not think the criticisms that Buck raises here about Will's original HoH still apply to the action-relevant restricted HoH in Will's new paper. (And I see little value in debating the non-action-relevant HoH hypothesis.)

Comment by weeatquince on Web of virtue thesis [research note] · 2021-01-03T08:09:16.138Z · EA · GW

Hi Owen, I think this paper (and the other stuff you have posted recently) is very good. It is good to see breakdowns of longtermism that are more practicably applicable to life and to solving some of the problems in the world today.

 

I would like to draw your attention to the COM-B (Capability Opportunity Motivation – Behaviour) model of behaviour change in case you are not already aware of it. As I understand it, the model is fairly standard practice in governance reform in international development. The model is as follows:

  • The idea is that government decisions are dependent on the Behaviour (B) of key actors. This matches very closely to the idea that critical junctures are dependent on the Virtues embodied by key actors. 
  • An outside actor can ensure Behaviour goes well (behaviour changed for the better) by addressing key actors' Capabilities (C), Opportunities (O) and Motivations (M). This matches very closely to your three points that the key actors must be 3) competent, must 1) know about the problem and must 2) care enough to solve it.
  • The model then offers a range of tools and ways to break COM down into smaller challenges and influence COM to achieve B, and can be worked into a Theory of Change (see the toy sketch below).
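To illustrate the framing (this is purely my own toy sketch with made-up scores, not anything from the COM-B literature), you could imagine scoring each COM factor for a key actor and intervening on the weakest:

```python
from dataclasses import dataclass

@dataclass
class ActorAssessment:
    capability: float   # 0-1: does the actor have the skills/knowledge to act well?
    opportunity: float  # 0-1: does the actor's environment give them space to act?
    motivation: float   # 0-1: does the actor care enough to act?

    def binding_constraint(self) -> str:
        """The weakest COM factor – where an outside actor might intervene first."""
        scores = {
            "capability": self.capability,
            "opportunity": self.opportunity,
            "motivation": self.motivation,
        }
        return min(scores, key=scores.get)

minister = ActorAssessment(capability=0.7, opportunity=0.3, motivation=0.6)
print(minister.binding_constraint())  # -> "opportunity"
```

Real COM-B work breaks each factor down further (eg physical vs psychological capability) rather than using single scores, but the find-the-binding-constraint logic is the useful intuition.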

 

I think it is useful for you to be aware of this (if you are not already) as:

  • It shows you are on the right track. It sounds from your post like you are uncertain about the Key Virtue assumption. If you just thought up this assumption, it could be good to know that it matches very closely an existing approach to changing the actions of key actors in positions of governance (or elsewhere).
  • It provides a more standard language to use if you want to move from speaking to philosophers to speaking to actors in the institutional reform space.
  • COM-B may be a better model. By virtue of being tried and tested COM-B is likely a better model with empirical evidence and academic papers behind it. Of course it is not perfect and there are criticisms of it (similar to how there are criticisms of QALYs in global health but they are still useful).
  • It provides a whole range of useful tools for thinking through the next steps of influencing key behaviours / key virtues. As mentioned there are various ways of breaking down the problems, tools to use to drive change, and even criticisms that highlight what the model misses.

 

I hope this is useful for resolving some of the uncertainty you expressed about the Key Virtue assumption and for refining next steps when you come to work on that.

I would caveat that I find COM-B useful to think through but I am not a practitioner (I'm like an EA thinking in QALYs but not having to actually work with them).

I think there is a meta point here: I keep reading papers from FHI/GPI and getting the impression (rightly or wrongly) that stuff that is basic from a policy perspective is being derived from scratch, often worse than the original. I would be keen to see FHI/GPI engage more with existing best practice at driving change.

Comment by weeatquince on Improving Institutional Decision-Making: a new working group · 2021-01-02T08:00:13.335Z · EA · GW

Thank you Ian. Grateful for the thoughtful reply. Good to hear the background on the name, and I agree it makes sense to think of scope in a more fuzzy way (eg in scope, on the edge of scope like CFAR, useful meta projects like career advice, etc).

Just to clarify, my point here was not about "whether to emphasize institutions or decision-making more" (sorry if my initial comment was confusing) but kind of the opposite: that it would make sense to ensure both topics are roughly equally emphasised (and I'm not sure your post does that).

Depending on which you emphasise and which questions you ask, you will likely get different answers, different interventions, etc. At an early scoping stage, when you don't want to rule out much, maintaining a broad scope for what to look into is important.

Also, to flag, I don't find the "everything is decision making" framing as intuitive or useful as you do.

Totally off topic from my original point, but it is interesting to note that my experience is the polar opposite of yours. Working in government there was a fair amount of thought, advice and tools for effective decision making, but the institutional incentives were not there. Analysts would do vast amounts of work to assess decisions and options simply to have the final decision made by a leader just looking to enrich themselves / a politician's friend / a party donor / etc.

I'd still focus on finding answers from both angles for now, but, given my experience and given that governments are likely to be among the most important institutions, if I had to call it one way or the other, I'd expect the focus on improving decision making to be less fruitful than the focus on improving institutions.

Keep up the great work!

Comment by weeatquince on Effective charities for improving institutional decision making and improving global coordination · 2021-01-02T07:30:11.928Z · EA · GW

I work for the APPG for Future Generations (https://www.appgfuturegenerations.com) in this space. Our impact report is here: https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1. If you wish to donate please get in touch.

The APPG is affiliated with the Centre for the Study of Existential Risk (https://www.cser.ac.uk/), which I believe is the best research organisation with content related to longtermism and improving institutional decision making.

More generally, I think Transparency International (https://www.transparency.org/en/) and Global Witness (https://www.globalwitness.org/en/) are the dominant charities in the space of reducing government corruption, a key feature of improving institutional decision making. I have not seen any evaluations of them but I'd expect they'd do well.

See also some of the institutions listed in this article (https://forum.effectivealtruism.org/posts/94QtuT4ss3RzrfH8A/improving-institutional-decision-making-a-new-working-group) under the section on "IIDM within and outside of EA"

Comment by weeatquince on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-12-31T15:42:46.200Z · EA · GW

Upon reflection I think that in my initial response to this post I was applying a UK lens to a US author.

I think the culture war dynamics (such as cancel culture) in the USA are not conducive to constructive political dialogue (agree with Larks on that). Luckily this has not seeped through to UK politics very much, at least so far, but it is something I worry about. I see articles in the UK (on the right) making out that cancel culture (etc) is a problem, often with examples from the States. I expect (although this is not a topic I think much about) that articles of that type are unhelpfully fanning the culture war flames more than quelling them. As such I had a knee-jerk reaction to this post and put it in the same bucket as such articles. I think I was applying a UK lens to a US author, without checking whether it applied.

That said, I still think that Larks is (similarly) unfairly applying a US lens and US examples to a German situation without making a good case that what they say applies in the German cultural context. As such I think they may well be being too harsh on EA Munich.


Comment by weeatquince on Health and happiness research topics—Part 1: Background on QALYs and DALYs · 2020-12-31T15:20:28.517Z · EA · GW

Hi Derek, just a note to say that my experience of reading the article was that I also found the welfare and wellbeing definitions confusing. Also, doesn't "welfare economics" look to maximise "wellbeing" by your definition – or maybe I am still confused? Might be worth clearly defining these at the start of future work.

Comment by weeatquince on Health and happiness research topics—Part 1: Background on QALYs and DALYs · 2020-12-31T15:14:14.106Z · EA · GW

Hi Derek.

Fantastic work – very excited to see Rethink Priorities branch out into more meta questions on how to measure what value is and so on. Excited to read the next few posts when I have time.

A few thoughts:

 

1. Have you done much stakeholder engagement? One thing that was not here (although maybe I have to wait for post 9 on this) that I would love to see is some idea of how this work feeds through to change. Have you met with staff at NICE or Gates or DCP or other policy professionals and talked to them about why they are not improving these metrics and how excited they would be to have someone work on improving them? (This feels like the kind of step that should be taken before the project goes too far.)

 

2. Problem 4 – neglect of spillover effects – probably cannot be solved by changing the metric. It feels more like an issue with the way the metric is used. You sort of cover this when you say "The appropriate response is unclear." I expect making the metric include all spillover effects is the wrong approach, as the spillover effects are often quite uncertain and quantifying these highly uncertain effects within the main metric seems problematic. That said, I am not sure about this so am just chipping in my two cents.

(For example, when I worked at the Treasury we refused to consider spillover effects at all, I think because there was a view that any policy could be justified by someone claiming it had spillover effects. Then again, the National Audit Office did say our own spending measures were not leading to long-term value for money, so maybe that was the wrong approach.)

 

3. Who would you recommend funding if I want to see more work like this, or a project to improve and change these metrics? You personally? Rethink Priorities? Happier Lives Institute? Someone else? Nobody at present?

 

4. How is the E-QALY project going? I clicked the link for the E-QALY project (https://scharr.dept.shef.ac.uk/e-qaly/about-the-project/) and it says the project finishes in 2019. Any idea what happened to it?

 

Best of luck with the rest of the project.