Posts

Convergence thesis between longtermism and neartermism 2021-12-30T16:03:43.712Z
APPG for Future Generations Impact Report 2020 - 2021 2021-10-26T14:40:46.182Z
A practical guide to long-term planning – and suggestions for longtermism 2021-10-10T15:37:17.458Z
Which EA organisations' research has been useful to you? 2020-11-11T09:39:13.329Z
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs 2020-09-05T12:51:01.844Z
The case of the missing cause prioritisation research 2020-08-16T00:21:02.126Z
APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament 2020-08-12T14:24:04.861Z
Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z
UK policy and politics careers 2019-09-28T16:18:43.776Z
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z

Comments

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-15T01:03:18.773Z · EA · GW

Is this question asked with the intention of maybe doing such surveys?

I do plan to do surveys of the public's view of what a good future is and would really appreciate support on that. I hope to be able to fund any such work, but this is yet to be confirmed. Would you be interested in collaborating?

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T18:38:18.180Z · EA · GW

I am doing work on this in the UK. Will PM you.

Edit: I do plan to do some of this. So if anyone else is interested in helping with such work in the UK do let me know.

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T14:52:56.244Z · EA · GW

Making progress on ethics.

Sometimes I think philosophers could do better ethics work if they included surveying and working with the public as part of their toolkit. What do people actually think and how do they make trade-offs?

One specific example: I had a recent chat with a bunch of philosophers who said the standard view in philosophy is that it is impossible to have (or to technically formalise) a consequentialist view of justice-based ethics. This confused me because in practice people do this all the time – you can find a bunch of justice-based EAs and get them to make ethical trade-offs, and it becomes pretty consequentialist pretty quickly (see here).

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T14:47:10.657Z · EA · GW

Any human-focused moral weights work!!

How much do members of the public care about:

  • Subjective wellbeing
  • Increases in income
  • Increases in happiness
  • Reductions in pain
  • Mental health 
  • Education
  • Being alive

Public surveys would be crucial for developing better QALYs / DALYs / WELBYs / etc (see these posts).

Public surveys are also needed to make trade-offs between health and things not captured by QALYs / DALYs (such as increased income or justice), to trade off between years of life and quality of life (especially for some population ethics views), and so on.

Surveys in developing countries would be particularly useful.
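
To illustrate the kind of calculation such surveys would feed into, here is a minimal sketch with entirely made-up numbers (the trade-off ratio and programme figures are hypothetical, not real survey results): a stated trade-off between pain relief and income gains becomes a weight for comparing two interventions on a common scale.

    # Toy moral-weights sketch -- all numbers are hypothetical.
    # Suppose survey respondents, on average, judge one year free of
    # chronic pain to be as valuable as three years of doubled income.
    PAIN_YEAR_IN_INCOME_DOUBLING_YEARS = 3.0

    def wellbeing_score(pain_years_averted, income_doubling_years):
        """Score an intervention in income-doubling-year equivalents."""
        return (pain_years_averted * PAIN_YEAR_IN_INCOME_DOUBLING_YEARS
                + income_doubling_years)

    # Two hypothetical programmes with the same budget:
    pain_relief = wellbeing_score(pain_years_averted=100, income_doubling_years=0)
    cash_grants = wellbeing_score(pain_years_averted=0, income_doubling_years=250)
    print(pain_relief, cash_grants)  # -> 300.0 250.0

The point is only that the survey-elicited ratio is the load-bearing number here; different public answers would flip the ranking.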

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T14:41:13.894Z · EA · GW

What are the public's views and concerns on AI, AI ethics and AI risks?

AI regulation is going to happen. A better understanding of the public's attitudes would be useful for helping EA-aligned policy advocates ensure that the regulation designed is effective both at addressing public need and at ensuring that AI development is done in a safe way.

Comment by weeatquince on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T14:37:46.984Z · EA · GW

What are the public's views, visions and ideas of what a good future will look like?

The idea here is that a clear vision of what a good future looks like has been a key part of successful long-term policy making to date (based on experiences in Wales and Portugal). The hope is that a clear vision of what the public want makes long-term decision making feel easier for democratic policy makers, helps them to explain and justify a focus on the long-term, and ultimately helps policy-makers prioritise the long-term more.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-03T11:36:51.880Z · EA · GW

Super thanks for the lengthy answer.

 

I think we are mostly on the same page.

Decision quality is orthogonal to value alignment. ... I'm more optimistic about IIDM that's either more targeted or value-aligned.

Agree. And yes to date I have focused on targeted interventions (e.g. improving government risk management functions) and value-aligning orgs (e.g. institutions for Future Generations).

[Could] claiming that in practice work that apparently looks like "un-targeted, value-neutral IIDM" (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic.

Agree. FWIW I think I would make this case about approval voting, as I believe aligning powerful actors' (elected officials') incentives with the population's incentives is a form of value-aligning. Not sure I would make this case for forecasting, but could be open to hearing others make the case.

 

So where if anywhere do we disagree?

[What] I'm leery [of] is that influence goes both ways, and I worry that LT people who get stuck on IIDM may (eventually) get corrupted by the epistemics or values of institutions they're trying to influence, or that of other allies.

Disagree. I don’t see that as a worry. I have not seen any evidence of cases of this, and there are 100s of EA-aligned folk in the UK policy space. Where are you from? I have heard this worry so far only from people in the USA; maybe there are cultural differences or this has been happening there. Insofar as it is a risk, I would assume it might be less bad for actors working outside of institutions (campaigners, lobbyists), so I do think more EA-aligned institutions in this domain could be useful.

If we think of Lizka's B in the first diagram ("a well-run government") is only weakly positive or neutral on the value alignment axis from an LT perspective

I think a well-run government is pretty positive. Maybe it depends on the government (as you say, maybe there is a case for picking sides) and my experience is UK based. But, for example, my understanding is there is some evidence that improved diplomacy practice is good for avoiding conflicts, and mismanagement of central government functions can lead to periods of great instability (e.g. financial crises). Also, a government is a collection of many smaller institutions, and when you get into the weeds of it, it becomes easier to pick and choose the sub-institutions that matter more.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-03T01:03:27.286Z · EA · GW

Hi. Thank you so much for the link, somehow I had missed that post by Lizka. Was great reading :-)

To flag however I am still a bit confused. Lizka's post says "Personally, I think IIDM-style work is a very promising area for effective altruism", so I don’t understand how you go from that to IIDM being net-negative. I also don’t understand what the phrase "especially if (like me) your empirical views about external institutions are a bit more negative than Lizka's" means (if you think institutions are generally not doing good then IIDM might be more useful, not less).

I am not trying to be critical here. I am genuinely very keen to understand the case against. I work in this space so it would be really great to find people who think this is not useful and to understand their point of view. 

Comment by weeatquince on Disentangling "Improving Institutional Decision-Making" · 2022-01-03T00:49:59.510Z · EA · GW

Hi Lizka, WOW – Thank you for writing this. Great to see Rethink Priorities working on this. Absolutely loving the diagrams here.

I have worked in this space for a number of years, mostly here, have been advocating for this cause within EA since 2016, and advised both Jess/80K and the Effective Institutions Project on their writeups. Thought I would give some quick feedback. Let me know if it is useful.

I thought your disentanglement did a decent job. Here are a few thoughts I had on it.

  1. I really like how you split IIDM into "A technical approach to IIDM" and "A value-aligning approach to IIDM."
  2. However I found the details of how you split it to be very confusing. It left me quite unsure what goes into what bucket. For example, intuitively I would see increasing the "accuracy of governments" (i.e. aligning governments with the interests of the voters) as "value-aligning", yet you classify it as "technical".
  3. That said, despite this, I very much agreed with the conclusion that "value-oriented IIDM makes more sense than value-neutral IIDM" and the points you made to that effect.
     
  4. I didn’t quite understand what "(1) IIDM can improve our intellectual and political environment" was really getting at. My best guess is that by (1) you mean work that only indirectly leads to "(3) improved outcomes". So value-oriented (1) would look like general value spreading. Is that correct?
     
  5. I agree with "for the sake of clarity ... we should generally distinguish between 'meta EA' work and IIDM work". That said, I think it is worth bearing in mind that on occasion the approaches might not be that different. For example, I have been advising the UK government on how to assess high-impact risks, which is relevant for EAs too.*
     
  6. One institution can have many parts. Might be a thing to highlight if you do more disentanglement. E.g. is a new office for future generations within a government a new institution, or improving an existing institution?
     

One other thought I had whilst reading.

  • I think it is important not to assign value to IIDM based on what is "predictable".

    For example, you say "it would be extremely hard to produce candidate IIDM interventions that would have sufficiently predictable outcomes via this pathway, as the outcomes would depend on many very uncertain factors." Predictions do matter, but one of the key cases for IIDM is that it offers a solution to the unpredictable, the unknown unknowns, the uncertainty of the EA (and especially longtermist) endeavour. All the advice on dealing with high uncertainty and things that are hard to predict suggests that interventions like IIDM are the kinds of interventions that should work – as set out by Ian David Moss here (from this piece).

 

Finally, at points you seemed uncertain about the tractability of this work. I wanted to add that so far I have found it much, much easier than I expected. E.g. you say "it is possible that shifting the aims of institutions is generally very difficult or that the potential benefits from institutions is overwhelmingly bottlenecked by decision-making ability, rather than by the value-alignment of institutions’ existing aims". (I am perhaps still confused about what you would count as shifting aims vs decision-making ability – see my point 2 above – but) my rough take is that I have found shifting the aims of government to be fairly easy and that there are not too many decision-making bottlenecks.

So super excited to see more EA work in this space.

 

 

* Oddly enough, despite being in EA for years, I think I have found it easier to get the UK government to improve at risk identification work than the EA community. Not sure what to do about that. Just wanted to say that I would love to input if RP is working in this space.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-02T10:29:37.997Z · EA · GW

Without going into specific details of each of your counter-arguments, your reply made me ask myself: why would it be that, across a broad range of arguments, I consistently find them more compelling than you do? Do we disagree on each of these points or is there some underlying crux?

I expect if there is a simple answer here it is that my intuitions are more lenient towards many of these arguments, as I have found some amount of convergence to be a thing time and time again in my life to date. Maybe this would be an argument #11, and it might go like this:

 

#11. Having spent many years doing EA stuff, convergence keeps happening to me. 

When doing UK policy work, the coalition we built essentially combined long- and near-termist types. The main belief across both groups seemed to be that the world is chronically short-term, and if we want to prevent problems (x-risks, people falling into homelessness) we need to fix government and make it less short-term. (This looks like evidence of #1 happening in practice.) This is a form of improving institutional decision making (which looks like #6).

Helping government make good risk plans, e.g. for pandemics, came very high up the list of Charity Entrepreneurship's neartermist policy interventions to focus on. It was tractable and reasonably well evidenced. Had CE believed that Toby's estimates of risks were correct, it would have looked extremely cost-effective too. (This looks like #2.)

People I know seem to work in longtermist orgs, where talent is needed, but donate to neartermist orgs, where money is needed. (This looks like #5).

In the EA meta and community building work I have done, covering both long- and near-term causes seems advantageous. For example, Charity Entrepreneurship's model (talent + ideas > new charities) is based on regularly switching cause areas. (This looks like #6.)

Etc.

 

It doesn’t feel like I really disagree with anything concrete that you wrote (except maybe I think you overstate the conflict between long- and near-term animal welfare folk), more that you and I have different intuitions on how much all these points push towards convergence being possible, or at least not suspicious. And maybe those intuitions, as intuitions often do, arise from different lived experiences to date. So hopefully the above captures some of my lived experiences.

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2022-01-02T00:47:20.119Z · EA · GW

Thank you Luke – super helpful to hear!!

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-02T00:25:31.520Z · EA · GW

Great point!

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-02T00:23:47.781Z · EA · GW

Why are you sceptical of IIDM, meta-science, etc.? Would love to hear arguments against.

The short argument for is that insofar as making the future go well means dealing with uncertainty and things that are hard to predict, these seem like exactly the kinds of interventions to work on (as set out here).

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-02T00:19:39.726Z · EA · GW

Agree

I think the weak argument here is not: Singer has thought about this a lot and has an informed view. It is maybe something like: There is an intuition that convergence makes sense, and even smart folk (e.g. Singer) have this intuition, and intuitions are some evidence.

FWIW I don’t think that Peter Singer piece is a great piece.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2022-01-02T00:15:48.067Z · EA · GW

Yes good point. In practice that bar is too high to get much done. 

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2022-01-02T00:13:36.686Z · EA · GW

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area?

Tl;dr. Hard to judge. Maybe: Yes for GW. No for Open Phil. Mixed for EA community as a whole.

 

I think I will slightly dodge the question and answer a separate one – are these orgs doing enough exploratory-type research? (I think this is a more pertinent question; and although I think subjective wellbeing is worth looking into, it is not clear it is at the very top of the list of things that might change how we think about doing good.)

Firstly, a massive caveat: I do not know for sure. It is hard to judge, and knowing exactly how seriously various orgs have looked into topics is very hard to do from the outside. So take the below with a pinch of salt. That said:

  • OpenPhil – AOK.
    • OpenPhil (neartermists) generally seem good at exploring new areas and experimenting (and as Luke highlights, did look into this).
  • GiveWell – hmmm could do better.
    • GiveWell seem to have a pattern of saying they will do more exploratory research (e.g. into policy) and then not doing it (mentioned here, I think 2020 has seen some but minimal progress).
    • I am genuinely surprised GiveWell have not found things better than anti-malaria and deworming (sure, there are limits on how effective scalable charities can be, but it seems odd that our first guesses are still the top recommendations).
    • There is limited catering to anyone who is not a classical utilitarian – for example if you care about wellbeing (e.g. years lived with disability) but not lives saved it is unclear where to give.
  • EA in general – so-so.
    • There has been interest from EAs (individuals, Charity Entrepreneurship, Founders Pledge, EAG) on the value of happiness and addressing mental health issues, etc.
    • It is not just Michael. I get the sense the folk working on Improving Institutional Decision Making (IIDM) have struggled to get traction and funding and support too. (Although maybe promoters of new cause areas within EA always feel their ideas are not taken seriously.)
    • The EA community (not just GiveWell) seems very bad at catering to folk who are not roughly classical (or negative leaning) utilitarians (a thing I struggled with when working as a community builder).
    • I do believe there is a lack of exploratory research happening given the potential benefits (see here and here). Maybe Rethink are changing this.

Not sure I really answered the question. And anyway none of those points are very strong evidence as much as me trying to explain my current intuitions. But maybe I said something of interest.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2021-12-31T17:53:52.872Z · EA · GW

What is your argument for doing things that look good across all domains?

I think the rough argument would be

  • EA is likely to start focusing on new cause areas. EA has a significant amount of money (£bns) and is unsure (especially for longtermists) how best to spend it, and one of the key ways of finding more places to give is to explore new cause areas. Also many EAs think we will likely move to new/adjacent cause areas (source). Also EA has underinvested in cause research (link).
  • Cause areas that look promising for both neartermists and longtermists are a good bet. They are likely to have the things that neartermists filter for (e.g. quality of evidence) and that longtermists filter for (e.g. potential to leverage large impact in the long run). And at minimum we should not dismiss them because we view the convergence as "surprising and suspicious".

 

Point 10 then seems to undermine the general point you're trying to make about convergence.

Yes, to some degree the different arguments here might undermine each other. I just listed the 10 most plausible arguments I could think of. I made no effort to make sure they didn’t contradict one another (although I think contradictions are minimal).

If you want to reconcile Point 10 with the general narrative you could say something like: as a community we should do at least some of both, so for an individual with a specific skill set the relevant question would be personal strengths and choosing the cause she/he can have the most impact on (rather than longtermism or not). To be honest I am not sure I 100% agree with that, but it might help a bit.

Comment by weeatquince on Convergence thesis between longtermism and neartermism · 2021-12-31T17:35:43.523Z · EA · GW

Maybe see it as a spectrum.  For example:

  • V. strong need for empirical evidence – Probably no x-risk work meets this bar
  • Medium need for empirical evidence – I expect the main x-risk things that could meet this bar are policy change or technical safety research, where that work can be shown to be immediately useful in non-x-risk situations (e.g. improving prediction ability, vaccine technologies, etc.) as there is some feedback loop.
  • Weak need for empirical evidence – I expect most x-risk stuff that currently happens meets this bar, except for some speculative research without clear goals or justification (perhaps some of FHI's work) or things taken on trust (perhaps choosing to fund AI research where all such research is kept private, like MIRI)
  • No need for empirical evidence – All x-risk work would meet this bar

 

(Above I characterised it as a single bar for quality of evidence, and once something passes the bar you are good to go, but obviously in practice it is not that simple, as you will weigh up quality of evidence against other factors: scale, cost-effectiveness, etc.)

 

The higher you think the bar is, the more likely it is that longtermist things and neartermist things will converge. At the very top they will almost certainly converge, as you are stuck doing mostly things that can be justified with RCTs or similar levels of evidence. At the medium level convergence seems more likely than at the weak level.

I think the argument is that there are very good reasons to think the bar ought to be very very high, so convergence shouldn't be that unlikely.

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2021-12-30T13:56:52.663Z · EA · GW

To me (as someone who has funded the Happier Lives Institute) I just think it should not have taken founding an institute and 6 years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.

I think expecting orgs and donors to change direction is certainly a very high bar. But if we cannot meet it, I don’t think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2021-12-30T13:42:37.148Z · EA · GW

I just want to say that I think this is a beautifully accepting response to criticism. Not defensive. Says hey yes maybe there is a problem here. Concretely offers time and money and a plan to look into things more. Really lovely, thank you Will. 

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2021-12-29T22:10:55.905Z · EA · GW

Yes I think that is fair.

At the time (before he wrote his public critique) I had not yet realised that Phil Torres was acting in bad faith.

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2021-12-29T22:08:23.494Z · EA · GW

Agree with this.

Comment by weeatquince on Democratising Risk - or how EA deals with critics · 2021-12-29T20:48:20.521Z · EA · GW

Everything written in the post above strongly resonates with my own experiences, in particular the following lines:

the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views.

The EA community prides itself on being able to invite and process criticism. However, warm welcome of criticism was certainly not our experience in writing this paper.

I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:

  • Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (e.g. here and here) to this critique to be dismissive and to miss the point. Why can we not just say: yes, we are a new community, this area feels difficult and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stuff well?
  • My own experience (risk planning). I have some relevant expertise from engaging with professional risk managers, military personnel, counterterrorism staff and so on. I have really, really struggled to communicate any of this to EA folk, especially where it suggests that EAs are not thinking about risks well. I tend to find I get downvoted or told I am strawmanning EA. Avoiding this is possible, but only if I put in huge amounts of time and mental energy.
  • Mental health. Consider that Michael Plant has, for 6 years now, been making the case that GiveWell and other neartermist EAs don’t put enough weight on mental health. I believe his experience is mostly one of people being dismissive rather than engaging with him.
  • Other. A few years back I remember being unimpressed that EAs' response to Iason Gabriel's critique was largely to argue back and ignore it. There was no effort to see if any of the criticisms contained useful grains of truth that could help us improve EA.

Other evidence to note is that the top things EAs think other EAs get wrong (source) are "Reinventing the wheel" and "Overconfidence and misplaced deference", and many EAs worry that EA is intellectually stale / stagnant (in answers to this question). On the other hand many EA orgs are very good at recognising the mistakes they have made (e.g. with 'Our mistakes' pages), which is a great cultural thing that we as a community should be proud of.

 

I think we should also recognise that both Carla and Luke have full-time EA research jobs and they still found this time consuming; for someone without a full-time position it can become almost impossibly time consuming and draining to do a half-decent job. This essentially closes off a lot of people from critiquing EA.

 

If there was one change I would make, it would be a cultural shift so that if someone posts something critical we try to steelman it rather than dismiss it. (Here is an example of steelmanning some of Phil Torres' arguments [edit: although we should of course not knowingly steelman/endorse arguments made in bad faith].) We could also on occasion say "yes, we got this wrong and we still have much to learn" and not treat every critique as an attack.

 

Hope some extra views help.

Comment by weeatquince on Has anyone ever started an EA student group without being a student? If so, how? · 2021-12-28T22:37:39.177Z · EA · GW

Hello. Good question.

Yes. I have tried it twice (across multiple universities), once more successfully and once less successfully.
 

1. THE MASS EMAIL APPROACH

THINK and then TLYCS tried to start groups at universities around the world.

For each university we would find emails of anyone who we thought could forward a message on to a large group: anything from student representatives, to people running big student societies, to professors of topics that might have students interested in effective altruism, etc.

Using publicly available data we collated a list of 1000s of contacts, maybe 20-40 per university for 100s of universities.

We then emailed and asked them to forward a message on to their students; the message said: please start an EA chapter at your university and we will help you. The students then had an application form where they had to apply to run a group.

I think this was successful. It started maybe 20 EA chapters around the world. The groups didn’t thrive for years, but it helped get EA started and build the community in the early days.

In case helpful see: email templates and THINK summary
 

2. THE NETWORKING APPROACH  

I tried to start student groups at London universities when I was the London coordinator. I sorted a list of London universities by size to get a starting point and then essentially networked through my contacts to find EA-sympathetic people at those universities who would run a group. I then got them to agree to run the group and helped get them started by running introductory events and manning stalls at university freshers' fairs.

This led to a few university groups at least being registered, but they were very small and mostly petered out (so this was less successful). I think this is because the group leaders were less excited about running a group – they had been pushed/nudged into it by me rather than getting excited by the idea and applying to do it. (Also maybe because, given my personal network, they were more likely to be grad/PhD students rather than undergrads, so it was a bit harder for them to engage with other students.)

That said I do think some of these groups are still going successfully today.

 

I hope that helps.

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-27T09:40:34.694Z · EA · GW

Thank you Pablo. Have edited my review. Hopefully it is fairer and clearer now. Thank you for the helpful feedback!!

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-27T09:39:20.665Z · EA · GW

My initial review was as follows:

(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding this talk to the EA Forum; comments below refer to the contents of the talk.)

I gave this a decade review downvote and wanted to set out why.

I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.

It does however do a little bit of reinventing of the wheel. Now EAs across the board can, I think fairly, be criticised for reinventing the wheel. In fact this survey of 40 EA leaders found that "The biggest concern and a trend that came up again and again was that EAs tend to reinvent the wheel a lot".
In this talk (and other work) the author defines and introduces the idea of "cluelessness". This serves a purpose, but it is done without any mention of the myriad of existing terms that essentially mean the same thing, such as "uncertainty", "deep uncertainty", "Knightian uncertainty", "wicked problems", "extreme model uncertainty", "fragile credences", etc. The author then suggests 5 responses to cluelessness without mentioning the decades of research that have gone into the above topics and the existing ways humans deal with these issues.

Ultimately this should not be a big deal. We all invent terminology from time to time, or borrow from domains we are familiar with to explain what is on our mind. It is not a big sin and can normally be shrugged off.

Unfortunately this author has had the bad luck that her new terminology stuck. And it stuck pretty hard. There is a "cluelessness" tag on the EA wiki and over 450 pages on the EA Forum mention "cluelessness". Reflecting back, and talking to other EAs a year later, I think this [edit: invented] term may have been harmful for EA discourse. I expect it has led to people being unaware of the troves of academic (and other) work done to date on managing high levels of uncertainty and managing risks, and to confusion and ongoing wheel reinventing.

Suggested follow up (if any) might be things like replacing the "cluelessness" wiki page with another term and for people to stop using the term as much as possible.

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-24T19:50:51.768Z · EA · GW

Hmmm ... I am not sure what it means that EAs use the term "cluelessness" incorrectly. I honestly never hear the term used outside of EA, so I have been assuming the way EAs use it is the only way (and hence correct); maybe I have been using it incorrectly.

Would love some more clarity if you have time to provide it!

As far as I can tell

  • "complex clulessness" as defined by Hilary here just seems to be one specific form of (or maybe specific way of rephrasing) deep uncertainty , so a subcategory of "Knightian uncertainty" as defined by Wikipedia or "Deep uncertainty" as defined here.
  • "clulessness" as it is most commonly used by EAs seems to be the same as "Knightian uncertainty" as defined by Wikipedia or "deep uncertainty" as defined here.

Is that correct?

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-24T19:39:48.650Z · EA · GW

Hi Jack, lovely to get your input.



Sure, "cluelessness" is a long standing philosophical term that is "an argument against ethical frameworks themselves, including consequentialism". Very happy to accept that.

But that doesn’t seem to be the case here in this talk. Hilary says "how confident should we be really that the cost-effectiveness analysis we've got is any decent guide at all to how we should be spending our money? That's the worry that I call 'cluelessness'". This seems to be a practical decision making problem.

Which is why it looks to me like a term has been borrowed from philosophy and used in another context. (And even if that was never the intent, it seems to me that people in EA took the term to point to the practical challenges of making decisions under uncertainty.)

 

Borrowing terms happens all the time, but unfortunately in this case it appears to have caused some confusion along the way. It would have been simpler to keep the philosophy term in the philosophy box to talk about topics such as the limits of knowledge and so on, and to use one of the terms from decision making (like "deep uncertainty") to talk about practical issues like making decisions about where to donate given the things we don’t know, and keep everything nice and simple.

But also it is not really a big deal. Kind of confusing / pet-peeve level, but no-one uses the right words all the time; I certainly don’t. (If there is a thing this post does badly it is the reinventing-the-wheel point, see my response to Pablo above, and the word choice is a part of that broader confusion about how to approach uncertainty.)

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-24T19:37:26.475Z · EA · GW

The EA Forum wiki has talk pages!! Wow you learn something new every day :-)

 

Separately, I think Hilary's talk is a valuable contribution to the problem, so I don't think it warrants a negative evaluation

Yes, I think that is ultimately the thing we disagree on. And perhaps it is one of those subjective things that we will always disagree on (e.g. maybe different life experiences mean you read some content as new and exciting and I read the same thing as old and repetitive).

 

If I had to condense why I don’t think it is a valuable contribution, it is that it looks to me (given my background) like it is reinventing the wheel.

The rough topic of how to make decisions under uncertainty about the impact of those decisions (uncertainty about what the options are, what the probabilities are, how to decide, what is even valuable, etc.) in the face of unknown unknowns is a topic that military planners, risk managers, academics and others have been researching for decades. And they have a host of solutions: anti-fragility, robust decision making, assumption-based planning, sequence thinking, adaptive planning. And they have views on when to make such decisions, when to do more research, how to respond, etc.

I think any thorough analysis of the options for addressing uncertainty/cluelessness really should draw on some of that literature (before dismissing options like "make bolder estimates" / "make the analysis more sophisticated"). Otherwise it would be like trying to reinvent the wheel, suggesting it should be square and then concluding it cannot be done and wheels don’t work.

 

Hope that explains where I am coming from.

 

(PS. To reiterate, in Hilary's defense, EAs reinvent wheels all the time. No. 1 top flaw and all that. I just think this specific case has led to lots of confusion. E.g. people thinking there is no good research into uncertainty management.)

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-23T01:04:48.749Z · EA · GW

To be a bit more concrete, I spend my time talking to politicians, policy makers, risk managers, climate scientists, military strategists, activists. I think most of these people would understand "deep uncertainty" and "wicked problem" but less so "cluelessness", and they would mean the same thing by those terms as this post means by "cluelessness". I think the fact that "cluelessness" became the popular term in EA has made things a bit more challenging for me.

I recognise that expecting people to police their language against the possibility that some term they introduce to their audience is suboptimal is a high bar. Philosophers use philosophy language and that is obviously fine. I just wish "cluelessness" hadn't been the term that seemed to stick in EA and that one of these other words had been used (and also I think that the talk could have benefited from recognising that this is an issue that gets attention and has reasonable solutions outside of philosophy).

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-23T00:38:03.710Z · EA · GW

Yes, you are correct. I am not an expert here but my best guess is the story is something like:

  • "Moral cluelessness" was a philosophical term that has been around for a while.
  • Hilary borrowed the philosophy term and extended it to discuss "complex cluelessness" (which a quick Google makes me think is a term she invented).
  • "Complex cluelessness" is essentially identical to "deep uncertainty" and similar concepts (at least as far as I can tell from reading her work; I think it was this paper I read).
  • This and other articles then shorthanded "complex cluelessness" to just "cluelessness".

I am not sure exactly, happy to be corrected. So maybe not an invented term but a borrowed, slightly changed and then rephrased term. Or something like that. It all gets a bit confusing.

And sorry for picking on this talk if Hilary was just borrowing ideas from others, just saw it on the Decade Review list.

– – 

Either way I don’t think this changes the point of my review. It is of course totally fine to invent / reinvent / borrow terminology (in fact in academic philosophy it is almost a requirement, as far as I can tell). And it is of course fine for philosophers to talk like philosophers. I just think sometimes adding new jargon to the EA space can cause more confusion than clarity, and this has been one of those times. I think in this case it would have been much better if EA had got into the habit of using the more common, widely used terminology that is more applicable to this topic (this specific topic is not, as far as I can tell, a problem where philosophy has done the bulk of the work to date).

And insofar as the decade review is about reviewing what has been useful 1+ years later, I would say this is a nice post that has in actuality turned out, unfortunately, to be dis-useful / net harmful. Not trying to place blame. Maybe there is just a lesson for all of us on being cautious about introducing terminology.

Comment by weeatquince on When can I eat meat again? · 2021-12-22T23:14:30.851Z · EA · GW

In the past year I have seen a lot of disagreement on when cultivated meat will be commercially available, with some companies and advocates saying it will be a matter of years and some skeptics claiming it is technologically impossible. This post is the single best thing I have read on this topic. It analyses the evidence from both sides, considers the rate of technological progress that will be needed to lead to cultivated meat, and makes realistic predictions. There is a high degree of reasoning transparency throughout. Highly recommended.

Comment by weeatquince on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-22T23:01:26.293Z · EA · GW

(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding this talk to the EA Forum; comments below refer to the contents of the talk.)

I gave this a decade review downvote and wanted to set out why. 

 

Reinventing the wheel

I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.

However I think the talk involves some amount of "reinventing the wheel" (ignoring and attempting to duplicate existing research). 

In the talk Hilary raises the problem of cluelessness and discusses five possible solutions to this problem. The problem (at least as it is defined in this talk) appears to relate to having confidence in decisions made under situations of uncertainty, where there are hard/impossible-to-measure factors.

Now the rough topic of how to make decisions under uncertainty (uncertainty about options, probabilities, values, unknown unknowns, etc.) is a topic that military planners, risk managers, academics and others have been researching for decades. And they have a host of solutions: anti-fragility, robust decision making, assumption-based planning, sequence thinking, adaptive planning. And they have views on when to make such decisions, when to do more research, how to respond, and how confident to be.

Hilary does not reference any of that work or flag it to the reader at any point in her talk. I honestly think any thorough analysis of the options for addressing uncertainty/cluelessness really should be drawing on some of that existing literature.
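
To make concrete what one of these tools looks like, here is a minimal sketch (with invented payoffs, and using minimax regret as just one flavour of robust decision making): instead of maximising expected value under one set of probabilities, you pick the option whose worst-case shortfall, relative to the best available choice in each scenario, is smallest.

    # Toy robust decision making (minimax regret); all payoffs invented.
    # Rows are options; columns are scenarios with unknowable probabilities.
    payoffs = {
        "bold bet":   [100, -50, -50],
        "hedged mix": [40, 20, 10],
        "do nothing": [0, 0, 0],
    }
    scenarios = range(3)
    best = [max(p[s] for p in payoffs.values()) for s in scenarios]

    def worst_regret(option):
        """Largest shortfall vs the best choice, across all scenarios."""
        return max(best[s] - payoffs[option][s] for s in scenarios)

    print(min(payoffs, key=worst_regret))  # -> "hedged mix"

An expected-value maximiser with confident probabilities might pick "bold bet"; the robust chooser picks the option that is never far from the best, which is the flavour of reasoning these literatures formalise.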

 

Does this matter?

Normally this would not be a big deal: EA authors reinvent the wheel all the time (this survey suggests it is EA's No. 1 top flaw), so avoiding it is a very high bar to hold an author/speaker to. However I think in this specific instance it appears to have sown confusion and been harmful to EA discussions of this topic. It has been my impression that EA readers are very aware of the practical decision making challenges of cluelessness but very unaware of the research and solutions.

Ultimately this is a very subjective claim. Some additional supporting evidence might be things like:

  • People working in longtermist research at multiple EA organisations have expressed similar views and concerns to me.
  • There are many anecdotal cases of EAs discussing cluelessness but not the solutions. (Even in the comments below Pablo says "In your follow-up comment, you say that the problem 'has reasonable solutions', though I am personally not aware of any such solution".)
  • Searches of the site show 327 pages on the EA Forum that mention "cluelessness", compared to 21 for "robust decision making", 37 for "sequence thinking", 82 for "Knightian uncertainty", 166 for "deep uncertainty", etc.

 

Suggested follow up.

One interesting solution might be that whenever referring to practical decision making challenges, the term "cluelessness" (which appears to be a niche philosophical term) could be replaced with terms more common in the decision making literature, such as "deep uncertainty" or "Knightian uncertainty"; for example on the EA wiki or in future posts.

 

NOTE: This review has been edited to reflect comments below. I will post the initial review below as well for posterity. See here.

Comment by weeatquince on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2021-12-22T22:20:06.884Z · EA · GW

This post is a really good example of an EA organisation being open and clear to the community about what it will and will not do.

I still have disagreements about the direction taken (see the top comment of the post), but I often think back to this post when I think about being transparent about the work I am doing. Overall I think it is great for EA orgs to write such posts and I wish more groups would do so.

Comment by weeatquince on Do you have an example impact calculation for a high-impact career? · 2021-12-20T00:58:32.666Z · EA · GW

Here is one comparing founding a charity and donating to charity: https://forum.effectivealtruism.org/posts/drRsWTctSqNRveK56/what-is-the-expected-value-of-creating-a-givewell-top

(Conclusion was something like: founding and running a charity may be worth "$220K/yr to $720K/yr" of donations to top GiveWell charities)
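
(For anyone wanting the rough shape of such a calculation, here is a minimal sketch with hypothetical numbers – not the linked post's actual model:

    # Toy career-comparison sketch -- every number is hypothetical.
    p_success = 0.2                # chance the charity reaches GiveWell quality
    money_moved_per_year = 2e6     # donations it would then direct ($/yr)
    counterfactual_discount = 0.5  # fraction of that value happening anyway

    founder_value = p_success * money_moved_per_year * (1 - counterfactual_discount)
    print(f"${founder_value:,.0f}/yr")  # -> $200,000/yr

You would then compare that figure against what you could donate yourself; a range like the one quoted presumably comes from varying inputs like these.)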

Comment by weeatquince on Does anyone know of any work that investigates whether private schools add value to society vs only change *who* attains socioeconomic success? · 2021-12-20T00:50:25.132Z · EA · GW

Maybe not quite what you are looking for, but Lant Pritchett's work on schooling, such as his paper "Where has all the education gone?", looks at whether schooling in general adds value to society vs only changes who attains socioeconomic success.

Comment by weeatquince on A huge opportunity for impact: movement building at top universities · 2021-12-18T07:34:53.556Z · EA · GW

This is SUPER EXCITING!! Amazing opportunity. Go you. :-)

Comment by weeatquince on A huge opportunity for impact: movement building at top universities · 2021-12-18T07:33:10.520Z · EA · GW

I'm not sure you'd need to filter significantly more than at other universities. That implies you think students at non-top universities would, as a proportion, be less interested in EA, which seems far from obvious. Could just have a really big group.

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-17T00:39:59.833Z · EA · GW

Hey Neel. It doesn't add much (still not had time for a top-level post), but in case it is helpful, a bit more of my reasoning is set out here.

Comment by weeatquince on Where are you donating in 2021, and why? · 2021-12-17T00:30:41.223Z · EA · GW

This year I am shifting £10k into a Donor Advised Fund so that I have some resources available for active grant making.

I think there is a reasonable chance I will come across opportunities to get people to start new high-impact projects, and in doing so could have more impact than if I gave to the EA Funds. My reasoning for thinking this is set out below. I think fairly small sums of money (~£15-30k, which could fund a salary for 6-12 months) could be enough to get someone to take a career risk and try a new project. I don’t earn enough to comfortably give that much in a single year, so putting something aside to make sure I have access to funds feels useful.

Happy to receive feedback on my reasoning. Happy to receive suggestions of smallish projects to fund. If anyone is considering a new project (especially in longtermist policy) do reach out to me – maybe I can help you get it going!!
 

***
Why aim to do active grant making?

  1. Based on track record and learnings. Each year I reflect on my previous year's giving, what I have learned, what I did well and what I could have done better. When reflecting on my past donations I note that I am particularly happy with my own (~£15k) attempt at active grant making: encouraging Natalie and Tildy to scale up the APPG for Future Generations (which I later ended up working for), which I think has been very impactful.
     
  2. I have seen it work as a recipient. I have been the recipient of active grant making (~£15k) that prompted me to work full time for EA London (back before there was accessible EA funding for community building from any institutions) and am extremely grateful for this. This seems to have been an effective donation ahead of the curve, as since then there has been a massive scale-up in resources for local community organisers all around the world.
     
  3. Good theory of change. Not everyone who has a good idea will necessarily have the confidence to take it forward and make it a reality. It seems reasonable that as someone who is well networked in EA I could bump into these people. I have some familiarity with both seeking and giving funding and I expect supporting such potential entrepreneurs with my own money could be the thing they need to get going. 
     
  4. I have relevant expertise in policy and the EA Funds do not. In particular the Long-Term Future Fund receives numerous applications for policy projects (I know of at least a few) but does not appear to fund any such work. My charitable guess is this is because they lack the expertise to vet such projects. Longtermist policy seems to be extremely tractable right now (at least the UK, Ireland, the UN and the OECD all seem to be very receptive). It seems plausible that I can find ways to support such projects.
    Similarly the Global Health and Development Fund does not appear to support small start-up projects, and I have some expertise in that too.
     

***

I also donated about £450 via the Every.org matching campaign so as to have donations doubled, with $300 to animal welfare (THL, GFI, Animal Welfare Index), $200 to global poverty (AMF, Malaria Consortium) and $100 to EA meta (Rethink Priorities).

Note: the donation to the DAF is via a donation swap, as I don’t have my own DAF.

Comment by weeatquince on An Emergency Fund for Effective Altruists · 2021-12-12T10:08:11.674Z · EA · GW

FWIW, in case helpful for anyone to know, I am pretty sure that in the UK this wouldn’t be eligible for gift aid (the tax relief system for charitable donations), as gift aid cannot be claimed if the donor is receiving a benefit, and this would count as a benefit. Capping the size of the "insurance fund" would not solve this (or at least not with any simple system I can think of).

Comment by weeatquince on EU AI Act now has a section on general purpose AI systems · 2021-12-09T16:30:12.109Z · EA · GW

Thank you for the update – super helpful to see.

 

What are your reactions to this development?

My overall views are fairly neutral. I lean in favour of this addition, but honestly it could go either way in the long run.

 

The addition means developers of general AI will basically be unregulated. On the one hand being totally unregulated is bad as it removes the possible advantages of oversight etc. But on the other hand applying rules to regulate general AI in a way similar to how this act regulates high-risk AI would be the wrong way to regulate general AI. 

In my view no regulation seems better than inappropriate regulation, and still leaves the door open to good regulatory practice. Someone else could argue that restrictive, inappropriate regulation would slow down EU progress on general AI research and that this would be good. I can understand the case for that, but in my view the evidence for the value of slowing EU general AI research is weak, and my general preference for not building inappropriate or broken systems is stronger.

 

(Also the addition removes the ambiguity that was in the act as to whether it applied to general AI products, which is good as legal clarity is good.)

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T23:59:04.662Z · EA · GW

I think policy is a domain where there are still many opportunities for spotting and funding projects early, and having more impact than donating to the EA Funds. As mentioned it may be very hard for the EA Funds to do this kind of work.

That said, I think this only applies if you have (or know and trust someone who has) relevant expertise. There are risks, and for plenty of the projects I have come across I thought: this should not be funded.

(I had planned to write a whole post on this and on how to do active grant-making well as a small donor – not sure if I will have time, but maybe.)

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T23:53:39.383Z · EA · GW

This area does seem strangely neglected still. For example, based on grant pay-outs and feedback received, I believe that:

  • Open Phil does not fund new longtermist-adjacent policy advocacy projects – with the exception of CSET (a project run by a very senior political figure) – although they do give to existing policy projects such as existing large biosecurity organisations.
  • The Long-Term Future Fund receives numerous applications for policy projects (I know of a few and am sure there have been more) but does not appear to have funded any such work. (As far as I am aware the APPG is the only policy advocacy project the LTFF ever funded, and that was when it was already going and had built traction; they wouldn't fund it to do anything new or different from what it was already doing because of the risks.)

My charitable take is that the LTFF's (and maybe others') lack of focus on policy is because of the challenge of vetting policy work. Consider for example that ideally, to vet policy advocacy projects well, you would have an expert who understands policy in the specific country the grant is for, but having policy experts for every country is impractical.

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T23:43:16.954Z · EA · GW

Grants for UK policy work focused on the long-term. In particular the APPG for Future Generations and CLTR.

 

The bigger EA donors have typically been (and still are) incredibly sceptical of funding policy work, and there has been almost no* funding in this space for smaller new projects (the SAF Fund might change that, although funding rounds are infrequent). Big donors tend to be sceptical due to:

  • Concern about the reputational risk of this work.
  • Lack of expertise in donor organisations to vet such projects.

 

So, out of my personal donations, one I am particularly proud of is probably offering initial funding to get the APPG for Future Generations growing and encouraging the people running it to hire and expand. (I don’t know the full funding picture for these orgs, but I believe that, as well as my donation, they were initially funded by donors who were enthusiastic about policy but outside the big EA grantmakers.)

 

I believe (though I am biased) that this was a great call and that UK policy work has gone shockingly well. The APPG for Future Generations and CLTR have led to policy wins feeding into the UK government making resilience a priority, improving the UK's ability to manage unexpected extreme risks, improving the UK's preparedness for future pandemics, and improving how UK policy makers consider the long-term.

(It is also notable that there have not been reputational problems, and some conversations suggest that the risk-conscious approach taken here may actually have reduced some reputational risks.)

 

*** Disclaimer: I work for the APPG. Views all my own but expect bias. ***

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T22:53:34.266Z · EA · GW

If you count seed funding as part of an incubation program I think Charity Entrepreneurship (CE) is probably the absolute king of "spotting great opportunities early".

Their earliest global poverty charities, Fortify Health and Suvita, incubated and seed funded by CE, went on to get GiveWell Incubation grants. Fish Welfare Initiative and Animal Advocacy Careers were incubated and seed funded by CE and have been funded by the EA Animal Welfare Fund. Happier Lives Institute was incubated and seed funded by CE and has now been funded by the Infrastructure Fund. I expect there are other examples and newer CE charities will do just as well.

If donors want to spot great opportunities early, being on the CE seed funding network (or even just donating to CE) seems a good way to do it. (If interested get in touch).

*** Disclaimer: I now work for CE. Views all my own but expect bias. ***
 

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T22:39:07.537Z · EA · GW

I think credit should also go to those in the community who applied for and successfully won grants for projects that were not their own. Making sure a larger EA donor was aware of a particular project is one way to beat (in a very collaborative sense) the larger grant-makers. It is basically doing active grant making, but at low risk and without needing the personal ability to fund.

One example I know of is Chris Chambers' work on Registered Reports. The giving opportunity was picked up on / created by Hauke Hildebrandt in a Lets Fund report. The application to fund it was made by Jacob Hilton (who had no connection to the issue but just thought someone should do an application) and the grant was given by the EA Infrastructure Fund (details here).

If others have more examples of this it would be great to hear them.

Comment by weeatquince on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T16:08:35.319Z · EA · GW

Early EA community building in London.

Back in 2015 there were no funds available for local community building – no funds from CEA, community grants, EA Funds, etc. There was general scepticism about whether it was worth funding such local work.

Kudos and many thanks go to Alex Gordon-Brown for initially suggesting that he could fund a local community organiser and to Kit Harris for initial funding and to many others in the London community for additional contributions.

This seems to have been a great call (although I might be biased as it was my job). Certainly since then there has been a massive scale-up in resources for local community organisers all around the world. There has been a focus on community and in-person outreach as a particularly effective way to spread EA ideas without the message becoming garbled (i.e. it is more high fidelity). And of course the London community grew a lot. But it took individual donors actively pushing for this to happen, and funding it when others were sceptical, in order to lead the way.

Comment by weeatquince on [deleted post] 2021-11-30T09:05:52.241Z

Regulatory type interventions (pre-deployment):

  • Regulatory restriction (rules on what can be done)
  • Regulatory oversight (regulators)
  • Industry self-regulation
  • Industry (& regulator) peer reviews systems
  • Fiduciary duties
  • Senior management regimes
  • Information sharing regimes
  • Whistleblowing regimes
  • Staff security clearances
  • Cybersecurity of AI companies
  • Standardisation (to support ease of oversight etc)
  • Clarity about liability & legal responsibility
  • Internal government oversight (all of the above applied internally by government to itself, e.g. internal military safety best practice)

Technical type interventions (pre-deployment):

  • AI safety research

 

Defence in depth type interventions (post-deployment):

  • Windfall clauses etc
  • Shut-off switches for AI systems
  • AIs policing other AIs' behaviours
  • Internet / technology shut-off systems