Posts

Coronavirus and long term policy [UK focus] 2020-04-05T08:29:08.645Z · score: 51 (22 votes)
Where are you donating this year and why – in 2019? Open thread for discussion. 2019-12-11T00:57:32.808Z · score: 69 (24 votes)
Managing risk in the EA policy space 2019-12-09T13:32:09.702Z · score: 65 (31 votes)
UK policy and politics careers 2019-09-28T16:18:43.776Z · score: 28 (14 votes)
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z · score: 22 (10 votes)
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z · score: 12 (9 votes)
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z · score: 11 (11 votes)
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z · score: 13 (11 votes)
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z · score: 14 (14 votes)
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z · score: 3 (3 votes)
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z · score: 12 (18 votes)
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z · score: 3 (3 votes)
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z · score: 5 (9 votes)
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z · score: 12 (12 votes)
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z · score: 11 (13 votes)
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z · score: 0 (0 votes)
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z · score: 5 (3 votes)
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z · score: 10 (10 votes)
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z · score: 0 (0 votes)

Comments

Comment by weeatquince_duplicate0-37104097316182916 on Reducing long-term risks from malevolent actors · 2020-05-06T07:19:38.305Z · score: 5 (4 votes) · EA · GW

Thank you for the insight. I really have no strong view on how useful each (or any) of the ideas I suggested was. They were just ideas.

I would add on this point that the narcissistic politicians I have encountered worried about their appearance and about bad press. I am pretty sure that transparency, fact checking, etc. discouraged them from making harmful decisions. Not every narcissistic leader is like Trump.

Comment by weeatquince_duplicate0-37104097316182916 on Update from the Happier Lives Institute · 2020-05-03T09:26:32.711Z · score: 7 (4 votes) · EA · GW

Amazing job Clare and Michael and everyone else involved. Keep up the good work.

As mentioned previously, I would be interested, further down the line, to see a broad cause prioritisation assessment that looks at how SWB metrics might shed insight on how we compare global health, global economic growth, improving decisions, farmed animal welfare, existential risk prevention, etc.

Comment by weeatquince_duplicate0-37104097316182916 on Reducing long-term risks from malevolent actors · 2020-05-03T08:50:45.687Z · score: 40 (18 votes) · EA · GW

Hi, interesting article. Thank you for writing.

I felt that this article could have said more about possible policy interventions and that it dismisses policy and political interventions as crowded too quickly. Having thought a bit about this area in the past I thought I would chip in.

MORE ON POLICY INTERVENTIONS

Even within established democracies, we could try to identify measures that avoid excessive polarization and instead reward cross-party cooperation and compromise. ... (For example, effective altruists have discussed electoral reform as a possible lever that could help achieve this.)

There are many things that could be done to prevent malevolent leaders within established democracies. Reducing excessive polarization and electoral reform are two minor ones. Other ideas you do not discuss include:

  • Better mechanisms for judging individuals. E.g. ensuring 360 feedback mechanisms are routinely used to guide hiring and promotion decisions as people climb political ladders. (I may do work on this in the not too distant future.)
  • Less power to individuals. E.g. having elections for parties rather than leaders. (Conservative MPs in the UK could at any time decide that Boris Johnson is no longer fit to be leader and replace him with someone else; Republicans cannot do this with Trump, and Labour MPs in the UK cannot do this with a Labour leader to the same extent.)
  • Reduce the extent to which corruption / malevolence is beneficial for success. There are many ways to do this. In particular, reducing the extent to which raising money is a key factor in an individual's political success (in the UK most political fundraising is for parties, not individuals). Also reducing the extent to which dishonesty pays, for example with better fact-checking services.
  • More checks and balances on power. A second house. A constitution. More independent government institutions (central banks, regulators, etc – I may do some work in this space soon too). More transparency of political decision making. Better complaint and whistle-blowing mechanisms. Limits on use of emergency powers. Etc.

ARE POLITICAL INTERVENTIONS CROWDED?

Alternatively, we could influence political background factors that make malevolent leaders more or less likely... interventions to promote democracy and reduce political instability seem valuable—though this area seems rather crowded.

You might be correct, but this feels a bit like saying the AI safety space is crowded because lots of groups are trying to develop AI. However it may not be the case that those groups are focusing as much on safety as you would like. Although there are many groups (especially nation states) that want to promote democracy there may be very specific interventions that prevent malevolent leaders that are significantly under-discussed, such as elections for parties rather than leaders, or other points listed above. It seems plausible that academics and practitioners in this space may be able to make valuable shifts in the way fledgling democracies are developing that are not otherwise being considered.

And as someone working in the improving-government-institutions space in the UK, it is not evident to me that there is much focus on the kinds of interventions that would limit malevolent leaders.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T17:26:38.213Z · score: 7 (5 votes) · EA · GW

Hi Ben, I think you are correct that the main difference in our views is likely to be the trade-off between breadth/inclusivity versus expected impact in key areas. I think you are also correct that this is not a topic that either of us could do justice to in this thread (I am not sure I could truly do it justice in any context without a lot of work, although I am always happy to try). And ultimately my initial disappointment may just be from this angle.

I do think historically 80K has struggled more than others (CEA / GiveWell / etc.) in communicating its priorities to the EA community, and it seems like you recognise this has been a challenge. I think perhaps it was overly harsh of me to say that 80K was "clearly doing something wrong". I was focusing only on the communications front. Maybe the problems were unavoidable, or the past decisions made were the net best decisions given various trade-offs. For example, maybe the issues I pointed to were just artifacts of 80K at the time transitioning its messaging from more of a "general source of EA careers advice" to more of a cause-focused approach. (It is still unclear to me if this is a messaging shift or a strategy shift.) Getting messaging spot on is always super difficult and time consuming.

Unfortunately, I am not sure my thoughts here have led to much that is concretely useful (but thank you for engaging). I guess if I had to summarise some key points I would say: I am super in favour of transparency about priorities (and in that regard this whole post is great); if you are focusing more on your effect on the effective altruism movement then local community organisers might have useful insights (and CEA etc. have useful expertise); if 80K gets broader over time that would be exciting to me; and I know I have been critical, but I am really impressed by how successful you have made 80K.

Comment by weeatquince_duplicate0-37104097316182916 on Coronavirus and long term policy [UK focus] · 2020-04-26T13:12:04.635Z · score: 3 (2 votes) · EA · GW

Hi, thank you, some super useful points here. I will look at some of the BBSRC reports. I know about NC3Rs and think it is a good approach.

Only point I disagree with:

In terms of having a minister for dual use research, this seems quite a high-cost ask, and low worth. I think Piers Millett's suggestion of a liaison officer is more useful.

To clarify, this is not a new Minister but adding this area of responsibility to an existing Ministerial portfolio, so it is not at all a high-cost ask (although ideally it would be done in legislation, which would be higher cost).

I think this is needed because, however capable the civil service is at coordination, there needs to be a Minister who is interested and held accountable in order to drive change and maintain momentum.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T12:17:02.445Z · score: 35 (15 votes) · EA · GW

Hi Ben, Thank you for the thoughtful reply. Super great to see a greater focus on community culture in your plans for 2020. You are always 2 steps ahead :-)

That said I disagree with most of what you wrote.

Most of your reply talks about communications hurdles. I don’t think these pose the barrier you think they pose. In fact the opposite: I think the current approach makes communications and mistrust issues worse.

You talk about the challenge of being open about your prioritisation while also being open to giving advice across causes, the risks of appearing to bait and switch, transparency vs being demoralising. All of these issues can be overcome, and have been overcome by others in the effective altruism community and elsewhere. Most local community organisers and CEA staff have a view on which cause they care the most about yet still manage an impartial community and impartial events. Most civil servants have political views but still provide impartial advice to Ministers. Solutions involve separating your prioritisation from your impartial advice, having a strong internal culture of impartiality, being open about your aims and views, being guided by community interests, etc. This is certainly not always easy (hence why I had so many conversations about how to do this well) but it can be done.

I say the current approach makes these problems worse. Firstly, thinking back to my time focused on local community building (see examples above), it appeared to me that 80000 Hours had broken some of the bonds of trust that should exist between 80000 Hours and its readership. It seemed clear that 80000 Hours was doing something wrong and that more impartiality would be useful. (Although take this with a pinch of salt as I have been less in this space for a few years now.) Secondly, it seems surprising to me that you think the best communications approach for the effective altruism community is to have multiple organisations in this space for different causes, with 80000 Hours being an odd mix of everything and future-focused. A single central organisation with a broader remit would be much clearer. (Maybe something like franchising out the 80000 Hours brand to these other organisations, if you trust them, could solve this.)

I fully recognise there are some very difficult trade-offs here: there is huge value in doing one thing really well, costs of growing a team too quickly to delve into more areas, costs of having lower impact on the causes you care about, costs of switching strategy, etc.

Separately to the above I expect that I would place a much stronger emphasis than you on epistemic humility and have more uncertainty than you about the value of different causes and I imagine this pushes me towards a more inclusive approach.

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T09:56:38.707Z · score: 12 (8 votes) · EA · GW

Hi Michelle, Firstly I want to stress that no one in 80,000 Hours needs to feel bad because I was unimpressed with some coaching a few years ago. I honestly think you are all doing a really difficult job and doing it super well and I am super grateful for all the coaching I (and others) have received. I was not upset, just concerned, and I am sure any concerns would have been dealt with at the time.

(Also worth bearing in mind that this may have been an odd case as I know the 80K staff and in some ways it is often harder to coach people you know as there is a temptation to take shortcuts, and I think people assume I am perhaps more certain about far future stuff than I am.)

--
I have a few potentially constructive thoughts about how to do coaching well. I have included them in case they are helpful, although I am slightly wary of writing these up because they are a bit basic and you are a more experienced career coach than me, so do take this with a pinch of salt:

  • I have found it works best for me to break sessions into areas where I am only doing traditional coaching (mostly asking questions) and a section(s), normally at the end, where I step back from the coach role into an adviser role and give an opinion. I clearly demarcate the difference, tend to ask permission before giving my opinion, and tend to caveat how they should take my advice.
  • Recording and listening back to sessions has been useful for me.
  • I do coaching for people who have different views from me about which beneficiaries count. I do exercises like asking them how much they care about 1 human vs 100 pigs vs humans in 100 years, and work up plans from there. (This approach could be useful to you, but I expect it is less relevant as I would expect much more ethical alignment among the people you coach.)
  • I often feel that personally being highly uncertain about which cause paths are most important is helpful to taking an open mind when coaching. This may be a consideration when hiring new coaches.

Always happy to chat if helpful. :-)

Comment by weeatquince_duplicate0-37104097316182916 on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-21T08:44:46.200Z · score: 64 (35 votes) · EA · GW

In many ways this post leaves me feeling disappointed that 80,000 Hours has turned out the way it did and is so focused on long-term future career paths.

- -

Over the last 5 years I have spent a fair amount of time in conversation with staff at CEA and with other community builders about creating communities and events that are cause-impartial.

This approach is needed for making a community that is welcoming to and supportive of people with different backgrounds, interests and priorities; for making a cohesive community where people with varying cause areas feel they can work together; and where each individual is open-minded and willing to switch causes based on new evidence about what has the most impact.

I feel a lot of local community builders and CEA have put a lot of effort into this aspect of community building.

- -
Meanwhile it seems that 80000 Hours has taken a different tack. They have been more willing, as part of trying to do the most good, to focus on the causes that the staff at 80000 Hours think are most valuable.

Don’t get me wrong, I love 80000 Hours; I am super impressed by their content and glad to see them doing well. And I think there is a good case to be made for the cause-focused approach they have taken.

However, in my time as a community builder (admittedly a few years ago now) I saw the downsides of this. I saw:

  • People drifting away from EA. E.g. someone telling me they were no longer engaging with the EA community because they felt that it was now all long-term future focused, pointing to 80000 Hours as the evidence.
  • People feeling that they needed to pretend to be long-termism focused to get support from the EA community. E.g. someone telling me that when they wanted career coaching they “read between the lines and pretended to be super interested in AI”.
  • Personally feeling uncomfortable because it seemed to me that my 80000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else (including paths that progressed my career yet kept my options more open to different causes).
  • Concerns that the EA community is doing a bait-and-switch tactic of “come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.”

- -

“80,000 Hours’ online content is also serving as one of the most common ways that people get introduced to the effective altruism community”

So, Ben, my advice to you would firstly be to be super proud of what you have achieved. But also to be aware of the challenges that 80000 Hours’ approach creates for building a welcoming and cohesive community. I am really glad that 20% of the content on the podcast and the job board goes into broader areas than your priority paths, and I would encourage you to find ways for 80000 Hours to put more effort into these areas, do some more online content on them, and think carefully about how to avoid the risks of damaging the EA brand or the EA community.

And best of luck with the future.

Comment by weeatquince_duplicate0-37104097316182916 on What posts you are planning on writing? · 2020-02-03T10:04:59.828Z · score: 2 (1 votes) · EA · GW

Hi, I'd be interested and have been thinking about similar stuff (measuring the impact of lobbying, etc.) from a UK policy perspective.

If helpful happy to chat and share thoughts. Feel free to get in touch to: sam [at] appgfuturegenerations.com

Comment by weeatquince_duplicate0-37104097316182916 on Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' · 2020-01-29T13:02:13.351Z · score: 5 (3 votes) · EA · GW

This is excellent. Very well done.


It crossed my mind to ponder whether much can be said about where different categories* of risk prevention are under-resourced. For example, it may be that the globe spends enough resources on preventing natural risks, as we have seen them in the past and so understand them. It may be that the militarisation of states means that we are prepared for malicious risks. It may be that we under-prepare for large risks as they have fewer small-scale analogues.

Not sure how useful following that kind of thinking is, but it could potentially help with prioritisation. I would be interested to hear if the authors have thought this through.


*(The authors break down risks into different categories: Natural Risk / Accident Risk / Malicious Risk / Latent Risk / Commons Risk, and Leverage Risk / Cascading Risk / Large Risk, and capability risk / habitat risk / ubiquity risk / vector risk / agency risk).

Comment by weeatquince_duplicate0-37104097316182916 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T18:34:43.953Z · score: 7 (4 votes) · EA · GW

Optimiser's curse / regression to the mean

On how trying to optimise can lead you to make mistakes
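As a toy illustration (my own hypothetical sketch, not from the original comment): if you pick whichever option has the highest noisy cost-effectiveness estimate, that estimate will systematically overstate the chosen option's true value, even though each individual estimate is unbiased.

```python
import random

random.seed(0)

def optimisers_curse_demo(n_options=20, noise=1.0, trials=2000):
    """Average gap between the winning option's estimate and its true value."""
    gap = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_options)]
        # Each estimate is the true value plus independent, unbiased noise.
        estimates = [v + random.gauss(0, noise) for v in true_values]
        best = max(range(n_options), key=lambda i: estimates[i])
        gap += estimates[best] - true_values[best]
    return gap / trials

# The gap is reliably positive: selecting the top estimate also
# selects for favourable noise, so the winner looks better than it is.
print(optimisers_curse_demo())
```

The intuition: maximising over noisy estimates conditions on the noise having been favourable, which is exactly why top-ranked interventions tend to regress toward the mean on closer inspection.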

Comment by weeatquince_duplicate0-37104097316182916 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T18:30:03.941Z · score: 7 (4 votes) · EA · GW

Knightian uncertainty / deep uncertainty

a lack of any quantifiable knowledge about some possible occurrence

This means any situation where uncertainty is so high that it is very hard / impossible / foolish to quantify the outcomes.

To understand this it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).

The process for making decisions under uncertainty may be very different from the process for making decisions under risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.


Why this matters

This could drastically change the causes EAs care about and the approaches they take.

This could alter how we judge the value of taking action that affects the future.

This could mean that the "rationalist"/LessWrong approach of "shut up and multiply" for making decisions might not be correct.

For example, this could shift decisions away from a naive expected value based on outcomes and probabilities and towards favouring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.

(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty but I think it is a thing I would like to understand better.)
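To make the contrast concrete, here is a toy sketch (entirely made-up payoffs and probabilities, purely illustrative): a naive expected-value rule and a robustness-based rule can recommend different actions precisely when the probability estimates are not trustworthy.

```python
# Payoffs of two actions under three scenarios (made-up numbers).
probs = [0.6, 0.3, 0.1]  # rough guesses -- exactly what deep uncertainty distrusts
payoffs = {
    "fragile_bet": [100, 60, -300],  # great if the model is right, awful otherwise
    "robust_plan": [40, 35, 20],     # decent across all scenarios
}

def expected_value(action):
    return sum(p * v for p, v in zip(probs, payoffs[action]))

def worst_case(action):
    return min(payoffs[action])

# "Shut up and multiply" favours the fragile bet (EV 48.0 vs 36.5)...
ev_choice = max(payoffs, key=expected_value)
# ...while a maximin rule, which ignores the shaky probabilities entirely,
# favours the option that is robust to failure modes (-300 vs 20 worst case).
robust_choice = max(payoffs, key=worst_case)
print(ev_choice, robust_choice)
```

Maximin is only one of several rules used for decision making under deep uncertainty; the point is just that which rule you adopt can flip the recommendation.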


See also

The difference between "risk" and "uncertainty". "Black swan events". Etc

Comment by weeatquince_duplicate0-37104097316182916 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T14:04:56.526Z · score: 1 (7 votes) · EA · GW

Section 9.3 here: https://www.nickbostrom.com/existential/risks.html

(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people's current positions on these views.)


Comment by weeatquince_duplicate0-37104097316182916 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T09:56:10.497Z · score: 22 (17 votes) · EA · GW

Hi,

I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.

To try and make this feedback more useful and help the debate here are some very quick attempts to steelman some of the original arguments:

  • Historically, arguments that justify horrendous activities have a high frequency of being utopia-based (appealing to possible but uncertain future utopias). The long-termist astronomical waste argument has this feature and so we should be wary of it.
  • If an argument leads to some ridiculous / repugnant conclusions that most people would object to, then it is worth being wary of that argument. The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes). We should be wary of following and promoting such arguments and philosophers.
  • There are problems with taking a simple expected value approach to decision making under uncertainty. Eg Pascal's mugging problems. [For more on this look up robust decision making under deep uncertainty or knightian uncertainty]
  • The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks and (given ethical uncertainty) this makes them not great arguments
  • Etc
  • The above are not arguments against working on x-risks etc (and the original poster does himself work on x-risk issues) but are against overly relying on, using and promoting the astronomical waste type arguments for long-termism.

Comment by weeatquince_duplicate0-37104097316182916 on 8 things I believe about climate change · 2019-12-29T14:16:50.945Z · score: 8 (5 votes) · EA · GW

Having looked at your sources I am not sure they justify the conclusions.


In particular:

  • Your sources for point 1 seem to ignore the >10% chance that the world warms significantly more than expected (they generally look at mortality in the business-as-usual case).
  • Your sources for point 2 focus on whether climate change is truly existential, but do seem to point to a possibility of it being a global catastrophe. (Point 2 appears to be somewhat crucial; the other points, especially 1, 4, 5 and 7, depend on it.)

    It seems plausible from looking at your sources that there are tail risks of extreme warming that could lead to huge global catastrophe (maybe not quite at your cut-off of a 10% chance of 10% mortality, but huge).

    Eg Halstead:
    "On current pledges and promises, we’ll probably end up at around 700ppm by 2100 and increasing well beyond that."
    "at 700ppm, ... there is an 11% chance of an eventual >6 degrees of warming"
    "at 1120ppm, there is between a 10% and 34% chance of >9 degrees of warming"
    "Heat stress ... seems like it would be a serious problem for warming >6 degrees for large portions of the planet ... With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed"
    "6 degrees would drastically change the face of the globe, with multi-metre sea level rises, massive coastal flooding, and the uninhabitability of the tropics." "10 degrees ... would be extremely bad"

Overall I expect points 1 and 2 are quite possibly correct, but, having looked through your sources and concluded that they do not justify the points very well, I would hold these points with low confidence.


Also on points 4 and 7: I think they depend on what kind of skills and power you have and are using. E.g. if you are long-term focused and have political influence, climate issues might be a better thing to focus on than AI safety risks, which are not really on the political agenda much.

Comment by weeatquince_duplicate0-37104097316182916 on Where are you donating this year and why – in 2019? Open thread for discussion. · 2019-12-24T08:42:19.453Z · score: 3 (2 votes) · EA · GW

Hi Michael, That all sounds really sensible and well thought out. Good job :-)

Comment by weeatquince_duplicate0-37104097316182916 on Where are you donating this year and why – in 2019? Open thread for discussion. · 2019-12-16T19:03:51.954Z · score: 8 (5 votes) · EA · GW

Hi Michael,

First year donating is super exciting!!

Not an expert but some feedback that jumps to mind is:

  • Overall this looks like a great donation plan.
  • Giving to the Animal Welfare Fund or ACE's top recommended charities seems like a pretty solid surefire bet / way to outsource donations.
  • I am slightly less certain about donating directly to RP or CE unless you have a reason to think the Animal Welfare Fund is not funding them enough (which does happen), but either way you are following the donations of the Animal Welfare Fund, so there is really not much in it, and it is useful sometimes to donate and see how the orgs are using your money.
  • One extra thing to consider is donating to the charities being created by Charity Entrepreneurship (for example https://forum.effectivealtruism.org/posts/iMofrSc86iSR7EiAG/introducing-fish-welfare-initiative-1 ). I can't speak for CE, but I think CE believes donations to its new charities are a bit more urgent than donations directly to CE. Maybe one of the fish people can say if they are looking for funds.
  • I endorse solving collective action problems that benefit you and other donors. You are probably better placed to evaluate RC Forward than us non-Canadians, and if RC Forward is useful to help you donate more then supporting it with at least some of your donation makes sense.

Hope that helps,

Sam

Comment by weeatquince_duplicate0-37104097316182916 on Managing risk in the EA policy space · 2019-12-16T18:42:59.895Z · score: 3 (2 votes) · EA · GW

Hi, ditto what Khorton said. I do not have a background that has led me to be able to opine wisely on this.

My initial intuition is: I am unconvinced by this. From a policy perspective you make a reasonable case that more immigration to the US could be very good, but unless you had more certainty about this (more research, evidence, case studies, etc.), I would worry about the cost of actively pushing a US vs China message.

But I have no expertise in US politics so I would not put much faith in my judgment.

Comment by weeatquince_duplicate0-37104097316182916 on Where are you donating this year and why – in 2019? Open thread for discussion. · 2019-12-11T08:27:33.174Z · score: 10 (7 votes) · EA · GW

Giving What We Can's impact reports (when I last read them) suggested they had raised £6 for effective charities per £ spent, using pessimistic assumptions, or £60 per £ as a best guess.

The Life You Can Save raised $11 per $ spent for effective charities

Raising for Effective giving has raised $24 per $ spent, for effective charities.

EA London (which does not do much fundraising) roughly raised £2.5 per £.

RC Forward moves £7 per £.

These are all post hoc analyses of money moved to date, not estimates of future impact. The quality of the evidence varies between the different programs and you can look into it. As well as moving money, I believe all of these ALSO purport to have improved the effectiveness of the donations given.

If it is helpful to have a baseline / prior against which to judge these successes, note that the standard fundraising ratio in the charity sector is that charities raise £4 per £ spent on fundraising.

Comment by weeatquince_duplicate0-37104097316182916 on Where are you donating this year and why – in 2019? Open thread for discussion. · 2019-12-11T01:04:55.087Z · score: 44 (19 votes) · EA · GW

Donation: £5,000

Cause: EA meta (+ global poverty)

Main donation: £3000 to Happier Lives Institute (HLI)

Other donations: £500 to each of EA meta Fund, Let’s Fund, Rethink Priorities, Against Malaria Foundation (AMF)


Why EA Meta
Leverage: It seems empirically evident to me that meta EA activity is influencing both the amount and the direction of funding at a ratio of at least £10 influenced per £1 inputted.
Evidenced: I was skeptical of what EA meta work could achieve but over the last few years this kind of giving has gone from being an idea to having demonstrated impact.
Underfunded: The EA Meta Fund has received less than other EA Funds, and in its last pay-out round it only filled about 15% of the funding gaps of the organisations it was looking to support.
Collective action: If everyone in EA funded meta work rather than pet causes more money would go to good places (or we may learn we were wrong about our pet causes).


Why EA meta research not outreach
I think we are still learning about how to do good and getting that right is more important than getting more money moved. This has had most impact to date and I am very unconvinced that we are getting close to diminishing marginal returns on this.


Why £3000 to HLI
Happier Lives Institute are doing innovative and useful new research on subjective wellbeing data that I believe could significantly change how people in the EA community think about what causes are most important. I expect this donation combined with a donation from a collaborator can fill their funding gap at least until August 2020. I may donate more at a later date.


Why £500 each EA Meta Fund, Let’s Fund and Rethink Priorities
I am not giving everything to HLI partly because I think HLI’s immediate funding gap can be filled, partly because I want to influence and keep up with these other projects, and partly just poor heuristics on my part. Note that my view that HLI is better than any of these 3 other donation opportunities is very weakly held (although I expect they have a more pressing funding gap).
These 3 projects are the other EA meta (research) projects that I think are worth supporting. I am splitting between them because I am not sure it is worth the time / energy to evaluate and compare them all given the amount of money I am giving. I have not included GPI because I have not been as impressed by the immediate usefulness of their research or research agenda.
On Let’s Fund: They are not actually asking for money but they are doing good work and always seem short of funds so will try to offer them funds. If they don’t take it will split the money between other projects.


Why £500 to AMF
I am not giving everything to meta, partly because I want to force myself to keep thinking about what the most important non-meta cause is, and partly because I think that if I give the amount I would likely have given to non-meta causes had I not come across EA / GWWC, then I help avoid the meta trap.
Against Malaria Foundation are an excellent charity, consistently top-rated by GiveWell. (I am giving to AMF rather than to GiveWell to distribute as I am not totally convinced that deworming or GiveDirectly are as good as AMF.)
I might alternatively give to the EA Animal Fund – I need to think about this more.


Key uncertainties
Is it silly to split my donations this much?
Have I done enough due diligence of HLI?
AMF or the EA Animal Fund?

Comment by weeatquince_duplicate0-37104097316182916 on Some personal thoughts on EA and systemic change · 2019-10-05T17:40:18.782Z · score: 42 (27 votes) · EA · GW

In one key way this post completely misses the point.

The post makes a number of very good points about systemic change, but bases all of them on financial cost-effectiveness estimates. This is embedded in the language throughout, discussing: options that "outperformed GiveWell style charities", the "cost ... per marginal vote", lessons for "large-scale spending" or for a "small donor", etc.

I think the EA community has neglected systemic change in exactly this manner. Money is not the only thing that can be leveraged to make change in the world (and in some cases money is not a thing people can give).
I think this is some part of what people are pointing to when they criticise EA.

To be constructive, I think we should rethink cause prioritisation, but not from a financial point of view. Eg:
- If you have political power how best to spend it?
- If you have a public voice how best to use it?
- If you can organise activism what should it focus on?

(PS. Happy to support with money or time people doing this kind of research)

I think we could get noticeably different results. I think things like financial stability (hard to donate to but very important) might show up as more of a priority in the EA space if we start looking at things this way.

I think the EA community currently has a limited amount to say to anyone with power. For example:
• I met the civil servant with oversight of the UK's £8bn international development spending, who seemed interested in EA but did not feel it was relevant to them – I think they were correct; I had nothing to say that they didn’t already know.
• Another case is an EA I know who does not have a huge amount to donate but has lots of experience in political organising and activism; I doubt the EA community provides them much useful direction.

It is not that the EA community does none of this, just that we are slow. It feels like it took 80,000 Hours a while to start recommending policy/politics as a career path, and it is still unclear what people should do once in positions of power. (HIPE.org.uk is doing some research on this for Government careers.)

--
Overall a very interesting post. Thank you for posting.

I note you mention a "relative gap in long-termist and high-risk global poverty work". I think this is interesting. I would love it if anyone has the time to do some back-of-the-envelope evaluations of international development governance reform organisations (like Transparency International).


Comment by weeatquince_duplicate0-37104097316182916 on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-10-05T16:44:53.316Z · score: 10 (3 votes) · EA · GW

Tl;dr: This assumes a pure rate of time discounting. I am curious how well your analysis works for anyone who does not think that we should discount future harms simply by virtue of their being in the future.

--
1.
THIS IS SO GOOD
This is super good research and super detailed and I am hugely impressed and hope many many people donate to Let's Fund and support you with this kind of research!!!

--
2.
LET'S BE MORE EXPLICIT ABOUT THE ETHICAL ASSUMPTIONS MADE

I enjoyed reading Appendix 3
• I agree with Pindyck that models of the social cost of carbon (SCC) require a host of underlying ethical decisions and so can be highly misleading.
• I don’t however agree with Pindyck that there is no alternative so we might as well ignore this problem

At least for the purposes of making decisions within the EA community, I think we can apply models but be explicit about what ethical assumptions have been made and how they affect the model's conclusions. Many people on this forum have a decent understanding of their ethical views and how those affect decisions, so being more explicit would support good cause prioritisation decisions by donors and others.

Of course this is holding people on this forum to a higher standard of rigour than professional academic economists reach, so it should be seen as a nice-to-have rather than a default, but let's see what we can do...

--
3.
DISCOUNTING THE FUTURE, AND OTHER ASSUMPTIONS

3.1
My (very rough) understanding of climate analysis is that the SCC is very highly dependent on the discount rate.

(Appendix 3 makes this point. Also the paper you link to on SCC per country says "Discounting assumptions have consistently been one of the biggest determinants of differences between estimations of the social cost of carbon").

The paper you draw your evidence from seems to use a pure rate of time discounting of 1-2%. This effectively assumes that future people matter less.
I think many readers of this forum do not believe that future people matter less than people today.

I do not know how much this matters for the analysis. A high social cost of carbon seems, from the numbers in your article, to make climate interventions of the same order of magnitude but slightly less effective than cash transfers.
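To give a feel for how strongly this assumption bites, here is a minimal sketch with made-up numbers (mine, not figures from the post or the paper) of how a pure rate of time preference shrinks the present value of a fixed future harm:

```python
# Illustrative only – hypothetical numbers, not from the post. Shows how a
# pure rate of time preference shrinks the present value of £1,000 of
# climate damage occurring 100 years from now.
def present_value(damage: float, pure_time_discount: float, years: int) -> float:
    return damage / (1 + pure_time_discount) ** years

damage, years = 1000, 100
for rate in [0.00, 0.01, 0.02]:
    print(f"{rate:.0%}: £{present_value(damage, rate, years):.0f}")
# 0%: £1000
# 1%: £370
# 2%: £138
```

So moving from a 0% to a 2% pure time discount cuts the weight given to century-out harms by a factor of about seven, which is easily enough to reorder cause comparisons of this kind.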

3.2
I also understand that estimates of the SCC are also dependent on the calculation of worst-case tail-end effects, and there is some concern among people in the x-risk research space that small chances of very catastrophic effects are ignored in climate economics. I do not know how much this matters either.

3.3
I could also imagine that many people (especially negative-leaning utilitarians) are more concerned with stopping the damage caused by climate change than impressed by the benefits of cash transfers.

SO:
I do not have answers to what effects these things have on the analysis. I would love to get your views on this.

Thank you for your work on this!!!



Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-03T18:30:32.687Z · score: 4 (3 votes) · EA · GW

Hi,

If I had to guess (and I feel uncomfortable doing so, as I am not really going on anything here but my gut) I would say that at an entry level it is all pretty similar, but that an entry-level job in the civil service is likely slightly higher impact than an entry-level job as an MP's researcher, though the variation between jobs and MPs is likely more important. I think your personal expected value is dominated by the jobs you get later in your career rather than at entry level, so this is small on the scale of your career.

Value of information to the broader EA community is good, as are any other low-hanging-fruit benefits gained by being an early EA mover into a space.

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:40:13.278Z · score: 3 (2 votes) · EA · GW

Hi, I think the 80K advice is still fairly applicable (also, I don’t think it would be a second opinion, as my views were taken into account in that 80K article).

I would probably put the diplomatic fast stream on par with the generalist one (although I am not very sure about this).

Also, do not forget that you can enter a job directly: if you have a bit of experience (even a year or two), getting an SEO job (or higher) may well be preferable to the Fast Stream.

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:39:07.582Z · score: 2 (1 votes) · EA · GW

This image displays for me. I am not sure what I need to do to make it display properly for you or what has gone wrong. Can someone admin-y investigate?

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:32:13.552Z · score: 6 (4 votes) · EA · GW

There are maybe 40 people in the EA community currently in the UK civil service and none currently in politics. I think most people I know would agree that it is comparatively more useful and more neglected for EAs to move towards politics.

I also think it is generally more impactful to do well in politics than to do well in the civil service, as ultimately politicians make the decisions. Although I do know EAs who would disagree with this and point out that people do not hold positions of political power for very long.

I think politics is more challenging: it is more competitive to do very well in. Also, if you want to go into politics you need to really commit to that path and spend your time engaged in party politics, whereas I think it is easier to move in and out of the civil service.

Comment by weeatquince_duplicate0-37104097316182916 on Campaign finance reform as an EA priority? · 2019-08-30T12:54:06.983Z · score: 8 (4 votes) · EA · GW

I have been thinking a fair bit about improving institutional decision making practices. I buy the argument that if you fix systems you can make a better world and that making systems that can make good decisions is super important.

There are many things you might want to change to make systems work better. [1]

I am outside the US and really do not understand the US system and certainly do not know of any good analysis on this topic and any comments should be taken with that in mind, but my weak outside view is that campaign financing is the biggest issue with US politics.

As such this seems to me to be plausibly the most important thing for EA folk to be working on in the world today. I am happy to put my money where my mouth is and support (talk to, low level fund, etc) people to do an "EA-style analysis of US campaign finance reform".


[1] https://forum.effectivealtruism.org/posts/cpJRB7thJpESTquBK/introducing-gpi-s-new-research-agenda#Zy8kTJfGrY9z7HRYH

Comment by weeatquince_duplicate0-37104097316182916 on AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. · 2019-08-27T14:23:00.552Z · score: 6 (3 votes) · EA · GW

Thank you for the useful feedback: Corrected!

Comment by weeatquince_duplicate0-37104097316182916 on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-20T22:16:32.433Z · score: 20 (11 votes) · EA · GW

DOUBLE COUNTING

Similar to not costing others work, you can end up in situations where the same impact is counted multiple times across all the charities involved, giving an inflated picture of the total impact.

Eg. if Effective Altruism (EA) London runs an event and this leads to an individual signing the Giving What We Can (GWWC) pledge and donating more to charity, then EA London, GWWC and the individual may each take 100% of the credit in their impact measurement.
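The arithmetic of the problem can be sketched as follows (hypothetical numbers, purely to illustrate the mechanism):

```python
# Hypothetical numbers: one extra £1,000/year pledge, but three actors each
# record 100% of it, so the summed reported impact triple-counts reality.
actual_new_donations = 1000

credit_claimed = {"EA London": 1000, "GWWC": 1000, "Individual": 1000}
total_reported = sum(credit_claimed.values())

print(total_reported)                         # 3000
print(total_reported / actual_new_donations)  # 3.0 (impact inflated 3x)
```

Each individual claim can be defensible counterfactually, yet adding them up overstates the movement's total impact threefold.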

Comment by weeatquince_duplicate0-37104097316182916 on Age-Weighted Voting · 2019-08-01T23:03:35.306Z · score: 4 (3 votes) · EA · GW

Also I do plan to write this up as a top level post soon

Comment by weeatquince_duplicate0-37104097316182916 on Age-Weighted Voting · 2019-08-01T22:31:41.005Z · score: 41 (13 votes) · EA · GW

It is an interesting suggestion and I had not come across the idea before and it is great to have people thinking of new innovative policy ideas. I agree that this idea is worth investigating.

I think my main point to add is just to set out the wider context. I think it is worth people who are interested in this being aware that there is already a vast array of tried and tested policy solutions that are known to encourage more long-term thinking in governments. I would lean towards the view that almost all of the ideas I list below have very strong evidence of working well, would be much easier to push for than age-weighted voting, and would have a bigger effect size than age-weighted voting.

Here's the list (examples of evidence that each helps are in brackets):

* Longer election cycles (UK compared to Aus)
* A non-democratic second house (UK House of Lords)
* Having a permanent neutral civil service (as in UK)
* An explicit statement of policy intent setting out a consistent cross-government view that policy makers should think long-term.
* A formal guide to best practice on discounting or on how to make policy that balances the needs of present and future generations. (UK Treasury Green Book, but more long term focused)
* An independent Office for Future Generations, or similar, with a responsibility to ensure that Government is acting in a long term manner. (as in Wales)
* Independent government oversight bodies, (UKs National Audit Office, but more long term focused)
* Various other combinations of technocracy and democracy, where details are left to experts. (UK's Bank of England, Infrastructure Commission, etc, etc)
* A duty on Ministers to consider the long term. (as in Wales)
* Horizon scanning and foresight skills, support, tools and training brought into government (UK Gov Office for Science).
* Risk management skills, support, tools and training brought into government (this must happen somewhere right?).
* Good connections between academia and science and government. (UK Open Innovation Team)
* A government body that can support and facilitate others in government with long term planning. (UK Gov Office for Science, but ideally more long term focused).
* Transparency of long term thinking. Through publication of statistics, impact assessments, etc (Eg. UK Office for National Statistics)
* Additional democratic oversight of long term issues (UK parliamentary committees)
* Legislatively binding long term targets (UKs climate change laws)
* Rules forcing Ministers to stay in position longer (untested to my knowledge)
* Being a dictatorship (China; it does work, although I don’t recommend it)


I hope to find time to do more work to collate suggestions and the evidence for them and to do a thorough literature review
(if anyone wants to volunteer to help then get in touch). My notes are at: https://docs.google.com/document/d/1KGLc_6bKhi5ClZPGBeEQIDF1cC4Dy8mo/edit#heading=h.mefn6dbmnz2 See also:
http://researchbriefings.files.parliament.uk/documents/LLN-2019-0076/LLN-2019-0076.pdf


As an aside, I have a personal little bugbear with people focusing on the voting system when they try to think about how to make policy work. It is a tiny, tiny part of the system, and one where evidence of how to do it better is often minimal and tractability of change is low. I have written about this here:
https://forum.effectivealtruism.org/posts/cpJRB7thJpESTquBK/introducing-gpi-s-new-research-agenda#Zy8kTJfGrY9z7HRYH

Also, my top tip for anyone thinking about tractable policy options is to start by asking: do we already know, from existing policy best practice, how to make significant steps towards solving this problem? (I think in this case we do.)

Comment by weeatquince_duplicate0-37104097316182916 on GCRI Call for Advisees and Collaborators · 2019-06-05T22:09:17.472Z · score: 6 (4 votes) · EA · GW

Hi, I'm curious, what are the main aims, expectations and things you hope will come from this call out? Cheers

Comment by weeatquince_duplicate0-37104097316182916 on Jade Leung: Why Companies Should be Leading on AI Governance · 2019-05-17T11:37:19.772Z · score: 9 (9 votes) · EA · GW

Hi Jade. I disagree with you. I think you are making a straw man of "regulation" and ignoring what modern best practice regulation actually looks like, whilst painting a rosy picture of industry led governance practice.

Regulation doesn't need to be a whole bunch of strict rules that limit corporate actors. It can (in theory) be a set of high-level ethical principles set by society and by government, which then defers to experts with industry and policy backgrounds to set more granular rules.

These granular rules can be strict rules that limit certain actions, or can be 'outcome focused regulation' that allows industry to do what it wants as long as it is able to demonstrate that it has taken suitable safety precautions, or can involve assigning legal responsibility to key senior industry actors to help align their incentives. (Good UK examples include the HFEA and the ONR.)

This is not to say that industry cannot or should not take a lead on governance issues, but that Governments can play a role of similar importance too.

Comment by weeatquince_duplicate0-37104097316182916 on Latest EA Updates for April 2019 · 2019-05-12T22:15:06.741Z · score: 9 (3 votes) · EA · GW

David. This is great.

Your newsletters (as well as the updates) also have a short story on what one person in the EA community is doing to make the world better. Why not include those here too?

Comment by weeatquince_duplicate0-37104097316182916 on How do we check for flaws in Effective Altruism? · 2019-05-06T21:18:06.193Z · score: 7 (4 votes) · EA · GW

I very much like the idea of an independent impact auditor for EA orgs.

I would consider funding or otherwise supporting such a project – anyone working on this, get in touch...

One solution that happens already is radical transparency.

GiveWell and 80,000 Hours both publicly write about their mistakes. GiveWell have in the past posted vast amounts of their background working online. This level of transparency is laudable.

Comment by weeatquince_duplicate0-37104097316182916 on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-06T16:16:46.989Z · score: 4 (4 votes) · EA · GW

There is a very obvious upside to sleeping less: when you are not asleep you are awake and when you are awake you can do stuff.

On a very quick glance, the economic analysis referenced above (and the quotes from Why Sleep Matters) seems to ignore this. If, as Khorton says, a person is missing sleep to raise kids or work a second job, then this benefits society.

This omission makes me very sceptical of the analysis on this topic.

Comment by weeatquince_duplicate0-37104097316182916 on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-04-30T19:04:38.398Z · score: 21 (10 votes) · EA · GW

Just to note that there's been some discussion on this on Facebook: https://m.facebook.com/groups/437177563005273?view=permalink&id=2251872561535755

Comment by weeatquince_duplicate0-37104097316182916 on Announcing EA Hub 2.0 · 2019-04-13T13:33:58.579Z · score: 8 (3 votes) · EA · GW

This is amazing. Great work by everyone who contributed. I was thinking that a possible future feature (although perhaps not a priority) would be integration with the EA Funds donation tracking and maybe LinkedIn profile data.

Comment by weeatquince_duplicate0-37104097316182916 on Can my filmmaking/songwriting skills be used more effectively in EA? · 2019-04-09T14:01:53.394Z · score: 9 (6 votes) · EA · GW

Your videos are great.

I am sure there is space for content creators to be having a powerful impact on the world. Not entirely sure how but I did want to flag that the Long Term Future EA Fund has just given a $39,000 grant to a video producer: https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions .

Maybe get in touch or have a look into what was successful there (I get the impression that they found an important area where there was otherwise a lack of good video content).

Comment by weeatquince_duplicate0-37104097316182916 on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-21T18:36:40.772Z · score: 3 (2 votes) · EA · GW

Awesome post.

Suggestion: I have found in-person feedback to be useful alongside surveys. I suggest making a bit of effort to talk to people in person, especially if it is friends you see anyway, and including this data in a final impact estimate.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-21T11:43:01.299Z · score: 34 (14 votes) · EA · GW

There are maybe 100+ other steps in the policy process that are just as important. In rough chronological order I started listing some of them below (I got bored part way through and stopped at around 40 points).

I have aimed for all of these issues to be at a roughly similar order of magnitude of importance. The scale of these issues will vary from country to country, and the tractability of trying to change them will vary with time and from individual to individual.

Overall I would say that voting reform is not obviously more or less important than the other 100+ things that could be on this list (although I guess it is often likely to be somewhere in the top 50% of issues). There is a lot more uncertainty about what the best voting mechanisms look like than about many of the other issues on the list. It is also an issue that may be hard to change compared to some of the others.

Either way voting reform is a tiny part of an incredibly long process, a process with some huge areas for improvements in other parts.

SETTING BOUNDARIES

  • constitution and human rights and setting remits of political powers to change fundamental structures of country
  • devolution and setting remits of central political powers verses local political bodies
  • term limits

CHOOSING POLITICIANS

  • electoral commission body setting or adjusting borders of voting areas / constituencies
  • initial policy research by potential candidates (often with very limited resources)
  • manifesto writing (this is hugely important to set the agenda and hard to change)
  • public / parties choosing candidates (often a lot of internal party squabbling behind the scenes)
  • campaign fundraising (maybe undue influences)
  • campaigning and information spreading (maybe issues with false information)
  • tackling voter apathy / engagement
  • Voting mechanism
  • coalition forming (often very untransparent)
  • government/leader assigns topic areas to ministers / seniors (very political, evidence that understanding a topic is inversely proportional to how long a minister will work on that topic)

CIVIL SERVICE STAFFING

  • hiring staff into government (hiring processes, lack of expertise, diversity issues)
  • how staff in government are managed (values, team building, rewards, progression, diversity)
  • how staff in government are trained (feedback mechanisms, training)

ACCOUNTABILITY

  • splitting out areas where political leadership is needed and areas where technocratic leadership is needed
  • designing clear mechanisms of accountability to topics so that politicians and civil servants are aware of what their responsibilities are and can be held to account for their actions (this is super important)
  • ensuring political representation so each individual has direct access to a politician who is accountable for their concerns
  • putting in place systems that allow changes to the system if an accountability mechanisms is not working
  • ensuring accountability for unknown unknown issues that may arise
  • how poor performance of political and civil staff is addressed (poor performance procedures, whistleblowing)
  • how corruption is rooted out and addressed (yes there is corruption in developed countries)
  • mechanisms to allow parties / populations to kick out bad leaders if needed
  • Ensuring mechanisms for cross party dialogue and that partisan-ism of politics does not lead to distortions of truth

AGENDA SETTING AND INITIAL RESEARCH

  • carrying out research to understand what the policy problems are (often unclear how to do this)
  • understanding what the population wants (public often ignored, need good procedures for information gathering, public consultation, etc)

POLICY DEVELOPMENT

  • Development of policy options to address problems
  • Mechanisms for Cost Benefit Analysis and Impact Assessments to decide best policy options
  • access to expertise advice and best practice (lack of communication between academia and policy)
  • measuring impact of a policy proposal once in place (ensuring that mechanisms to measure impact are initiated at the very start of the policy implementation)
  • actually using the information gathered
  • how politicians are allowed to change their mind given new evidence (updating is often seen as weakness)
  • mechanisms to ensure issues that are not politically immediately necessary are tackled (lack of long term thinking)

LEGISLATIVE PROCESS

POLICY IMPLEMENTATION

POLICY REVIEW

PUBLIC COMMUNICATION

RISK MANAGEMENT

GENERAL

  • flexibility to deal with shocks of every step of the above process (often lacking)
  • transparency of every step of the above process (often lacking)

Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-03-06T08:59:39.665Z · score: 2 (1 votes) · EA · GW

Another thing to consider is that, given climate modelling is so imprecise and regularly flawed, our models may be wrong and the risk significantly different from predicted.

(Similar to some of Toby's stuff on the Large Hadron Collider risks: http://blog.practicalethics.ox.ac.uk/2008/04/these-are-not-the-probabilities-you-are-looking-for/)

This could go both ways.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-06T08:41:37.900Z · score: 23 (9 votes) · EA · GW

This is really really impressive. An amazing collection of really important questions.

POSITIVES. I like the fact that you intend to research:
* Institutional actors (2.8). Significant changes to the world are likely to come through institutional actors, and the EA community has largely ignored them to date. The existing research has focused so much on the benefits of marginal donations (or marginal research) that our views on cause prioritisation cannot be easily applied to states. As someone in EA in the business of influencing states, this is a really problematic oversight by the community to date, and one we should be looking to fix as soon as possible.
* Decision-theoretic issues (2.1)
* The use of discount rates. This is practically useful for decision makers.

OMISSIONS. I did however note a few things that I would have expected to be included that are not mentioned in this research agenda. In particular there was no discussion of:
* Useful models for thinking and talking about cause prioritisation. In particular, the scale, neglectedness and tractability framework is often used and often criticised. What other models can or should be used by the EA community?
* Social change. Within section 1 there is some discussion of broad versus narrow future-focused interventions, so I would have expected a similar discussion in section 2 on social change interventions versus targeted interventions in general. This was not mentioned.
* (Which risks to the future are most concerning. Although I assume this is because those topics are being covered by others such as FHI.)

CONCERN
As I said above, I think the questions within 2.8 are really important for EA to focus on. I hope that the fact it is low on the list does not mean it is not prioritised.
I also note that there is a sub-question in 2.8 on "what is the best feasible voting system". I think this issue comes up too much and is often a distraction. It feels like a minor sub-part of the question "what is the optimal institution design", which people gravitate to because it is the most visible part of many political systems, but it is really unlikely to be the thing on the margin that most needs improving.

I hope that helps, Sam

Comment by weeatquince_duplicate0-37104097316182916 on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T08:09:54.487Z · score: 28 (15 votes) · EA · GW

CEA run the EA community fund to provide financial support to EA community group leaders.

The key metric that CEA uses for evaluating the success of the groups they fund is the number of people from each local group who reach the interview stage for high-impact jobs, which largely means jobs within EA organisations. Bonus points are available if they get the job.

This information feels like a relevant piece of the puzzle for anyone thinking through these issues. It could be that, in hindsight, CEA pushing chapter organisers to push people to focus on jobs in EA organisations might not be the best strategy.

Comment by weeatquince_duplicate0-37104097316182916 on Tactical models to improve institutional decision-making · 2019-01-13T23:43:49.160Z · score: 3 (2 votes) · EA · GW

I found this article unclear about what you were talking about when you say "improving institutional decision making" (in policy). I think we can break this down into two very different things.

A: Improving the decision-making processes and systems of accountability that policy institutions use, so that these institutions will more generally be better decision makers. (This is what I have always meant and understood by the term "improving institutional decision making", and what Jess talks about in the post you link to.)

B: Having influence in a specific situation on the policy making process. (This is basically what people tend to call "lobbying" or sometimes "campaigning".)

I felt that the DFID story and the three models were all focused on B: lobbying. The models were useful for thinking about how to do B well (assuming you know better than the policy makers what policy should be made). Theoretical advice on lobbying is a nice thing to have* if you are in the field (so thank you for writing them up; I may give them some thought in my upcoming work). And if you are trying to change A, it would be useful to understand how to do B.

The models were not very useful for advising on how to do A: improving how institutions work generally. And A is where I would say the value lies.

I think the main point is just about how easy the article was to read. I found the article very confusing as to whether you were talking about A or B at many points.

*Also, in general I think the field of lobbying is, as one might say, "more of an art than a science", and although a theoretical understanding of how it works is nice, it is not super useful compared to experience in the field in the specific country you are in.

Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T23:15:02.558Z · score: 3 (2 votes) · EA · GW

I would be curious about any views or research you may have done into geoengineering risk?

My understanding is that climate change is not itself an existential risk but that it may lead to other risks (such as war, which Peter Hurford mentions). One other risk is from geoengineering: humanity starts thinking it can control planetary temperatures and makes a mistake (or the technology is used maliciously), and that presents a risk.

Comment by weeatquince_duplicate0-37104097316182916 on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-13T23:08:37.853Z · score: 7 (6 votes) · EA · GW

Just to flag that the case for this is much much weaker outside the USA.

The matching limits for donations outside the US are much lower and you may also lose the tax benefits of donating.

See: https://docs.google.com/document/d/1hCCfv-1DI4FD5I5Pw5E3Ov4O46uzpIdI0QqRrWzvRsI/edit

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-09-22T07:20:02.197Z · score: 6 (2 votes) · EA · GW

Hi Kerry, thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked, so it is not perfect. Please correct anything I have misremembered.

~

1.

~ ~ Setting the scene ~ ~

  • CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research, and a community that continues to work out how to do the most good. (We both agreed on this.)
  • There is a difference between “cause impartiality”, as defined above, and “actual impartiality”: not having a view on which causes are most important. (There was some confusion, but we got through it.)
  • There is a difference between long-termism as a methodology, where one considers long-run future impacts (which CEA should 100% promote), and long-termism as a conclusion that the most important thing to focus on right now is shaping the long-term future of humanity. (I asserted this; I am not sure you expressed a view.)
  • A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to which causes are most important. They may have different skills to apply or different ethics (and we are far from solving ethics, if such a thing is possible). (I asserted this; I am not sure you expressed a view.)

~

2.

~ ~ Create space, build trust, express a view, do not be perfect ~ ~

  • The EA community needs to create the right kind of space so that people can reach their own decisions about which causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached, and more likely to come to the correct answer for themselves, and EA is more likely to come to correct answers overall. To do this they need good tools and resources, and to feel that the space they are in is neutral. This needs trust...

  • Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction, they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA’s resources and CEA should be taken very seriously.

  • Creating that space does not mean you cannot also express a view. You just want to distinguish when you are doing this. You can create cause prioritisation resources and tools that are truly neutral, but still have a separate section on what answers CEA staff reach, or what CEA’s answer is.

  • Perfection is not required as long as there is trust and the system is not breaking down.

  • For example, providing policy advice: I gave the example of a civil servant writing advice to a Government Minister on a controversial political issue. The first ~85% of this imaginary advice is an impartial summary of the background and the problem, followed by a series of suggested actions with evaluations of their impact. The final ~15% is a recommended action based on the civil servant’s view of the matter. The important thing here is that there generally is trust between the Minister and the Department that advice will be neutral, and that in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It does not need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust, that does not matter. And there is a recommendation, which the Minister can choose to follow or not. In many cases the Minister will follow the recommendation.

~

3.

~ ~ How this goes wrong ~ ~

  • Imagine someone who has identified cause X, which is super important, comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off, or is persuaded that the current EA cause is more important and forgets about cause X.

  • I mentioned some of the things that damage trust (see the foot of my previous comment).

  • You mentioned you had seen signs of tribalism in the EA community.

~

4.

~ ~ Conclusion ~ ~

  • You said that you saw more value in CEA creating a space that was “actual impartial” as opposed to “cause impartial” than you had done previously.

~

5.

~ ~ Addendum: Some thoughts on evidence ~ ~

Not discussed but I have some extra thoughts on evidence.

There are two areas of my life where much of what I have learned points towards the views above being true.

  • Coaching. In coaching you need to make sure the coachee feels that you are there to help them, not pursuing your own agenda (one that is different from theirs).

  • Policy. In policy making you need trust and neutrality between Minister and civil servant.

There is value in following perceived wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (e.g. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find it less useful.) Perhaps this deserves further study.

Also worth bearing in mind that there may be dissimilarities between what CEA does and the fields of coaching and policy.

Also worth flagging that the example of policy advice given above is somewhat artificial; some policy advice (especially where controversial) is like that, but much of it is just: “please approve action x”.

In conclusion, my views on this are based on very little evidence and a lot of gut feeling. My intuitions are strongly guided by my time doing coaching and giving policy advice.

Comment by weeatquince_duplicate0-37104097316182916 on Additional plans for the new EA Forum · 2018-09-16T01:08:48.125Z · score: 13 (13 votes) · EA · GW

Feature idea: if you co-write an article with someone, being able to post it as co-authors.

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-26T12:56:58.154Z · score: 3 (3 votes) · EA · GW

Hi Kerry, Some more thoughts prior to having a chat.

-

Is longtermism a cause?

Yes and no. The term is used in multiple ways.

A: Consideration of the long-term future.

It is a core part of cause prioritisation to avoid availability biases: to consider the plights of those we cannot so easily be aware of, such as animals, people in other countries and people in the future. As such, in my view, it is imperative that CEA and EA community leaders promote this.

B: The long-term cause area.

Some people will conclude that the optimal use of their limited resources is to put them towards shaping the far future. But not everyone, even after full rational consideration, will reach this view. Nor should we expect such unanimity of conclusions. As such, in my view, CEA and EA community leaders can recommend that people consider this cause area, but should not tell people this is the answer.

-

Threading the needle

I agree with the 6 points you make here.

(Although interestingly I personally do not have evidence that “area allegiance is operating as a kind of tribal signal in the movement currently”)

-

CEA and cause-impartiality

I think CEA should be careful about how it expresses a view. Doing this in the wrong way could make it look like CEA is not cause impartial or not representative.

My view is to give recommendations and tools but not answers. This is similar to how we would not expect 80K to have a view on what the best job is (as it depends on the individual and their skills and needs), but we would expect 80K to have recommendations and advice on how to choose.

I think this approach is also useful because:

  • People are more likely to trust decisions they reach through their own thinking rather than conclusions they are pushed towards.

  • It handles the fact that everyone is different. The advice or reasoning that works for one person may well not make sense for someone else.

I think (as Khorton says) it is perfectly reasonable for an organisation to not have a conclusion.

-

(One other thought: examples of actions by CEA or other movement-building organisations that would concern me include expressing certainty about an area (in internal policy or externally), basing impact measurement solely on a single cause area, hiring staff for cause-general roles based on their views of which causes are most important, attempting to push as many people as possible towards a specific cause area, etc.)