Comment by weeatquince_duplicate0-37104097316182916 on GCRI Call for Advisees and Collaborators · 2019-06-05T22:09:17.472Z · score: 6 (4 votes) · EA · GW

Hi, I'm curious, what are the main aims, expectations and things you hope will come from this call out? Cheers

Comment by weeatquince_duplicate0-37104097316182916 on Jade Leung: Why Companies Should be Leading on AI Governance · 2019-05-17T11:37:19.772Z · score: 9 (9 votes) · EA · GW

Hi Jade. I disagree with you. I think you are making a straw man of "regulation" and ignoring what modern best practice regulation actually looks like, whilst painting a rosy picture of industry led governance practice.

Regulation doesn't need to be a whole bunch of strict rules that limit corporate actors. It can (in theory) be a set of high level ethical principles set by society and by government who then defer to experts with industry and policy backgrounds to set more granular rules.

These granular rules can be strict rules that limit certain actions, or can be 'outcome focused regulation' that allows industry to do what it wants as long as it is able to demonstrate that it has taken suitable safety precautions, or can involve assigning legal responsibility to key senior industry actors to help align the incentives of those actors. (Good UK examples include the HFEA and the ONR.)

Not to say that industry cannot or should not take a lead in governance issues, but that Governments can play a role of similar importance too.

Comment by weeatquince_duplicate0-37104097316182916 on Latest EA Updates for April 2019 · 2019-05-12T22:15:06.741Z · score: 9 (3 votes) · EA · GW

David. This is great.

Your newsletters (as well as the updates) also have a short story on what one EA community person is doing to make the world better. Why not include those here too?

Comment by weeatquince_duplicate0-37104097316182916 on How do we check for flaws in Effective Altruism? · 2019-05-06T21:18:06.193Z · score: 7 (4 votes) · EA · GW

I very much like the idea of an independent impact auditor for EA orgs.

I would consider funding or otherwise supporting such a project, so anyone working on this, get in touch...

One solution that happens already is radical transparency.

GiveWell and 80,000 Hours both publicly write about their mistakes. GiveWell have in the past posted vast amounts of their background working online. This level of transparency is laudable.

Comment by weeatquince_duplicate0-37104097316182916 on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-06T16:16:46.989Z · score: 4 (4 votes) · EA · GW

There is a very obvious upside to sleeping less: when you are not asleep you are awake and when you are awake you can do stuff.

On a very quick glance the economic analysis referenced above (and the quotes from Why Sleep Matters) seems to ignore this. If, as Khorton says, a person is missing sleep to raise kids or work a second job, then this benefits society.

This omission makes me very sceptical of the analysis on this topic.

Comment by weeatquince_duplicate0-37104097316182916 on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-04-30T19:04:38.398Z · score: 21 (10 votes) · EA · GW

Just to note that there has been some discussion of this on Facebook.

Comment by weeatquince_duplicate0-37104097316182916 on Announcing EA Hub 2.0 · 2019-04-13T13:33:58.579Z · score: 8 (3 votes) · EA · GW

This is amazing. Great work by everyone who contributed. I was thinking that a possible future feature (although perhaps not a priority) would be integration with the EA Funds donation tracking and maybe LinkedIn profile data.

Comment by weeatquince_duplicate0-37104097316182916 on Can my filmmaking/songwriting skills be used more effectively in EA? · 2019-04-09T14:01:53.394Z · score: 9 (6 votes) · EA · GW

Your videos are great.

I am sure there is space for content creators to be having a powerful impact on the world. I am not entirely sure how, but I did want to flag that the Long Term Future EA Fund has just given a $39,000 grant to a video producer.

Maybe get in touch or look into what was successful there (I get the impression that they found an important area where there was otherwise a lack of good video content).

Comment by weeatquince_duplicate0-37104097316182916 on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-21T18:36:40.772Z · score: 3 (2 votes) · EA · GW

Awesome post.

Suggestion: I have found in-person feedback to be useful alongside surveys. I suggest making a bit of effort to talk to people in person, especially friends you see anyway, and incorporating this data into a final impact estimate.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-21T11:43:01.299Z · score: 15 (6 votes) · EA · GW

There are maybe 100+ other steps in the policy process that are just as important. In rough chronological order I started listing some of them below (I got bored part way through and stopped at around 40 points).

I have aimed for all of these issues to be at a roughly similar order of magnitude of importance. The scale of these issues will vary from country to country, and the tractability of trying to change them will vary with time and from individual to individual.

Overall I would say that voting reform is not obviously more or less important than the other 100+ things that could be on this list (although I guess it is often likely to be somewhere in the top 50% of issues). There is a lot more uncertainty about what the best voting mechanisms look like than about many of the other issues on the list. It is also an issue that may be hard to change compared to some of the others.

Either way voting reform is a tiny part of an incredibly long process, a process with some huge areas for improvements in other parts.


  • constitution and human rights and setting remits of political powers to change fundamental structures of country
  • devolution and setting remits of central political powers versus local political bodies
  • term limits


  • electoral commission body setting or adjusting borders of voting areas / constituencies
  • initial policy research by potential candidates (often with very limited resources)
  • manifesto writing (this is hugely important to set the agenda and hard to change)
  • public / parties choosing candidates (often a lot of internal party squabbling behind the scenes)
  • campaign fundraising (maybe undue influences)
  • campaigning and information spreading (maybe issues with false information)
  • tackling voter apathy / engagement
  • Voting mechanism
  • coalition forming (often very opaque)
  • government/leader assigns topic areas to ministers / seniors (very political, evidence that understanding a topic is inversely proportional to how long a minister will work on that topic)


  • hiring staff into government (hiring processes, lack of expertise, diversity issues)
  • how staff in government are managed (values, team building, rewards, progression, diversity)
  • how staff in government are trained (feedback mechanisms, training)


  • splitting out areas where political leadership is needed and areas where technocratic leadership is needed
  • designing clear mechanisms of accountability to topics so that politicians and civil servants are aware of what their responsibilities are and can be held to account for their actions (this is super important)
  • ensuring political representation so each individual has direct access to a politician who is accountable for their concerns
  • putting in place systems that allow changes to the system if an accountability mechanisms is not working
  • ensuring accountability for unknown unknown issues that may arise
  • how poor performance of political and civil staff is addressed (poor performance procedures, whistleblowing)
  • how corruption is rooted out and addressed (yes there is corruption in developed countries)
  • mechanisms to allow parties / populations to kick out bad leaders if needed
  • Ensuring mechanisms for cross party dialogue and that partisan-ism of politics does not lead to distortions of truth


  • carrying out research to understand what the policy problems are (often unclear how to do this)
  • understanding what the population wants (public often ignored, need good procedures for information gathering, public consultation, etc)


  • Development of policy options to address problems
  • Mechanisms for Cost Benefit Analysis and Impact Assessments to decide best policy options
  • access to expertise advice and best practice (lack of communication between academia and policy)
  • measuring impact of a policy proposal once in place (ensuring that mechanisms to measure impact are initiated at the very start of the policy implementation)
  • actually using that information on impact once gathered (evidence often ignored)
  • how politicians are allowed to change their mind given new evidence (updating is often seen as weakness)
  • mechanisms to ensure issues that are not politically immediately necessary are tackled (lack of long term thinking)


  • flexibility to deal with shocks of every step of the above process (often lacking)
  • transparency of every step of the above process (often lacking)
Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-03-06T08:59:39.665Z · score: 2 (1 votes) · EA · GW

Another thing to consider is that, given climate modelling is so imprecise and regularly flawed, our models may be wrong and the risk may be significantly different than predicted.

(Similar to some of Toby's work on the Large Hadron Collider risks.)

This could go both ways.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-06T08:41:37.900Z · score: 16 (5 votes) · EA · GW

This is really really impressive. An amazing collection of really important questions.

POSITIVES. I like the fact that you intend to research:
* Institutional actors (2.8). Significant changes to the world are likely to come through institutional actors, and the EA community has largely ignored them to date. The existing research has focused so much on the benefits of marginal donations (or marginal research) that our views on cause prioritisation cannot be easily applied to states. As someone in EA who works on influencing states, I see this as a really problematic oversight of the community to date, one we should be looking to fix as soon as possible.
* Decision-theoretic issues (2.1)
* The use of discount rates. This is practically useful for decision makers.

OMISSIONS. I did however note a few things that I would have expected to be included but that are not mentioned in this research agenda. In particular there was no discussion of:
* Useful models for thinking about and talking about cause prioritisation. In particular the scale, neglectedness and tractability framework is often used and often criticised. What other models can or should be used by the EA community?
* Social change. Within section 1 there is some discussion of broad versus narrow future-focused interventions, so I would have expected a similar discussion in section 2 on social change interventions versus targeted interventions in general. This was not mentioned.
* (which risks to the future are most concerning. Although I assume this is because those topics are being covered by others such as FHI.)

Like I said above, I think the questions within 2.8 are really important for EA to focus on. I hope that the fact it is low on the list does not mean it is not prioritised.
I also note that there is a sub-question in 2.8 on "what is the best feasible voting system". I think this issue comes up too much and is often a distraction. It feels like a minor sub-part of the question of "what is the optimal institution design", which people gravitate to because it is the most visible part of many political systems, but it is really unlikely to be the thing on the margin that most needs improving.

I hope that helps, Sam

Comment by weeatquince_duplicate0-37104097316182916 on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T08:09:54.487Z · score: 28 (15 votes) · EA · GW

CEA run the EA community fund to provide financial support to EA community group leaders.

The key metric that CEA uses for evaluating the success of the groups they fund is the number of people from each local group who reach the interview stage for high impact jobs, which largely means jobs within EA organisations. Bonus points are available if they get the job.

This information feels like a relevant piece of the puzzle for anyone thinking through these issues. It could be that, in hindsight, CEA pushing chapter organisers to push people to focus on jobs in EA organisations might not be the best strategy.

Comment by weeatquince_duplicate0-37104097316182916 on Tactical models to improve institutional decision-making · 2019-01-13T23:43:49.160Z · score: 3 (2 votes) · EA · GW

I found this article unclear about what you mean when you say "improving institutional decision making" (in policy). I think we can break this down into two very different things.

A: Improving the decision making processes and systems of accountability that policy institutions use to make decisions, so that these institutions will more generally be better decision makers. (This is what I have always meant and understood by the term "improving institutional decision making", and what Jess talks about in her post you link to.)

B: Having influence in a specific situation on the policy making process. (This is basically what people tend to call "lobbying" or sometimes "campaigning".)

I felt that the DFID story and the three models were all focused on B: lobbying. The models were useful for thinking about how to do B well (assuming you know better than the policy makers what policy should be made). Theoretical advice on lobbying is a nice thing to have* if you are in the field (so thank you for writing them up, I may give them some thought in my upcoming work). And if you are trying to change A it would be useful to understand how to do B.

The models were less useful for advising on how to do A: improving how institutions work generally. And A is where I would say the value lies.

I think the main point is just about how easy the article was to read. I found the article itself very confusing as to whether you were talking about A or B at many points.

*Also in general I think the field of lobbying is, as one might say, "more of an art than a science", and although a theoretical understanding of how it works is nice, it is not super useful compared to experience in the field in the specific country that you are in.

Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T23:15:02.558Z · score: 3 (2 votes) · EA · GW

I would be curious about any views or research you may have done into geoengineering risk?

My understanding is that climate change is not itself an existential risk but that it may lead to other risks (such as war, which Peter Hurford mentions). One other risk is geoengineering, where humanity starts thinking it can control planetary temperatures and makes a mistake (or the technology is used maliciously), and that presents a risk.

Comment by weeatquince_duplicate0-37104097316182916 on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-13T23:08:37.853Z · score: 6 (5 votes) · EA · GW

Just to flag that the case for this is much much weaker outside the USA.

The matching limits for donations outside the US are much lower and you may also lose the tax benefits of donating.


Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-09-22T07:20:02.197Z · score: 6 (2 votes) · EA · GW

Hi Kerry, thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked so it is not perfect. Please correct anything I have misremembered.



~ ~ Setting the scene ~ ~

  • CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research and a community that continues to work out how to do the most good. (We both agreed this.)
  • There is a difference between “cause impartiality”, as defined above, and “actual impartiality”, not having a view on what causes are most important. (There was some confusion but we got through it)
  • There is a difference between long-termism as a methodology, where one considers the long run future impacts of actions (which CEA should 100% promote), and long-termism as a conclusion, that the most important thing to focus on right now is shaping the long term future of humanity. (I asserted this; not sure you expressed a view.)
  • A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to what causes are most important. They may have different skills to apply or different ethics (and we are far away from solving ethics if such a thing is possible). (I asserted this, not sure you expressed a view.)



~ ~ Create space, build trust, express a view, do not be perfect ~ ~

  • The EA community needs to create the right kind of space so that people can reach their own decisions about what causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached and more likely to come to the correct answer for themselves, and EA is more likely to come to a correct answer overall. To do this they need good tools and resources, and to feel that the space they are in is neutral. This needs trust...

  • Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA’s resources and CEA should be taken very seriously.

  • Creating that space does not mean you cannot also express a view. You just want to distinguish when you are doing this. You can create cause prioritisation resources and tools that are truly neutral but still have a separate section on what answers CEA staff reach or what CEA’s answer is.

  • Perfection is not required as long as there is trust and the system is not breaking down.

  • For example, providing policy advice: I gave the example of writing advice to a Government Minister on a controversial political issue, as a civil servant. The first ~85% of this imaginary advice is an impartial summary of the background and the problem, followed by a series of suggested actions with evaluations of their impact. The final ~15% is a recommended action based on the civil servant’s view of the matter. The important thing here is that there is generally trust between the Minister and the Department that advice will be neutral, and in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It does not need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust that does not matter. And there is a recommendation which the Minister can choose to follow or not. In many cases the Minister will follow the recommendation.



~ ~ How this goes wrong ~ ~

  • Imagine someone who has identified a super important cause X and then comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off or persuaded that the current EA cause is more important and forgets about cause X.

  • I mentioned some of the things that damage trust (see the foot of my previous comment).

  • You mentioned you had seen signs of tribalism in the EA community.



~ ~ Conclusion ~ ~

  • You said that you saw more value in CEA creating a space that was “actual impartial” as opposed to “cause impartial” than you had done previously.



~ ~ Addendum: Some thoughts on evidence ~ ~

Not discussed but I have some extra thoughts on evidence.

There are two areas of my life where much of what I have learned points towards the views above being true.

  • Coaching. In coaching you need to make sure the coachee feels like you are there to help them, not in any way pursuing your own agenda (one that is different from theirs).

  • Policy. In policy making you need trust and neutrality between Minister and civil servant.

There is value in following received wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (e.g. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find it less useful.) Perhaps this deserves further study.

Also worth bearing in mind there may be dissimilarities between what CEA does and the fields of coaching and policy.

Also worth flagging that the example of policy advice given above is somewhat artificial; some policy advice (especially where controversial) is like that, but much of it is just: “please approve action x”.

In conclusion my views on this are based on very little evidence and a lot of gut feeling. My intuitions on this are strongly guided by my time doing coaching and doing policy advice.

Comment by weeatquince_duplicate0-37104097316182916 on Additional plans for the new EA Forum · 2018-09-16T01:08:48.125Z · score: 13 (13 votes) · EA · GW

Feature idea: If you co-write an article with someone being able to post as co-authors.

Self-care sessions for EA groups

2018-09-06T15:55:12.835Z · score: 11 (8 votes)
Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-26T12:56:58.154Z · score: 3 (3 votes) · EA · GW

Hi Kerry, Some more thoughts prior to having a chat.


Is longtermism a cause?

Yes and no. The term is used in multiple ways.

A: Consideration of the long-term future.

It is a core part of cause prioritisation to avoid availability biases: to consider the plights of those we cannot so easily be aware of, such as animals, people in other countries and people in the future. As such, in my view, it is imperative that CEA and EA community leaders promote this.

B: The long-term cause area.

Some people will conclude that the optimal use of their limited resources should be putting them towards shaping the far future. But not everyone, even after full rational consideration, will reach this view. Nor should we expect such unanimity of conclusions. As such, in my view, CEA and EA community leaders can recommend people to consider this causes area, but should not tell people this is the answer.


Threading the needle

I agree with the 6 points you make here.

(Although interestingly I personally do not have evidence that “area allegiance is operating as a kind of tribal signal in the movement currently”)


CEA and cause-impartiality

I think CEA should be careful about how it expresses a view. Doing this in the wrong way could make it look like CEA is not cause impartial or not representative.

My view is to give recommendations and tools but not answers. This is similar to how we would not expect 80K to have a view on what the best job is (as it depends on an individual and their skills and needs) but we would expect 80K to have recommendations and to have advice on how to choose.

I think this approach is also useful because:

  • People are more likely to trust decisions they reach through their own thinking rather than conclusions they are pushed towards.

  • It handles the fact that everyone is different. The advice or reasoning that works for one person may well not make sense for someone else.

I think (as Khorton says) it is perfectly reasonable for an organisation to not have a conclusion.


(One other thought I had was on examples of actions that I would be concerned about CEA or other movement building organisations taking. These would include: expressing certainty about an area (in internal policy or externally), basing impact measurement solely on a single cause area, hiring staff for cause-general roles based on their views of which causes are most important, attempting to push as many people as possible to a specific cause area, etc.)

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-16T07:27:48.117Z · score: 1 (1 votes) · EA · GW

Yes thanks. Edited.

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-15T23:50:28.751Z · score: 27 (29 votes) · EA · GW

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


The policy is excellent but the challenge lies in implementation.

Firstly I want to say that this post is fantastic. I think you have got the policy correct: that CEA should be cause-impartial, but not cause-agnostic and CEA’s work should be cause-general.

However I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80000 Hours who are explicitly focusing much of their effort on the far future.

These are all quite subtle things, but collectively they give the impression that CEA is not cause impartial (that it is x-risk focused). Of course this is a difficult thing to get correct. It is difficult to draw the line between saying 'our staff members believe cause ___ is important' (a useful factoid that should definitely be said) and putting across a strong front of cause impartiality.


Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause impartial I think most of the solutions to this are around being super vigilant about how CEA comes across. Eg:

  • Have a clear internal style guide that sets out to staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause impartial (say EA Grants mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions split them off into a sub project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) have enough of a diversity of causes mentioned that the broader EA community does not feel misrepresented but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy' which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- ie. consulting publicly (or with trusted parties) wherever possible.

If anything getting these two things correct is more important than getting the exact percentage of your work to be representative.

Sam :-)


[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

Comment by weeatquince_duplicate0-37104097316182916 on EA Funds - An update from CEA · 2018-08-08T23:04:27.074Z · score: 0 (0 votes) · EA · GW

YAY <3

Comment by weeatquince_duplicate0-37104097316182916 on EA Funds - An update from CEA · 2018-08-08T11:17:18.090Z · score: 6 (6 votes) · EA · GW

Marek, well done on all of your hard work on this.

Separate from the managed funds, I really like the work that CEA is doing to help money be moved around the world to other EA charities. I would love to see more organisations on the list of places that donations can be made through the EA Funds platform, e.g. REG, Animal Charity Evaluators or Rethink Charity. Is this in the works?

Comment by weeatquince_duplicate0-37104097316182916 on Leverage Research: reviewing the basic facts · 2018-08-05T21:57:53.473Z · score: 38 (28 votes) · EA · GW

counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity ... I will let readers make their own calculations

Hi Geoff. I gave this a little thought and I am not sure it works. In fact it looks quite plausible that someone's EV (expected value) calculation on Leverage might actually come out as negative (ie. Leverage would be causing harm to the world).

This is because:

  • Most EA orgs calculate their counterfactual expected value by taking into account what the people in that organisation would be doing otherwise if they were not in that organisation and then deduct this from their impact. (I believe at least 80K, Charity Science and EA London do this)

  • Given Leverage's tendency to hire ambitious altruistic people and to look for people at EA events it is plausible that a significant proportion of Leverage staff might well have ended up at other EA organisations.

  • There is a talent gap at other EA organisations (see 80K on this)

  • Leverage does spend some time on movement building but I estimate that this is a tiny proportion of its time, <5%, best guess 3% (based on having talked to people at Leverage and also on comparing your achievements to date to the apparent 100 person-years figure)

  • Therefore if the proportion of staff who could be expected to have found jobs in other EA organisations is thought to be above 3% (which seems reasonable), then Leverage is actually displacing EAs from productive action, so the total EV of Leverage is negative
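
To make the logic of the bullets above concrete, here is a toy sketch of the counterfactual EV calculation. Every number in it (the 3% movement-building share, the 10% displacement share, the value of a person-year at another EA org) is an illustrative assumption, not a real figure for any organisation:

```python
# Toy counterfactual EV model, assuming the value of research output is 0.
# All numbers are illustrative assumptions, not actual figures.

def net_ev(movement_building_share: float,
           displaced_share: float,
           staff_years: float,
           value_per_ea_org_year: float = 1.0) -> float:
    """Net expected value of an org, in arbitrary 'impact units'.

    movement_building_share: fraction of staff time producing direct value
    displaced_share: fraction of staff who would otherwise have worked
                     at other EA organisations
    """
    direct_value = movement_building_share * staff_years * value_per_ea_org_year
    counterfactual_cost = displaced_share * staff_years * value_per_ea_org_year
    return direct_value - counterfactual_cost

# With ~3% of time on movement building and, say, 10% of staff displaced
# from other EA orgs over 100 person-years, the net EV comes out negative:
print(net_ev(0.03, 0.10, 100))  # ≈ -7.0: net harm under these assumptions
```

The sign of the result flips as soon as the movement-building share exceeds the displacement share, which is the crux of the argument.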

Of course this is all assuming the value of your research is 0, which is the assumption you set out in your post. Obviously in practice I don't think the value of your research is 0, and as such I think it is possible that the total EV of Leverage is positive*. I think more transparency would help here. Given that almost no research is available, I do think it would be reasonable for someone who is not at Leverage to give your research an EV of close to 0 and therefore conclude that Leverage is causing harm.
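The back-of-envelope logic above can be sketched in a few lines. All the numbers below are illustrative placeholders (not real estimates), and the value units are normalised so that a year of staff time at another EA org counts as 1:

```python
# A minimal sketch of the counterfactual EV argument above.
# All numbers are hypothetical placeholders, not real estimates.

def counterfactual_ev(research_value, movement_building_share, displacement_share):
    """EV relative to the counterfactual of staff working at other EA orgs.

    research_value: value of the org's research output.
    movement_building_share: fraction of staff time spent on movement
        building (assumed, for simplicity, to be worth the same per hour
        as time at another EA org).
    displacement_share: fraction of staff who would otherwise have worked
        at another EA org, i.e. foregone output elsewhere.
    """
    return research_value + movement_building_share - displacement_share

# Research valued at 0, ~3% of time on movement building, and more than
# 3% of staff displaced from other EA orgs: the EV comes out negative.
assert counterfactual_ev(0.0, 0.03, 0.05) < 0
# A sufficiently positive research value flips the sign.
assert counterfactual_ev(0.10, 0.03, 0.05) > 0
```

The sign of the result hinges entirely on whether `displacement_share` exceeds `research_value + movement_building_share`, which is the crux of the comment.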

I hope this helps and maybe explains why Leverage gets a bad rep. I am excited to see more transparency and a new approach to public engagement. Keep on fighting for a better world!

*sentence edited to better match views

Comment by weeatquince_duplicate0-37104097316182916 on Problems with EA representativeness and how to solve it · 2018-08-04T07:34:22.002Z · score: 15 (17 votes) · EA · GW

Hi Joey, thank you for writing this.

I think calling this a problem of representation is actually understating the problem here.

EA has (at least to me) always been a community that inspires, encourages and supports people to use all the information and tools available to them (including their individual priors, intuitions and sense of morality) to reach a conclusion about which causes and actions are most important for them to take to make a better world (and of course to then take those actions).

Even if 90% of experienced EAs / EA community leaders currently converge on the same conclusion as to where value lies, I would worry that a strong focus on that issue would be detrimental. We'd be at risk of losing the emphasis on cause prioritisation - arguably the most useful insight that EA has provided to the world.

  • We'd risk losing the ability to support people through cause prioritisation (coaching, EA or otherwise, should not pre-empt the answers or have ulterior motives)
  • We'd risk creating a community that is less able to switch its focus to the most important thing
  • We'd risk stifling useful debate
  • We'd risk creating a community that does not benefit from collaboration between people working in different areas
  • etc

(Note: Probably worth adding that if 90% of experienced EAs / EA community leaders converged on the same conclusion on causes, my intuitions would suggest that this is likely to be evidence of founder effects / group-think as much as it is evidence for that cause. I expect this is because I see a huge diversity in people's values and thinking, and a difficulty in reaching strong conclusions in ethics and cause prioritisation.)

Comment by weeatquince_duplicate0-37104097316182916 on Open Thread #39 · 2018-07-08T22:18:18.758Z · score: 0 (0 votes) · EA · GW

Hi, a little late, but did you get an answer to this? I am not an expert but can direct this to people in EA London who can maybe help.

My very initial (non-expert) thinking was:

  • this looks like a very useful list of how to mitigate climate consequences through further investment in existing technologies.

  • this looks like a list written by a scientist, not a policy maker. Where do diplomatic interventions such as "subsidise China to encourage them not to mine as much coal" fall on this list? I would expect subsidies to prevent coal mining to be effective.

  • "atmospheric carbon capture" is not on the list. My understanding is that atmospheric carbon capture may be a necessity for mitigating climate change in the long run (by controlling CO2 levels), whereas everything else on this list is useful in the short-to-medium run but not strictly necessary.

Comment by weeatquince_duplicate0-37104097316182916 on EA Hotel with free accommodation and board for two years · 2018-06-21T23:23:30.900Z · score: 4 (4 votes) · EA · GW

Greg this is awesome - go you!!! :-D :-D

To provide one extra relevant reference class: I have let EAs stay for free / donations at my place in London to work on EA projects and on the whole was very happy I did so. I think this is worthwhile and there is a need for it (with some caution as to both risky / harmful projects and well intentioned free-riders).

Good luck registering as a CIO - not easy. Get in touch with me if you are having trouble with the Charity Commission. Note: you might need Trustees who are not going to live for free at the hotel (there are lots of rules against Trustees receiving any direct benefits from their charity).

Also if you think it could be useful for there to be a single room in London for Hotel guests to use for say business or conference attendance then get in touch.

Comment by weeatquince_duplicate0-37104097316182916 on How to improve EA Funds · 2018-04-05T23:46:57.812Z · score: 4 (4 votes) · EA · GW

For information. EA London has neither been funded by the EA Community Fund nor diligently considered for funding by the EA Community Fund.

In December EA London was told that the EA Community Fund was not directly funding local groups as CEA would be doing that. (This seems to be happening, see:

Comment by weeatquince_duplicate0-37104097316182916 on Climate change, geoengineering, and existential risk · 2018-03-25T10:05:04.254Z · score: 0 (0 votes) · EA · GW

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)

Good point. Agreed. Had not considered this

I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions.

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: LHC also had natural analogues in atmospheric cosmic rays, I believe this was accounted for in FHI's work on the matter)


I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

Comment by weeatquince_duplicate0-37104097316182916 on Meta: notes on EA Forum moderation · 2018-03-23T18:54:35.914Z · score: 6 (6 votes) · EA · GW

Hi, can you give an example or two of an "announcement of a personal nature"? I cannot think of any posts I have seen that would fall into that category.


Comment by weeatquince_duplicate0-37104097316182916 on Climate change, geoengineering, and existential risk · 2018-03-23T18:50:45.641Z · score: 3 (2 votes) · EA · GW

My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.

The chance of this might be small but if you are worried about existential risks it should definitely be considered. (In fact I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises).

I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.

For a similar case see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: and I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.

Comment by weeatquince on [deleted post] 2018-03-23T18:36:21.728Z

In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts.

The definitions and explanations used here: and here: are, in my mind, better and more useful than the quote above for almost any situation I have been in to date.

Additional evidence for the above: for example, I have a very vague memory of talking to Will about this and concluding that he had a slightly odd and quite broad definition of "welfarist", where "welfare" in this context just meant 'good for others' without any implication of fulfilling happiness / utility / preference / etc. This comes out in the linked paper, in the line "if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ...." etc

Comment by weeatquince_duplicate0-37104097316182916 on Policy prioritization in a developed country · 2018-03-11T23:06:31.548Z · score: 3 (3 votes) · EA · GW

This sounds like a really good project. You clearly have a decent understanding of the local political issues, a clear idea of how this project can map to other countries and prove beneficial globally, and a good understanding of how it plays a role in the wider EA community (I think it is good that this project is not branded as 'EA').

Here are a number of hopefully constructive thoughts to help you fine-tune this work. These may be things you thought about that did not make the post. I hope they help.




As far as I can tell the CCC seems not to care much about scenarios with a small chance of a very high impact. On the whole the EA community does care about these scenarios. My evidence for this comes from the EA community's concern for the extreme risks of climate change and for x-risks, whereas the CCC work on climate change that I have seen seems to have ignored these extreme risks. I am unsure why the discrepancy. (Many EA researchers do not use a future discount rate for utility; does CCC?)

This could be problematic in terms of the cause prioritisation research being useful for EAs, for building a relationship with this project and EA advocacy work, EA funding, etc, etc.




Sometimes the most important priorities will not be the ones that the public will latch onto. It is unclear from the post:

2.1 how you intend to find a balance between delivering the messages that are most likely to create change versus saying the things you most believe to be true. And

2.2 how the advocacy part of this work might differ from work that CCC has done in the past. My understanding is that to date the CCC has mostly tried to deliver true messages to an international policy maker audience. Your post however points to the public sentiment as a key driving factor for change. The advocacy methods and expertise used in CCC's international work are not obviously the best methods for this work.




For a prioritization research piece like this, I could imagine the researcher might dive straight into looking at the existing issues on the political agenda and prioritising between them based on some form of social rate of return. However, I think there are a lot of very high level questions that could be asked first, like:
  • Is it more important to prevent the government making really bad decisions in some areas, or to improve the quality of the good decisions?
  • Is it more important to improve policy, or to prevent a shift to harmful authoritarianism?
  • How important is it to set policy that future political trends will not undo?
  • How important is the acceptability of the suggested policy among policy makers / the public?
Are these covered in the research?

Also, to what extent will the research look at improving institutional decision making? To be honest I would genuinely not be surprised if the conclusion of this project was that the most high impact policies were those designed to improve the functioning / decision making / checks and balances of the government. If you can cut corruption and change how government works for the better, then the government will get more policies correct across the board in future. Is this your intuition too?


Finally to say I would be interested to be kept up-to-date with this project as it progresses. Is there a good way to do this? Looking forward to hearing more.

Comment by weeatquince_duplicate0-37104097316182916 on Announcing Effective Altruism Community Building Grants · 2018-03-11T22:21:09.642Z · score: 2 (2 votes) · EA · GW

EA London estimated counterfactual "large behaviour changes" taken by community members. This includes taking the GWWC pledge and large career shifts (although a change to future career plans probably wouldn't cut it).

Comment by weeatquince_duplicate0-37104097316182916 on Why not to rush to translate effective altruism into other languages · 2018-03-09T15:24:45.698Z · score: 4 (4 votes) · EA · GW

My point was not about policy interventions specifically. I think more broadly there is too often an attitude of arrogance among EAs who think that because they can do cause prioritisation better than their peers, they can also solve difficult problems better than experts in those fields. (I know I have been guilty of this at points.)


In policy, I agree with you that EA policy projects fall across a large spectrum from highly professional to poorly thought-out.

That said, I think that even at the better end of the spectrum there is a lack of professional lobbyists employed by EA organisations and more of a do-it-ourselves attitude. EA orgs often prefer to hire enthusiastic EAs rather than expensive experts (which may be a totally legitimate approach; I have no strong view on the matter).

Comment by weeatquince_duplicate0-37104097316182916 on Where I am donating this year and meta projects that need funding · 2018-03-09T14:57:56.706Z · score: 2 (2 votes) · EA · GW

Unfortunately I do not have a single easily quotable source for this. Furthermore it is not always clear-cut: funding needs change with time, and additional funding might mean an ability to start extra projects (like EA Grants). However, unlike Rethink Charity or Charity Science Health, there is not a clear project that I can point to that will not get funded if CEA / 80K do not get more funding this year.

If you are donating in the region of £10k+ and are concerned that the larger EA orgs have less need for funding, I would say get in touch with them. They are generally happy to talk to donors in person and give more detailed answers (and my comment on this matter has been shaped by talking to people who have done this).

Comment by weeatquince_duplicate0-37104097316182916 on Why not to rush to translate effective altruism into other languages · 2018-03-05T19:32:32.623Z · score: 10 (12 votes) · EA · GW

Good article Ben!


I think similar risks arise with translating effective altruism to new domains or new audiences with particular expertise.

I've felt this when interacting with people looking to apply effective altruism ideas in policy. Such exercises should be approached with caution: you cannot just tell policy makers to use evidence (they've already heard about evidence) or to put all their resources to whatever looks most effective (wouldn't work) etc.

Similarly I suspect there is something to the fact that I find EA materials have had limited acceptance among experts in international development.


I would go a step further and say that the aim should not solely be one of translating EA ideas but also of improving EA ideas. Currently EA is fairly un-diverse in terms of cultures, plurality of ethical views, academic background, etc. I think we can learn a lot from those we are trying to reach out to.


(Minor aside: I think mass outreach efforts done well have been and still are valuable, and this article underplays that.)

Where I am donating this year and meta projects that need funding

2018-03-02T13:42:18.961Z · score: 11 (11 votes)
Comment by weeatquince_duplicate0-37104097316182916 on Announcing Effective Altruism Community Building Grants · 2018-02-25T12:23:26.540Z · score: 1 (3 votes) · EA · GW

how do you think this compares with an additional employee at a non-local EA org?

EA London estimated that in its first year with a paid staff member it had about 50% of the impact per £ invested of a more established EA organisation such as GWWC or 80K.

It is also worth bearing in mind that the non-monetary costs of an additional employee are higher than the non-monetary costs of a grant (e.g. training, management time, overheads, risks, opportunity costs).

Comment by weeatquince_duplicate0-37104097316182916 on Effective Volunteering · 2018-01-26T17:13:05.710Z · score: 2 (2 votes) · EA · GW

Awesome job! :-) Is it possible to see the list of the volunteering opportunities you found and considered?

Comment by weeatquince_duplicate0-37104097316182916 on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2017-12-22T15:19:17.075Z · score: 3 (3 votes) · EA · GW

This is fantastic. Thank you for writing up. Whilst reading I jotted down a number of thoughts, comments, questions and concerns.



I am very excited about this and very glad that CEA is doing more of this. How to best move funding to the projects that need it most within the EA community is a really important question that we have yet to solve. I saw a lot of people with some amazing ideas looking to apply for these grants.


"with an anticipated budget of around £2m"

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought prior to reading that one of your noted mistakes was "underestimated the number of applications"; it feels like you might still be making this mistake.


"mostly evaluating the merits of the applicants themselves rather than their specific plans"

Interesting decision. Seems reasonable. However I think it does have a risk of reducing diversity, and I would be concerned that applicants would be judged on their ability to philosophise in an academic Oxford manner, etc.

Best of luck with it




"encouraging more people to use Try Giving,"

Could CEA comment or provide advice to local group leaders on whether they would want local groups to promote the GWWC pledge or the Try Giving pledge, or on when one might be better than the other? To date the advice seems to have been to push the Pledge as much as possible and not Try Giving.


"... is likely to be the best way to help others."

I do not like the implication that there is a single answer to this question regardless of individuals' moral frameworks (utilitarian / non-utilitarian / religious / etc) or skills and backgrounds. Where the mission is to have an impact as "a global community of people...", the research should focus on supporting those people to do whatever has the biggest impact given their positions.

5 Positives

"Self-sorting: People tend to interact with others who they perceive are similar to themselves"

This is a good thing to have picked up on.

"Community Health"

I am glad this is a team

"CEA’s Mistakes"

I think it is good to have this written up.


"Impact review"

It would have been interesting to see an estimates for costs (time/money) as well as for the outputs of each team.



Comment by weeatquince_duplicate0-37104097316182916 on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2017-12-22T14:53:33.312Z · score: 3 (5 votes) · EA · GW

I have a very similar concern to Michael's. In particular, it looked to me like the participants picked for this were people with whom CEA had an existing relationship, for example picking from CEA's donor base. This means that participants were those with a very high opportunity cost in moving to direct work (as they were big donors). I expect that this is a suboptimal way of getting people to move into direct work.

Look forward to seeing:

something much more systematic to avoid the concerns you've raised and to avoid us being too biased in favor of our preexisting social networks

Comment by weeatquince_duplicate0-37104097316182916 on Introducing EA Work Club – high-impact jobs and side projects for EAs · 2017-12-08T16:03:46.159Z · score: 3 (3 votes) · EA · GW

Looks awesome - I hope people find it useful :-) Maybe, if it gets popular, it would be worth having a way (or plans for a way) to filter jobs / projects by location?

Comment by weeatquince_duplicate0-37104097316182916 on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T17:46:08.254Z · score: 7 (9 votes) · EA · GW

There was a project in London where we decided on where to donate £1000. The participants were EAs in London who have non-utilitarian ethical intuitions that equality / justice are intrinsically morally valuable. The result was a sexual violence prevention charity called 'No Means No' that runs education workshops in the developing world, and has a few RCTs that support their claims about impact.

Project written up here:

Someone is also working on a write up of the evidence base behind 'no means no' but this is not ready for publication. If you are interested I can try to loop you in (PM me on Facebook: Samuel Hilton).

(Disclaimer / apologies: I have a lot on and have not read the whole article or the comments; it looks well researched, so good job. But I just wanted to make sure you had seen this project as it may be relevant for your research.)

Comment by weeatquince_duplicate0-37104097316182916 on Lessons from a full-time community builder. Part 1 of 4. Impact assessment · 2017-10-29T23:14:37.339Z · score: 0 (0 votes) · EA · GW

It roughly came from the idea of treating movement building as a marketing funnel. It is similar to the marketing funnel you'd expect of any other organisation, except "buy our junk" is replaced with "behaviour change".

I did not have specific evidence on community building that this was a particularly good theory of change, although nothing I read when looking for data on this suggested it would not be a good theory of change.

What is it that is contested about this?

Comment by weeatquince_duplicate0-37104097316182916 on Lessons from a full-time community builder. Part 1 of 4. Impact assessment · 2017-10-29T23:10:19.480Z · score: 2 (2 votes) · EA · GW

Thanks for the feedback Rob

i) The opportunity cost of time has been low.

  • For me, there were minimal opportunities to do something higher impact at this stage in my career. For example, I could have stayed in government, and I doubt this would have had much impact (also, this year out has not significantly damaged my civil service career; I was able to return on a promotion). It is not clear that I had the credibility on any other EA project to find funders willing to cover my costs for the year. I could have worked part-time in the civil service and tried to found a different type of organisation, but I think that is unlikely to have gone as well.

• David Nash has invested time but it is helping him move career-wise in a direction he wants to be going in.

• I expect the interns taken on would not have spent time as effectively otherwise.

• Time invested by others was minimal.

ii)-iv) Agree

Comment by weeatquince_duplicate0-37104097316182916 on Personal thoughts on careers in AI policy and strategy · 2017-10-18T13:08:36.114Z · score: 0 (0 votes) · EA · GW

In fact a more general version of the above question is:

What are the existing research / consultancy / etc disciplines that are most similar to the kind of work you are looking for?

If you can identify that it could help people in local communities direct people to this kind of work.

Comment by weeatquince_duplicate0-37104097316182916 on Personal thoughts on careers in AI policy and strategy · 2017-10-18T08:54:48.034Z · score: 0 (0 votes) · EA · GW

Quick question: is your term "disentanglement research" similar to the discipline of "systems thinking", and if so, what are the differences? (Trying to get to grips with what you mean by "disentanglement research".)

General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4

2017-10-10T18:24:05.400Z · score: 13 (11 votes)
Comment by weeatquince_duplicate0-37104097316182916 on The Effective Altruism Equality and Justice Project · 2017-10-07T13:41:00.929Z · score: 3 (3 votes) · EA · GW

I want to add additional thanks to Ellie Karslake for organising these events, finding venues and so on.

Lessons from a full-time community builder. Part 1 of 4. Impact assessment

2017-10-04T18:14:12.357Z · score: 14 (14 votes)
Comment by weeatquince_duplicate0-37104097316182916 on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-07T11:33:52.266Z · score: 2 (2 votes) · EA · GW

Hi, In case helpful for considering the additional Facebook information, I have a bunch of data on EA social media presence to help me compare growth in London to other locations, including a lot of downloaded Sociograph data from 2016.

For example the EA Facebook group size over the last year:

03/06/2016 _ 10263

13/01/2017 _ 12070

10/06/2017 _ 12,953

Obviously you'd expect a group like this to grow even if the movement were shrinking, as people join and then tend not to leave (even if they start ignoring it).
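The growth implied by these snapshots can be computed directly (a quick sketch; the dates and member counts are taken from the figures above):

```python
from datetime import date

# EA Facebook group size snapshots quoted above.
snapshots = [
    (date(2016, 6, 3), 10263),
    (date(2017, 1, 13), 12070),
    (date(2017, 6, 10), 12953),
]

# Percentage growth between consecutive snapshots.
for (d0, n0), (d1, n1) in zip(snapshots, snapshots[1:]):
    days = (d1 - d0).days
    growth = (n1 - n0) / n0
    print(f"{d0} to {d1}: +{growth:.1%} over {days} days")
```

This shows growth slowing between the two intervals, though (per the caveat above) a monotone metric like group size can keep growing even if active engagement is falling.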

Comment by weeatquince_duplicate0-37104097316182916 on An argument for broad and inclusive "mindset-focused EA" · 2017-09-06T08:10:40.566Z · score: 0 (0 votes) · EA · GW

I want to suggest a more general version of Ajeya's views which is:

If someone did want to put time and effort into creating the resources to promote something akin to "broad effective altruism" they could focus their effort in two ways:

  1. on research and advocacy that does not add to (and possibly detracts attention from) the "narrow effective altruism" movement.

  2. on research and advocacy that benefits the effective altruism movement.


  1. Eg. Researching what the best arts charity in the UK is. Not useful, as it is very unlikely that anyone who takes a cause-neutral approach to charity would want to give to a UK arts charity. There is also a risk of misleading people, for example if you google effective altruism and a bunch of materials on UK arts comes up first.

  2. Eg. Researching general principles of how to evaluate charities. Researching climate change solutions. Researching systemic change charities. These would all expand the scope of EA research and writing, might produce plausible candidates for the best charity/cause, and at the same time act to attract more people into the movement. Consider climate change: it is a problem that at some point this century humanity has to solve (unlike UK arts), and it is also a cause many non-EAs care about strongly.


So if at least some effort were put into any "broad effective altruism" expansion, I would strongly recommend starting with ways of expanding the movement that are simultaneously useful areas for us to be considering in more detail.

(That said, FWIW, I am very wary of attempts to expand to a "broad effective altruism", for some of the reasons mentioned by others.)

Understanding Charity Evaluation

2017-05-11T14:55:05.711Z · score: 3 (3 votes)

Cause: Better political systems and policy making.

2016-11-22T12:37:41.752Z · score: 12 (18 votes)

Thinking about how we respond to criticisms of EA

2016-08-19T09:42:07.397Z · score: 3 (3 votes)

Effective Altruism London – a request for funding

2016-02-05T18:37:54.897Z · score: 5 (9 votes)

Tips on talking about effective altruism

2015-02-21T00:43:28.703Z · score: 12 (12 votes)

How I organise a growing effective altruism group in a big city in less than 30 minutes a month.

2015-02-08T22:20:43.455Z · score: 11 (13 votes)

Meetup : Super fun EA London Pub Social Meetup

2015-02-01T23:34:10.912Z · score: 0 (0 votes)

Top Tips on how to Choose an Effective Charity

2014-12-23T02:09:15.289Z · score: 5 (3 votes)

Outreaching Effective Altruism Locally – Resources and Guides

2014-10-28T01:58:14.236Z · score: 10 (10 votes)

Meetup : Under the influence @ the Shakespeare's Head

2014-09-12T07:11:14.138Z · score: 0 (0 votes)