UK policy and politics careers 2019-09-28T16:18:43.776Z · score: 27 (13 votes)
AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. 2019-08-27T11:04:10.439Z · score: 22 (10 votes)
Self-care sessions for EA groups 2018-09-06T15:55:12.835Z · score: 12 (9 votes)
Where I am donating this year and meta projects that need funding 2018-03-02T13:42:18.961Z · score: 11 (11 votes)
General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 2017-10-10T18:24:05.400Z · score: 13 (11 votes)
Lessons from a full-time community builder. Part 1 of 4. Impact assessment 2017-10-04T18:14:12.357Z · score: 14 (14 votes)
Understanding Charity Evaluation 2017-05-11T14:55:05.711Z · score: 3 (3 votes)
Cause: Better political systems and policy making. 2016-11-22T12:37:41.752Z · score: 12 (18 votes)
Thinking about how we respond to criticisms of EA 2016-08-19T09:42:07.397Z · score: 3 (3 votes)
Effective Altruism London – a request for funding 2016-02-05T18:37:54.897Z · score: 5 (9 votes)
Tips on talking about effective altruism 2015-02-21T00:43:28.703Z · score: 12 (12 votes)
How I organise a growing effective altruism group in a big city in less than 30 minutes a month. 2015-02-08T22:20:43.455Z · score: 11 (13 votes)
Meetup : Super fun EA London Pub Social Meetup 2015-02-01T23:34:10.912Z · score: 0 (0 votes)
Top Tips on how to Choose an Effective Charity 2014-12-23T02:09:15.289Z · score: 5 (3 votes)
Outreaching Effective Altruism Locally – Resources and Guides 2014-10-28T01:58:14.236Z · score: 10 (10 votes)
Meetup : Under the influence @ the Shakespeare's Head 2014-09-12T07:11:14.138Z · score: 0 (0 votes)


Comment by weeatquince_duplicate0-37104097316182916 on Some personal thoughts on EA and systemic change · 2019-10-05T17:40:18.782Z · score: 31 (16 votes) · EA · GW

In one key way, this post completely misses the point.

The post makes a number of very good points about systemic change, but bases all of them on financial cost-effectiveness estimates. This is embedded in the language throughout, which discusses options that "outperformed GiveWell style charities", the "cost ... per marginal vote", lessons for "large-scale spending" or for a "small donor", etc.

I think one way the EA community has neglected systemic change is exactly this. Money is not the only thing that can be leveraged in the world to make change (and in some cases money is not a thing people can give).
I think this is some part of what people are pointing to when they criticise EA.

To be constructive, I think we should rethink cause prioritisation, but not from a financial point of view. Eg:
- If you have political power how best to spend it?
- If you have a public voice how best to use it?
- If you can organise activism what should it focus on?

(PS. Happy to support with money or time people doing this kind of research)

I think we could get noticeably different results. I think things like financial stability (hard to donate to but very important) might show up as more of a priority in the EA space if we start looking at things this way.

I think the EA community currently has a limited amount to say to anyone with power. For example:
• I met the civil servant with oversight of UK's £8bn international development spending who seemed interested in EA but did not feel it was relevant to them – I think they were correct, I had nothing to say they didn’t already know.
• Another case is an EA I know who does not have a huge amount to donate but has lots of experience in political organising and activism; I doubt the EA community provides them much useful direction.

It is not that the EA community does none of this, just that we are slow. It feels like it took 80,000 Hours a while to start recommending policy/politics as a career path, and it is still unclear what people should do once in positions of power. ( if doing some research on this for Government careers)

Overall a very interesting post. Thank you for posting.

I note you mention a "relative gap in long-termist and high-risk global poverty work". I think this is interesting. I would love it if anyone has the time to do some back-of-the-envelope evaluations of international development governance reform organisations (like Transparency International).

Comment by weeatquince_duplicate0-37104097316182916 on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-10-05T16:44:53.316Z · score: 10 (3 votes) · EA · GW

Tl;dr: This assumes a pure rate of time discounting. I am curious how well your analysis works for anyone who does not think that we should discount harms in the future simply by virtue of their being in the future.

This is super good research, and super detailed, and I am hugely impressed. I hope many, many people donate to Let's Fund and support you in this kind of research!!!


I enjoyed reading Appendix 3
• I agree with Pindyck that models of the social cost of carbon (SCC) require a host of underlying ethical decisions and so can be highly misleading.
• I don’t, however, agree with Pindyck that there is no alternative and that we might as well ignore this problem.

At least for the purposes of making decisions within the EA community, I think we can apply models but be explicit about what ethical assumptions have been made and how they affect the models' conclusions. Many people on this forum have a decent understanding of their ethical views and how those affect decisions, so being more explicit would support good cause prioritisation decisions by donors and others.

Of course this holds people on this forum to a higher standard of rigour than professional academic economists reach, so it should be seen as a nice-to-have rather than a default. But let's see what we can do...


My (very rough) understanding of climate analysis is that the SCC is very highly dependent on the discount rate.

(Appendix 3 makes this point. Also the paper you link to on SCC per country says "Discounting assumptions have consistently been one of the biggest determinants of differences between estimations of the social cost of carbon").

The paper you draw your evidence from seems to use a pure rate of time discounting of 1-2%. This basically assumes that future people matter less.
I think many readers of this forum do not believe that future people matter less than people today.
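To make the sensitivity concrete, here is a minimal sketch of how a pure rate of time preference changes the present value of a fixed future damage. The damage figure and horizon are made-up numbers for illustration, not taken from the paper.

```python
# Illustrative sketch only: the damage figure and horizon below are
# made-up numbers, not taken from the paper under discussion.

def present_value(damage, years, discount_rate):
    """Discount a damage occurring `years` from now back to today
    at a constant annual pure rate of time preference."""
    return damage / (1 + discount_rate) ** years

damage = 1_000_000  # hypothetical future damage, in dollars
years = 100         # occurring a century from now

for rate in [0.0, 0.01, 0.02]:
    pv = present_value(damage, years, rate)
    print(f"pure time discount {rate:.0%}: present value ≈ ${pv:,.0f}")
```

A 0% rate values the damage at its full $1,000,000, while a 2% rate shrinks the same damage roughly sevenfold (1.02^100 ≈ 7.2), which is why SCC estimates swing so much on this one choice.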

I do not know how much this matters for the analysis. A high social cost of carbon seems, from the numbers in your article, to make climate interventions of the same order of magnitude as, but slightly less effective than, cash transfers.

I also understand that estimates of the SCC are also dependent on the calculation of worst-case tail-end effects, and there is some concern among people in the x-risk research space that small chances of very catastrophic effects are ignored in climate economics. I do not know how much this matters either.

I could also imagine that many people (especially negative-leaning utilitarians) are more concerned with stopping the damage caused by climate change than impressed by the benefits of cash transfers.

I do not have answers to what effects these things have on the analysis. I would love to get your views on this.

Thank you for your work on this!!!

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-03T18:30:32.687Z · score: 4 (3 votes) · EA · GW


If I had to guess (and I feel uncomfortable doing so, as I am not really going on anything here but my gut), I would say that at entry level it is all pretty similar, but that an entry-level job in the civil service is likely slightly higher impact than an entry-level job as an MP's researcher – though the variation between jobs and MPs is likely more important. I think your personal expected value is dominated by the jobs you get later in your career rather than at entry level, so this is small on the scale of your career.

Value of information to the broader EA community is good, as is any other low-hanging-fruit benefits gained by being an early EA mover into a space.

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:40:13.278Z · score: 3 (2 votes) · EA · GW

Hi, I think the 80K advice is still fairly applicable. (Also, I don’t think it would be a second opinion, as my views were taken into account in that 80K article.)

I would probably put the diplomatic fast stream on par with the generalist one (although I am not very sure about this).

Also, do not forget that you can go for direct entry into a job: if you have a bit of experience (even a year or two), getting an SEO job (or higher) may well be preferable to the Fast Stream.

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:39:07.582Z · score: 2 (1 votes) · EA · GW

This image displays for me. I am not sure what I need to do to make it display properly for you or what has gone wrong. Can someone admin-y investigate?

Comment by weeatquince_duplicate0-37104097316182916 on UK policy and politics careers · 2019-10-02T09:32:13.552Z · score: 6 (4 votes) · EA · GW

There are maybe 40 people who are in the EA community currently in the UK civil service and none currently in politics. I think most people I know would agree that it is comparatively more useful and more neglected for EAs to move towards politics.

I also think it is generally more impactful to do well in politics than to do well in the civil service, as ultimately politicians make the decisions. Although I do know EAs who would disagree with this and point out that people do not hold positions of political power for very long.

I think politics is more challenging: it is more competitive to do very well in. Also, if you want to go into politics you need to really commit to that path and spend your time engaged in party politics, whereas it is easier to move in and out of the civil service.

Comment by weeatquince_duplicate0-37104097316182916 on Campaign finance reform as an EA priority? · 2019-08-30T12:54:06.983Z · score: 8 (4 votes) · EA · GW

I have been thinking a fair bit about improving institutional decision making practices. I buy the argument that if you fix systems you can make a better world and that making systems that can make good decisions is super important.

There are many things you might want to change to make systems work better. [1]

I am outside the US, really do not understand the US system, and certainly do not know of any good analysis on this topic (any comments should be taken with that in mind), but my weak outside view is that campaign financing is the biggest issue with US politics.

As such this seems to me to be plausibly the most important thing for EA folk to be working on in the world today. I am happy to put my money where my mouth is and support (talk to, low level fund, etc) people to do an "EA-style analysis of US campaign finance reform".


Comment by weeatquince_duplicate0-37104097316182916 on AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements. · 2019-08-27T14:23:00.552Z · score: 6 (3 votes) · EA · GW

Thank you for the useful feedback: Corrected!

Comment by weeatquince_duplicate0-37104097316182916 on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-20T22:16:32.433Z · score: 20 (11 votes) · EA · GW


Similar to not costing others' work, you can end up in situations where the same impact is counted multiple times across all the charities involved, giving an inflated picture of the total impact.

Eg. if Effective Altruism (EA) London runs an event and this leads to an individual signing the Giving What We Can (GWWC) pledge and donating more to charity, then EA London, GWWC, and the individual may each take 100% of the credit in their impact measurement.
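As a toy illustration of the double counting (all numbers invented for the example):

```python
# Toy example: three actors each claim full credit for the same
# £1,000/year of counterfactual donations caused by one event.
actual_impact = 1000  # hypothetical real additional donations, in £

claimed = {
    "EA London": actual_impact,  # "our event caused the pledge"
    "GWWC": actual_impact,       # "our pledge caused the giving"
    "donor": actual_impact,      # "I did the giving"
}

total_claimed = sum(claimed.values())
print(total_claimed)  # prints 3000 – three times the real impact
```

Summed naively across everyone's impact reports, the movement appears three times as effective as it actually was.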

Comment by weeatquince_duplicate0-37104097316182916 on Age-Weighted Voting · 2019-08-01T23:03:35.306Z · score: 3 (2 votes) · EA · GW

Also, I do plan to write this up as a top-level post soon.

Comment by weeatquince_duplicate0-37104097316182916 on Age-Weighted Voting · 2019-08-01T22:31:41.005Z · score: 38 (10 votes) · EA · GW

This is an interesting suggestion that I had not come across before, and it is great to have people thinking of innovative new policy ideas. I agree that this idea is worth investigating.

I think my main point to add is just to set out the wider context. It is worth people who are interested in this being aware that there is already a vast array of tried and tested policy solutions that are known to encourage more long-term thinking in governments. I would lean towards the view that almost all of the ideas I list below have very strong evidence of working well, would be much easier to push for than age-weighted voting, and would have a bigger effect size than age-weighted voting.

Here's the list (with an example of evidence that each helps in brackets):

* Longer election cycles (UK compared to Aus)
* A non-democratic second house (UK House of Lords)
* Having a permanent neutral civil service (as in UK)
* An explicit statement of policy intent setting out a consistent cross-government view that policy makers should think long-term.
* A formal guide to best practice on discounting or on how to make policy that balances the needs of present and future generations. (UK Treasury Green Book, but more long term focused)
* An independent Office for Future Generations, or similar, with a responsibility to ensure that Government is acting in a long term manner. (as in Wales)
* Independent government oversight bodies (UK's National Audit Office, but more long term focused)
* Various other combinations of technocracy and democracy, where details are left to experts. (UK's Bank of England, Infrastructure Commission, etc, etc)
* A duty on Ministers to consider the long term. (as in Wales)
* Horizon scanning and foresight skills, support, tools and training brought into government (UK Gov Office for Science).
* Risk management skills, support, tools and training brought into government (this must happen somewhere right?).
* Good connections between academia and science and government. (UK Open Innovation Team)
* A government body that can support and facilitate others in government with long term planning. (UK Gov Office for Science, but ideally more long term focused).
* Transparency of long term thinking. Through publication of statistics, impact assessments, etc (Eg. UK Office for National Statistics)
* Additional democratic oversight of long term issues (UK parliamentary committees)
* Legislatively binding long term targets (UK's climate change laws)
* Rules forcing Ministers to stay in position longer (untested to my knowledge)
* Being a dictatorship (China; it does work, although I don’t recommend it)

I hope to find time to do more work to collate suggestions and the evidence for them and to do a thorough literature review.
(If anyone wants to volunteer to help then get in touch.)

As an aside, I have a personal little bugbear with people focusing on the voting system when they try to think about how to make policy work. It is a tiny, tiny part of the system, and one where evidence of how to do it better is often minimal and tractability to change is low. I have written about this elsewhere.

Also, my top tip for anyone thinking about tractable policy options is to start by asking: do we already know, from existing policy best practice, how to make significant steps towards solving this problem? (I think in this case we do.)

Comment by weeatquince_duplicate0-37104097316182916 on GCRI Call for Advisees and Collaborators · 2019-06-05T22:09:17.472Z · score: 6 (4 votes) · EA · GW

Hi, I'm curious, what are the main aims, expectations and things you hope will come from this call out? Cheers

Comment by weeatquince_duplicate0-37104097316182916 on Jade Leung: Why Companies Should be Leading on AI Governance · 2019-05-17T11:37:19.772Z · score: 9 (9 votes) · EA · GW

Hi Jade. I disagree with you. I think you are making a straw man of "regulation" and ignoring what modern best practice regulation actually looks like, whilst painting a rosy picture of industry led governance practice.

Regulation doesn't need to be a whole bunch of strict rules that limit corporate actors. It can (in theory) be a set of high level ethical principles set by society and by government who then defer to experts with industry and policy backgrounds to set more granular rules.

These granular rules can be strict rules that limit certain actions; or can be 'outcome focused regulation' that allows industry to do what it wants as long as it is able to demonstrate that it has taken suitable safety precautions; or can involve assigning legal responsibility to key senior industry actors to help align the incentives of those actors. (Good UK examples include the HFEA and the ONR.)

This is not to say that industry cannot or should not take a lead on governance issues, but Governments can play a role of similar importance too.

Comment by weeatquince_duplicate0-37104097316182916 on Latest EA Updates for April 2019 · 2019-05-12T22:15:06.741Z · score: 9 (3 votes) · EA · GW

David. This is great.

Your newsletters (as well as the updates) also have a short story about what one person in the EA community is doing to make the world better. Why not include those here too?

Comment by weeatquince_duplicate0-37104097316182916 on How do we check for flaws in Effective Altruism? · 2019-05-06T21:18:06.193Z · score: 7 (4 votes) · EA · GW

I very much like the idea of an independent impact auditor for EA orgs.

I would consider funding or otherwise supporting such a project. Anyone working on this, get in touch...

One solution that happens already is radical transparency.

GiveWell and 80,000 Hours both publicly write about their mistakes. GiveWell have in the past posted vast amounts of their background working online. This level of transparency is laudable.

Comment by weeatquince_duplicate0-37104097316182916 on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-06T16:16:46.989Z · score: 4 (4 votes) · EA · GW

There is a very obvious upside to sleeping less: when you are not asleep you are awake and when you are awake you can do stuff.

On a very quick glance, the economic analysis referenced above (and the quotes from Why Sleep Matters) seems to ignore this. If, as Khorton says, a person is missing sleep to raise kids or work a second job, then this benefits society.

This omission makes me very sceptical of the analysis on this topic.

Comment by weeatquince_duplicate0-37104097316182916 on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-04-30T19:04:38.398Z · score: 21 (10 votes) · EA · GW

Just to note that there's been some discussion of this on Facebook.

Comment by weeatquince_duplicate0-37104097316182916 on Announcing EA Hub 2.0 · 2019-04-13T13:33:58.579Z · score: 8 (3 votes) · EA · GW

This is amazing. Great work by everyone who contributed. I was thinking that a possible future feature (although perhaps not a priority) would be integration with the EA Funds donation tracking and maybe LinkedIn profile data.

Comment by weeatquince_duplicate0-37104097316182916 on Can my filmmaking/songwriting skills be used more effectively in EA? · 2019-04-09T14:01:53.394Z · score: 9 (6 votes) · EA · GW

Your videos are great.

I am sure there is space for content creators to have a powerful impact on the world. I am not entirely sure how, but I did want to flag that the Long Term Future EA Fund has just given a $39,000 grant to a video producer.

Maybe get in touch or look into what was successful there (I get the impression they found an important area where there was otherwise a lack of good video content).

Comment by weeatquince_duplicate0-37104097316182916 on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-21T18:36:40.772Z · score: 3 (2 votes) · EA · GW

Awesome post.

Suggestion: I have found in-person feedback to be useful alongside surveys. I suggest making a bit of effort to talk to people in person, especially friends you see anyway, and including this data in a final impact estimate.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-21T11:43:01.299Z · score: 26 (9 votes) · EA · GW

There are maybe 100+ other steps to policy that are just as important. In rough chronological order, I started listing some of them below (I got bored part way through and stopped at what looks like 40 points).

I have aimed for all of these issues to be at a roughly similar order of magnitude of importance. The scale of these issues will vary from country to country, and the tractability of trying to change them will vary with time and from individual to individual.

Overall I would say that voting reform is not obviously more or less important than the other 100+ things that could be on this list (although I guess it is often likely to be somewhere in the top 50% of issues). There is a lot more uncertainty about what the best voting mechanisms look like than about many of the other issues on the list. It is also an issue that may be hard to change compared to some of the others.

Either way voting reform is a tiny part of an incredibly long process, a process with some huge areas for improvements in other parts.


  • constitution and human rights and setting remits of political powers to change fundamental structures of country
  • devolution and setting remits of central political powers verses local political bodies
  • term limits


  • electoral commission body setting or adjusting borders of voting areas / constituencies
  • initial policy research by potential candidates (often with very limited resources)
  • manifesto writing (this is hugely important to set the agenda and hard to change)
  • public / parties choosing candidates (often a lot of internal party squabbling behind the scenes)
  • campaign fundraising (maybe undue influences)
  • campaigning and information spreading (maybe issues with false information)
  • tackling voter apathy / engagement
  • Voting mechanism
  • coalition forming (often very untransparent)
  • government/leader assigns topic areas to ministers / seniors (very political, evidence that understanding a topic is inversely proportional to how long a minister will work on that topic)


  • hiring staff into government (hiring processes, lack of expertise, diversity issues)
  • how staff in government are managed (values, team building, rewards, progression, diversity)
  • how staff in government are trained (feedback mechanisms, training)


  • splitting out areas where political leadership is needed and areas where technocratic leadership is needed
  • designing clear mechanisms of accountability to topics so that politicians and civil servants are aware of what their responsibilities are and can be held to account for their actions (this is super important)
  • ensuring political representation so each individual has direct access to a politician who is accountable for their concerns
  • putting in place systems that allow changes to the system if an accountability mechanism is not working
  • ensuring accountability for unknown unknown issues that may arise
  • how poor performance of political and civil staff is addressed (poor performance procedures, whistleblowing)
  • how corruption is rooted out and addressed (yes there is corruption in developed countries)
  • mechanisms to allow parties / populations to kick out bad leaders if needed
  • ensuring mechanisms for cross-party dialogue and that the partisanship of politics does not lead to distortions of truth


  • carrying out research to understand what the policy problems are (often unclear how to do this)
  • understanding what the population wants (public often ignored, need good procedures for information gathering, public consultation, etc)


  • Development of policy options to address problems
  • Mechanisms for Cost Benefit Analysis and Impact Assessments to decide best policy options
  • access to expert advice and best practice (lack of communication between academia and policy)
  • measuring impact of a policy proposal once in place (ensuring that mechanisms to measure impact are initiated at the very start of the policy implementation)
  • actually using information on
  • how politicians are allowed to change their mind given new evidence (updating is often seen as weakness)
  • mechanisms to ensure issues that are not politically immediately necessary are tackled (lack of long term thinking)







  • flexibility to deal with shocks of every step of the above process (often lacking)
  • transparency of every step of the above process (often lacking)
Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-03-06T08:59:39.665Z · score: 2 (1 votes) · EA · GW

Another thing to consider is that, given climate modelling is so imprecise and regularly flawed, our models may be wrong and the risk significantly different than predicted.

(Similar to some of Toby's stuff on the Large Hadron Collider risks.)

This could go both ways.

Comment by weeatquince_duplicate0-37104097316182916 on Introducing GPI's new research agenda · 2019-03-06T08:41:37.900Z · score: 16 (5 votes) · EA · GW

This is really really impressive. An amazing collection of really important questions.

POSITIVES. I like the fact that you intend to research:
* Institutional actors (2.8). Significant changes to the world are likely to come through institutional actors, and the EA community has largely ignored them to date. The existing research has focused so much on the benefits of marginal donations (or marginal research) that our views on cause prioritisation cannot easily be applied to states. As someone into EA who is in the business of influencing states, I see this as a really problematic oversight of the community to date, one that we should look to fix as soon as possible.
* Decision-theoretic issues (2.1)
* The use of discount rates. This is practically useful for decision makers.

OMISSIONS. I did, however, note a few things that I would have expected to be included that are not mentioned in this research agenda. In particular, there was no discussion of:
* Useful models for thinking and talking about cause prioritisation. In particular, the scale, neglectedness, and tractability framework is often used and often criticised. What other models can or should be used by the EA community?
* Social change. Within section 1 there is some discussion of broad versus narrow future-focused interventions, so I would have expected a similar discussion in section 2 of social change interventions versus targeted interventions in general. This was not mentioned.
* Which risks to the future are most concerning. (Although I assume this is because those topics are being covered by others, such as FHI.)

Like I said above, I think the questions within 2.8 are really important for EA to focus on. I hope that the fact it is low on the list does not mean it is not prioritised.
I also note that there is a sub-question in 2.8 on "what is the best feasible voting system". I think this issue comes up too much and is often a distraction. It feels like a minor sub-part of the question "what is the optimal institution design", which people gravitate to because it is the most visible part of many political systems, but it is really unlikely to be the thing on the margin that most needs improving.

I hope that helps, Sam

Comment by weeatquince_duplicate0-37104097316182916 on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T08:09:54.487Z · score: 28 (15 votes) · EA · GW

CEA run the EA community fund to provide financial support to EA community group leaders.

The key metric that CEA uses for evaluating the success of the groups they fund is the number of people from each local group who reach the interview stage for high-impact jobs, which largely means jobs within EA organisations. Bonus points are available if they get the job.

This information feels like a relevant piece of the puzzle for anyone thinking through these issues. It could be that, in hindsight, CEA pushing chapter organisers to push people to focus on jobs in EA organisations might not be the best strategy.

Comment by weeatquince_duplicate0-37104097316182916 on Tactical models to improve institutional decision-making · 2019-01-13T23:43:49.160Z · score: 3 (2 votes) · EA · GW

I found this article unclear about what you mean when you say "improving institutional decision making" (in policy). I think we can break this down into two very different things.

A: Improving the decision-making processes and systems of accountability that policy institutions use, so that these institutions will more generally be better decision makers. (This is what I have always meant and understood by the term "improving institutional decision making", and what Jess talks about in her post you link to.)

B: Having influence in a specific situation on the policy making process. (This is basically what people tend to call "lobbying" or sometimes "campaigning".)

I felt that the DFID story and the three models were all focused on B: lobbying. The models were useful for thinking about how to do B well (assuming you know better than the policy makers what policy should be made). Theoretical advice on lobbying is a nice thing to have* if you are in the field (so thank you for writing these up; I may give them some thought in my upcoming work). And if you are trying to change A, it would be useful to understand how to do B.

The models were not very useful for advising on how to do A: improving how institutions work generally. And A is where I would say the value lies.

I think the main point is just about how easy the article was to read. I found the article itself very confusing as to whether you were talking about A or B at many points.

*Also, in general, I think the field of lobbying is, as one might say, "more of an art than a science", and although a theoretical understanding of how it works is nice, it is not super useful compared to experience in the field in the specific country that you are in.

Comment by weeatquince_duplicate0-37104097316182916 on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T23:15:02.558Z · score: 3 (2 votes) · EA · GW

I would be curious about any views or research you may have done into geoengineering risk?

My understanding is that climate change is not itself an existential risk but that it may lead to other risks (such as war, which Peter Hurford mentions). One other risk is geoengineering: humanity starts thinking it can control planetary temperatures and makes a mistake (or the technology is used maliciously), and that presents a risk.

Comment by weeatquince_duplicate0-37104097316182916 on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-13T23:08:37.853Z · score: 7 (6 votes) · EA · GW

Just to flag that the case for this is much much weaker outside the USA.

The matching limits for donations outside the US are much lower, and you may also lose the tax benefits of donating.


Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-09-22T07:20:02.197Z · score: 6 (2 votes) · EA · GW

Hi Kerry, thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked, so it is not perfect. Please correct anything I have misremembered.



~ ~ Setting the scene ~ ~

  • CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research and a community that continues to work out how to do the most good. (We both agreed this.)
  • There is a difference between “cause impartiality”, as defined above, and “actual impartiality”, not having a view on what causes are most important. (There was some confusion but we got through it)
  • There is a difference between long-termism as a methodology, where one considers the long-run future impacts of actions (which CEA should 100% promote), and long-termism as a conclusion that the most important thing to focus on right now is shaping the long-term future of humanity. (I asserted this, not sure you expressed a view.)
  • A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to what causes are most important. They may have different skills to apply or different ethics (and we are far away from solving ethics if such a thing is possible). (I asserted this, not sure you expressed a view.)



~ ~ Create space, build trust, express a view, do not be perfect ~ ~

  • The EA community needs to create the right kind of space so that people can reach their own decisions about what causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached, more likely to come to the correct answer for themselves, and EA is more likely to come to a correct answer overall. To do this they need good tools and resources, and to feel that the space they are in is neutral. This needs trust...

  • Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction, they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA’s resources and CEA should be taken very seriously.

  • Creating that space does not mean you cannot also express a view; you just want to distinguish when you are doing so. You can create cause prioritisation resources and tools that are truly neutral but still have a separate section on what answers CEA staff reach, or what CEA’s answer is.

  • Perfection is not required as long as there is trust and the system is not breaking down.

  • For example, from policy advice: I gave the example of a civil servant writing advice to a Government Minister on a controversial political issue. The first ~85% of this imaginary advice is an impartial summary of the background and the problem, followed by a series of suggested actions with evaluations of their impact. The final ~15% is a recommended action based on the civil servant’s view of the matter. The important thing here is that there generally is trust between the Minister and the Department that advice will be neutral, and that in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It does not need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust that does not matter. And there is a recommendation, which the Minister can choose to follow or not. In many cases the Minister will follow it.



~ ~ How this goes wrong ~ ~

  • Imagine someone who has identified cause X, which is super important, comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off or is persuaded that the current EA cause is more important and forgets about cause X.

  • I mentioned some of the things that damage trust (see the foot of my previous comment).

  • You mentioned you had seen signs of tribalism in the EA community.



~ ~ Conclusion ~ ~

  • You said that you saw more value in CEA creating a space that was “actual impartial” as opposed to “cause impartial” than you had done previously.



~ ~ Addendum: Some thoughts on evidence ~ ~

Not discussed but I have some extra thoughts on evidence.

There are two areas of my life where much of what I have learned points towards the views above being true.

  • Coaching. In coaching you need to make sure the coachee feels like you are there to help them, not in any way pursuing your own agenda (where that differs from theirs).

  • Policy. In policy making you need trust and neutrality between Minister and civil servant.

There is value in following received wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (e.g. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find the sessions less useful.) Perhaps this deserves further study.

Also worth bearing in mind that there may be dissimilarities between what CEA does and the fields of coaching and policy.

Also worth flagging that the example of policy advice given above is somewhat artificial: some policy advice (especially where controversial) is like that, but much of it is just "please approve action x".

In conclusion, my views on this are based on very little evidence and a lot of gut feeling. My intuitions are strongly guided by my time doing coaching and giving policy advice.

Comment by weeatquince_duplicate0-37104097316182916 on Additional plans for the new EA Forum · 2018-09-16T01:08:48.125Z · score: 13 (13 votes) · EA · GW

Feature idea: If you co-write an article with someone being able to post as co-authors.

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-26T12:56:58.154Z · score: 3 (3 votes) · EA · GW

Hi Kerry, Some more thoughts prior to having a chat.


Is longtermism a cause?

Yes and no. The term is used in multiple ways.

A: Consideration of the long-term future.

It is a core part of cause prioritisation to avoid availability biases: to consider the plights of those we cannot so easily be aware of, such as animals, people in other countries and people in the future. As such, in my view, it is imperative that CEA and EA community leaders promote this.

B: The long-term cause area.

Some people will conclude that the optimal use of their limited resources is to put them towards shaping the far future. But not everyone, even after full rational consideration, will reach this view, nor should we expect such unanimity of conclusions. As such, in my view, CEA and EA community leaders can recommend that people consider this cause area, but should not tell people it is the answer.


Threading the needle

I agree with the 6 points you make here.

(Although interestingly I personally do not have evidence that “area allegiance is operating as a kind of tribal signal in the movement currently”)


CEA and cause-impartiality

I think CEA should be careful about how it expresses a view. Doing this in the wrong way could make it look like CEA is not cause impartial or not representative.

My view is to give recommendations and tools but not answers. This is similar to how we would not expect 80K to have a view on what the best job is (as it depends on an individual and their skills and needs) but we would expect 80K to have recommendations and to have advice on how to choose.

I think this approach is also useful because:

  • People are more likely to trust decisions they reach through their own thinking rather than conclusions they are pushed towards.

  • It handles the fact that everyone is different. The advice or reasoning that works for one person may well not make sense for someone else.

I think (as Khorton says) it is perfectly reasonable for an organisation to not have a conclusion.


(One other thought I had was on examples of actions by CEA or another movement-building organisation that would concern me: expressing certainty about an area (in internal policy or externally), basing impact measurement solely on a single cause area, hiring staff for cause-general roles based on their views of which causes are most important, attempting to push as many people as possible to a specific cause area, etc.)

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-16T07:27:48.117Z · score: 1 (1 votes) · EA · GW

Yes thanks. Edited.

Comment by weeatquince_duplicate0-37104097316182916 on CEA on community building, representativeness, and the EA Summit · 2018-08-15T23:50:28.751Z · score: 27 (29 votes) · EA · GW

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


The policy is excellent but the challenge lies in implementation.

Firstly I want to say that this post is fantastic. I think you have got the policy correct: that CEA should be cause-impartial, but not cause-agnostic and CEA’s work should be cause-general.

However I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80000 Hours who are explicitly focusing much of their effort on the far future.

These are all quite subtle things, but collectively they give the impression that CEA is not cause impartial (that it is x-risk focused). Of course this is a difficult thing to get correct: it is difficult to say 'our staff members believe cause___ is important' (a useful factoid that should definitely be said) whilst also putting across a strong front of cause impartiality.


Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause impartial I think most of the solutions to this are around being super vigilant about how CEA comes across. Eg:

  • Have a clear internal style guide that sets out to staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause impartial (say EA Grants mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions split them off into a sub project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down an exact number, so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) have enough of a diversity of causes mentioned that the broader EA community does not feel misrepresented but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy' which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- ie. consulting publicly (or with trusted parties) wherever possible.

If anything getting these two things correct is more important than getting the exact percentage of your work to be representative.

Sam :-)


[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

Comment by weeatquince_duplicate0-37104097316182916 on EA Funds - An update from CEA · 2018-08-08T23:04:27.074Z · score: 0 (0 votes) · EA · GW

YAY <3

Comment by weeatquince_duplicate0-37104097316182916 on EA Funds - An update from CEA · 2018-08-08T11:17:18.090Z · score: 6 (6 votes) · EA · GW

Marek, well done on all of your hard work on this.

Separate from the managed funds, I really like the work that CEA is doing to help money be moved around the world to other EA charities. I would love to see more organisations on the list of places that donations can be made through the EA Funds platform, e.g. REG, Animal Charity Evaluators or Rethink Charity. Is this in the works?

Comment by weeatquince_duplicate0-37104097316182916 on Leverage Research: reviewing the basic facts · 2018-08-05T21:57:53.473Z · score: 32 (29 votes) · EA · GW

counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity ... I will let readers make their own calculations

Hi Geoff. I gave this a little thought and I am not sure it works. In fact it looks quite plausible that someone's EV (expected value) calculation on Leverage might actually come out as negative (ie. Leverage would be causing harm to the world).

This is because:

  • Most EA orgs calculate their counterfactual expected value by taking into account what the people in that organisation would be doing otherwise if they were not in that organisation and then deduct this from their impact. (I believe at least 80K, Charity Science and EA London do this)

  • Given Leverage's tendency to hire ambitious altruistic people and to look for people at EA events it is plausible that a significant proportion of Leverage staff might well have ended up at other EA organisations.

  • There is a talent gap at other EA organisations (see 80K on this)

  • Leverage does spend some time on movement building, but I estimate that this is a tiny proportion of its time, <5%, best guess 3% (based on having talked to people at Leverage and also on comparing your achievements to date to the apparent 100 person-years figure)

  • Therefore if the proportion of staff who could be expected to have found jobs at other EA organisations is above 3% (which seems reasonable), then Leverage is actually displacing EAs from productive action, so the total EV of Leverage is negative.

Of course this is all assuming the value of your research is 0, which is the assumption you set out in your post. Obviously in practice I do not think the value of your research is 0, and as such I think it is possible that the total EV of Leverage is positive*. I think more transparency would help here. Given that almost no research is available, I do think it would be reasonable for someone who is not at Leverage to give your research an EV of close to 0 and therefore conclude that Leverage is causing harm.
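This back-of-envelope argument can be sketched in a few lines of Python. All figures here (person-years, percentages) are illustrative assumptions for the sake of the example, not Leverage's actual data:

```python
# A hedged sketch of the counterfactual EV argument above.
# All figures are illustrative assumptions, not actual Leverage data.

def net_ev(staff_years: float,
           frac_movement_building: float,
           frac_counterfactual_ea: float,
           research_value: float = 0.0) -> float:
    """Net expected value in 'productive staff-year equivalents'.

    Direct value = time spent on movement building plus whatever the
    research is worth; displaced value = the output those staff would
    otherwise have produced at other EA organisations.
    """
    direct = staff_years * frac_movement_building + research_value
    displaced = staff_years * frac_counterfactual_ea
    return direct - displaced

# 100 person-years, ~3% spent on movement building, research valued at 0,
# and a (hypothetical) 10% of staff who would otherwise have worked at
# other EA orgs: the net EV comes out negative.
print(net_ev(100, 0.03, 0.10))
```

The sign flips exactly where the argument says it does: the net EV is negative whenever the counterfactual-staff fraction exceeds the movement-building fraction (given research valued at 0).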

I hope this helps and maybe explains why Leverage gets a bad rep. I am excited to see more transparency and a new approach to public engagement. Keep on fighting for a better world!

*sentence edited to better match views

Comment by weeatquince_duplicate0-37104097316182916 on Problems with EA representativeness and how to solve it · 2018-08-04T07:34:22.002Z · score: 15 (17 votes) · EA · GW

Hi Joey, thank you for writing this.

I think calling this a problem of representation is actually understating the problem here.

EA has (at least to me) always been a community that inspires, encourages and supports people to use all the information and tools available to them (including their individual priors, intuitions and sense of morality) to reach a conclusion about what causes and actions are most important for them to take to make a better world (and of course to then take those actions).

Even if 90% of experienced EAs / EA community leaders currently converge on the same conclusion as to where value lies, I would worry that a strong focus on that issue would be detrimental. We'd be at risk of losing the emphasis on cause prioritisation, arguably the most useful insight that EA has provided to the world.

  • We'd risk losing the ability to support people through cause prioritisation (coaching, EA or otherwise, should not pre-empt the answers or have ulterior motives)
  • we risk creating a community that is less able to switch to focus on the most important thing
  • we risk stifling useful debate
  • we risk creating a community that does not benefit from collaboration between people working in different areas
  • etc

(Note: probably worth adding that if 90% of experienced EAs / EA community leaders converged on the same conclusion on causes, my intuition would be that this is likely to be evidence of founder effects / group-think as much as evidence for that cause. I expect this is because I see a huge diversity in people's values and thinking, and a difficulty in reaching strong conclusions in ethics and cause prioritisation.)

Comment by weeatquince_duplicate0-37104097316182916 on Open Thread #39 · 2018-07-08T22:18:18.758Z · score: 0 (0 votes) · EA · GW

Hi, a little late, but did you get an answer to this? I am not an expert but can direct this to people in EA London who can maybe help.

My very initial (non-expert) thinking was:

  • this looks like a very useful list of how to mitigate climate consequences through further investment in existing technologies.

  • this looks like a list written by a scientist, not a policy maker. Where do diplomatic interventions, such as "subsidise China to encourage them not to mine as much coal", fall on this list? I would expect subsidies to prevent coal mining to be effective.

  • "atmospheric carbon capture" is not on the list. My understanding is that atmospheric carbon capture may be a necessity for mitigating climate change in the long run (by controlling CO2 levels), whereas everything else on this list is useful in the short-to-medium run but not strictly necessary.

Comment by weeatquince_duplicate0-37104097316182916 on EA Hotel with free accommodation and board for two years · 2018-06-21T23:23:30.900Z · score: 4 (4 votes) · EA · GW

Greg this is awesome - go you!!! :-D :-D

To provide one extra relevant reference class: I have let EAs stay for free / for donations at my place in London to work on EA projects, and on the whole I was very happy I did so. I think this is worthwhile and there is a need for it (with some caution as to both risky / harmful projects and well-intentioned free-riders).

Good luck registering as a CIO, which is not easy. Get in touch with me if you are having trouble with the Charity Commission. Note: you might need Trustees who are not going to live for free at the hotel (there are lots of rules against Trustees receiving any direct benefits from their charity).

Also if you think it could be useful for there to be a single room in London for Hotel guests to use for say business or conference attendance then get in touch.

Comment by weeatquince_duplicate0-37104097316182916 on How to improve EA Funds · 2018-04-05T23:46:57.812Z · score: 4 (4 votes) · EA · GW

For information. EA London has neither been funded by the EA Community Fund nor diligently considered for funding by the EA Community Fund.

In December EA London was told that the EA Community Fund was not directly funding local groups, as CEA would be doing that. (This seems to be happening, see:

Comment by weeatquince_duplicate0-37104097316182916 on Climate change, geoengineering, and existential risk · 2018-03-25T10:05:04.254Z · score: 0 (0 votes) · EA · GW

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)

Good point. Agreed. Had not considered this

I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions.

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: LHC also had natural analogues in atmospheric cosmic rays, I believe this was accounted for in FHI's work on the matter)


I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

Comment by weeatquince_duplicate0-37104097316182916 on Meta: notes on EA Forum moderation · 2018-03-23T18:54:35.914Z · score: 6 (6 votes) · EA · GW

Hi, can you give an example or two of an "announcement of a personal nature"? I cannot think of any posts that would fall into that category.


Comment by weeatquince_duplicate0-37104097316182916 on Climate change, geoengineering, and existential risk · 2018-03-23T18:50:45.641Z · score: 3 (2 votes) · EA · GW

My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.

The chance of this might be small but if you are worried about existential risks it should definitely be considered. (In fact I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises).

I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.

For a similar case see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: and I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.

Comment by weeatquince on [deleted post] 2018-03-23T18:36:21.728Z

In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts.

The definitions and explanations used here: and here: are, in my mind, better and more useful than the quote above for almost any situation I have been in to date.

Additional evidence for the above: I have a very vague memory of talking to Will about this and concluding that he had a slightly odd and quite broad definition of "welfarist", where "welfare" in this context just meant 'good for others' without any implication of fulfilling happiness / utility / preference / etc. This comes out in the linked paper, in the line: "if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ...." etc.

Comment by weeatquince_duplicate0-37104097316182916 on Policy prioritization in a developed country · 2018-03-11T23:06:31.548Z · score: 3 (3 votes) · EA · GW

This sounds like a really good project. You clearly have a decent understanding of the local political issues, a clear idea of how this project can map to other countries and prove beneficial globally, and a good understanding of how this plays a role in the wider EA community (I think it is good that this project is not branded as 'EA').

Here are a number of hopefully constructive thoughts to help you fine-tune this work. These may be things you thought about that did not make the post. I hope they help.




As far as I can tell the CCC seems to not care much about scenarios with a small chance of a very high impact, whereas on the whole the EA community does care about these scenarios. My evidence for this comes from the EA community's concern for the extreme risks of climate change ( and x-risks, whereas the CCC work on climate change that I have seen seems to have ignored these extreme risks. I am unsure why the discrepancy. (Many EA researchers do not use a future discount rate for utility; does CCC?)

This could be problematic in terms of the cause prioritisation research being useful for EAs, for building a relationship with this project and EA advocacy work, EA funding, etc, etc.




Sometimes the most important priorities will not be the ones that the public will latch onto. It is unclear from the post:

2.1 how you intend to find a balance between delivering the messages that are most likely to create change verses saying the things you most believe to be true. And

2.2 how the advocacy part of this work might differ from work that CCC has done in the past. My understanding is that to date the CCC has mostly tried to deliver true messages to an international policy maker audience. Your post however points to the public sentiment as a key driving factor for change. The advocacy methods and expertise used in CCC's international work are not obviously the best methods for this work.




For a prioritization research piece like this, I could imagine the researcher diving straight into the existing issues on the political agenda and prioritising between those based on some form of social rate of return. However I think there are a lot of very high-level questions that could be asked first, like:

  • Is it more important to prevent the government making really bad decisions in some areas, or to improve the quality of the good decisions?

  • Is it more important to improve policy, or to prevent a shift to harmful authoritarianism?

  • How important is it to set policy that future political trends will not undo?

  • How important is the acceptability of the suggested policy among policy makers / the public?

Are these covered in the research?

Also, to what extent will the research look at improving institutional decision making? To be honest I would genuinely be surprised if the conclusion of this project was that the most high-impact policies were those designed to improve the functioning / decision making / checks and balances of the government. If you can cut corruption and change how government works for the better, then the government will get more policies correct across the board in future. Is this your intuition too?


Finally, I would be interested to be kept up to date with this project as it progresses. Is there a good way to do this? Looking forward to hearing more.

Comment by weeatquince_duplicate0-37104097316182916 on Announcing Effective Altruism Community Building Grants · 2018-03-11T22:21:09.642Z · score: 2 (2 votes) · EA · GW

EA London estimated counterfactual "large behaviour changes" taken by community members. This includes taking the GWWC pledge and large career shifts (although a change to future career plans probably wouldn't cut it).

Comment by weeatquince_duplicate0-37104097316182916 on Why not to rush to translate effective altruism into other languages · 2018-03-09T15:24:45.698Z · score: 4 (4 votes) · EA · GW

My point was not trying to pick out policy interventions specifically. I think, more broadly, there is too often an attitude of arrogance among EAs who think that because they can do cause prioritisation better than their peers, they can also solve difficult problems better than experts in those fields. (I know I have been guilty of this at points.)


In policy, I agree with you that EA policy projects fall across a large spectrum from highly professional to poorly thought-out.

That said, I think that even at the better end of the spectrum there is a lack of professional lobbyists being employed by EA organisations and more of a do-it-ourselves attitude. EA orgs often prefer to hire enthusiastic EAs rather than expensive experts (which may be a totally legitimate approach; I have no strong view on the matter).

Comment by weeatquince_duplicate0-37104097316182916 on Where I am donating this year and meta projects that need funding · 2018-03-09T14:57:56.706Z · score: 2 (2 votes) · EA · GW

Unfortunately I do not have a single easily quotable source for this. Furthermore, it is not always clear cut: funding needs change with time, and additional funding might mean an ability to start extra projects (like EA Grants). However, unlike Rethink Charity or Charity Science Health, there is not a clear project that I can point to that will not get funded if CEA or 80K do not get more funding this year.

If you are donating in the region of £10k+ and are concerned that the larger EA orgs have less need for funding, I would say get in touch with them. They are generally happy to talk to donors in person and give more detailed answers (and my comment on this matter has been shaped by talking to people who have done this).

Comment by weeatquince_duplicate0-37104097316182916 on Why not to rush to translate effective altruism into other languages · 2018-03-05T19:32:32.623Z · score: 10 (12 votes) · EA · GW

Good article Ben!


I think similar risks arise when translating effective altruism to new domains or new audiences with particular expertise.

I've felt this when interacting with people looking to apply effective altruism ideas in policy. Such exercises should be approached with caution: you cannot just tell policy makers to use evidence (they have already heard about evidence) or to put all their resources into whatever looks most effective (that wouldn't work), etc.

Similarly I suspect there is something to the fact that I find EA materials have had limited acceptance among experts in international development.


I would go a step further and say that the aim should not solely be one of translating EA ideas but also of improving EA ideas. Currently EA is fairly un-diverse in terms of cultures, plurality of ethical views, academic background, etc. I think we can learn a lot from those we are trying to reach out to.


(Minor aside: I think mass outreach efforts done well have been and still are valuable, and this article underplays that.)

Comment by weeatquince_duplicate0-37104097316182916 on Announcing Effective Altruism Community Building Grants · 2018-02-25T12:23:26.540Z · score: 1 (3 votes) · EA · GW

how do you think this compares with an additional employee at a non-local EA org?

EA London estimated that in its first year with a paid staff member it had about 50% of the impact per £ invested of a more established EA organisation such as GWWC or 80K.

It is also worth bearing in mind that the non-monetary costs of an additional employee are higher than the non-monetary costs of a grant (e.g. training, management time, overheads, risks, opportunity costs).

Comment by weeatquince_duplicate0-37104097316182916 on Effective Volunteering · 2018-01-26T17:13:05.710Z · score: 2 (2 votes) · EA · GW

Awesome job! :-) Is it possible to see the list of the volunteering opportunities you found and considered?