Comment by michelle_hutchinson on EA grants available to individuals (crosspost from LessWrong) · 2019-02-09T18:35:04.039Z · score: 6 (4 votes) · EA · GW

Let's Fund has recently been set up to try to get funding for neglected and speculative projects in effective altruism. They seem to focus particularly on research. It could be worth reaching out to them about whether your project is the kind they'd be interested in fundraising for.

Comment by michelle_hutchinson on Simultaneous Shortage and Oversupply · 2019-02-03T20:33:38.415Z · score: 4 (2 votes) · EA · GW

In case you haven't come across it yet, the 80,000 Hours job board has a filter for jobs which can be done remotely, which you might find useful.

Comment by michelle_hutchinson on Requesting community input on the upcoming EA Projects Platform · 2018-12-10T22:38:07.911Z · score: 11 (4 votes) · EA · GW

It's always great to see interesting new projects like this to improve the EA community! There might also be lessons for the project from EA Ventures, which tried to coordinate between speculative EA projects and funders.

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-24T12:25:31.113Z · score: 4 (3 votes) · EA · GW

That's what I meant by 'though it turns out to be correct'. Sorry for being unclear.

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-24T11:31:36.918Z · score: 6 (4 votes) · EA · GW

I didn't downvote the comment, but it did seem a little harsh to me. I can easily imagine being forwarded a draft article, reading the note the person forwarding it wrote, then looking at the draft, without reading the text of the email they were originally sent. (Hence missing the text saying the draft was supposed to be confidential.) Assuming that Will read the part saying it was confidential seemed uncharitable to me (though it turns out to be correct). That seemed in surprising contrast to the understanding attitude taken to Julia's mistake.

Comment by michelle_hutchinson on Guide to Successful Community 1-1s · 2018-11-24T11:13:47.640Z · score: 10 (9 votes) · EA · GW

Thanks, this seems like a really useful guide!

One thing I find important in conversations, particularly if I'm doing them back to back, is writing down action points (eg people I want to introduce them to) as I go. People sometimes think it's rude to do this on a phone, so having a notebook with you is probably the best approach.

Something I struggle with is making sure that I build up enough rapport with a person quickly enough that they will feel comfortable pushing back on things, and in particular bringing up more socially awkward considerations (eg I've heard that effective altruists don't think it's particularly impactful to get a job doing x but I've been working towards that goal for years, and hate the idea of never getting to do it). I've found it pretty useful to watch people who are really good at getting on with others meet new people, and see what they do that makes people feel quickly at ease. Because I know this is a weak spot of mine, I try after some of my 1-1 conversations to think through whether there was anything in particular that went well/badly on this dimension (I waited a while for them to respond after saying y, rather than bulldozing on...; when I pushed back on z I accidentally got into 'philosophy debate' mode rather than friendly discussion mode). I also find reading books that get me to think through these kinds of dynamics useful: I've found 'The Charisma Myth' useful enough to have read it a couple of times, and right now I'm reading 'Never Split the Difference'. (A lot of these kinds of books sound like they'll be about getting your own way and persuading people into things they don't want to do, but they actually spend most of their time on how to make sure you properly hear and understand the person you're talking to, and help them feel at ease.)

Comment by michelle_hutchinson on Takeaways from EAF's Hiring Round · 2018-11-21T11:45:15.516Z · score: 18 (6 votes) · EA · GW

This seems to be something that varies a lot by field. In academic jobs (and PhD applications), it's absolutely standard to ask for references in the first round of applications, and to ask for as many as 3. It's a really useful part of the process, and since academics know that, they don't begrudge writing references fairly frequently.

Writing frequent references in academia might be a bit easier than when people are applying for other types of jobs: a supervisor can simply have a letter on file for a past student saying how good they are at research and send that out each time they're asked for a reference. Another thing which might contribute to academia using references more is it being a very competitive field, where large returns are expected from differentiating between the very best candidate and the next best. As an employer, I've found references very helpful. So if we expect EA orgs to have competitive hiring rounds where there are large returns on finding the best candidate, it could be worth our spending more time writing/giving references than is typical.

I find it difficult to gauge how off-putting asking for references early would be for the typical candidate. In my last job application, I gave a number of referees, some of whom were contacted at the time of my trial, and I felt fine about that - but that could be because I'm used to academia, or because my referees were in the EA community and so I knew they would value the org I was applying for making the right hiring decision, rather than experience giving a reference as an undue burden.

I would guess the most important thing in asking for references early is being willing to accept not getting a reference from current employers / colleagues, since if you don't know whether you have a job offer, you're often not going to want your current employer to know you're applying for other jobs.

Comment by michelle_hutchinson on Takeaways from EAF's Hiring Round · 2018-11-20T15:45:20.228Z · score: 5 (4 votes) · EA · GW

My impression is that while specifically *IQ* tests in hiring are restricted in the US, many of the standard hiring tests used there (eg Wonderlic https://www.wonderlic.com/) are basically trying to get at GMA (general mental ability). So I wouldn't say the outside view was that testing for GMA was bad (though I don't know what proportion of employers use such tests).

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-18T00:08:39.372Z · score: 6 (6 votes) · EA · GW

I agree with this take on the comment as it's literally written. I think there's a chance that Siebe meant 'written in bad faith' as something more like 'written with less attention to detail than it could have been', which seems like a very reasonable conclusion to come to.

(I just wanted to add a possibly more charitable interpretation, since otherwise the description of why the comment is unhelpful might seem a little harsh)

Keeping Absolutes in Mind

2018-10-21T22:40:49.160Z · score: 46 (24 votes)
Comment by michelle_hutchinson on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-12T08:55:22.603Z · score: 1 (2 votes) · EA · GW

I don't know how others answered this question, but personally I didn't answer based on how good I thought the last grants were relative to each other (ie, I wasn't comparing CfAR/MIRI to Founders Pledge) or in expectation of a changeover in grant maker. I was thinking about something like whether I preferred funding over the next 5 years to go to organisations which focused on the far future vs community building, knowing that these might or might not converge. I'd expect over that period a bunch of things to come up that we don't yet know about (in the same way that BERI did a year or so ago).

Comment by michelle_hutchinson on EA needs a cause prioritization journal · 2018-09-13T15:56:06.894Z · score: 20 (16 votes) · EA · GW

There do seem to be some strong arguments in favour of having a cause prioritisation journal. I think there are some reasons against too though, which you don't mention:

  • For work people are happy to do in sufficient detail and depth to publish, there are significant downsides to publishing in a new and unknown journal. It will get much less readership and engagement, as well as generally less prestige. That means if this journal is pulling in pieces which could have been published elsewhere, it will be decreasing the engagement the ideas get from other academics who might have had lots of useful comments, and will be decreasing the extent to which people in general know about and take the ideas seriously.

  • For early-stage work, getting an article to the point of being publishable in a journal is a large amount of work. Simply because of how people understand journal publishing to work, there's a much higher bar for publishing than there is on a blog. So the benefits of having things look more professional are actually quite expensive.

  • The actual work involved in setting up and running a journal, and doing it well enough to make sure that cause prioritisation as a field gains rather than loses credibility from it.

Making Organisations More Welcoming

2018-09-12T21:52:52.530Z · score: 35 (16 votes)
Comment by michelle_hutchinson on Concerns with ACE research · 2018-09-09T11:46:06.040Z · score: 14 (14 votes) · EA · GW

It sounds like AEF is doing a fantastic job of ensuring rigour in its messaging!

But we have to realize that when it comes to animal suffering, as far as I know ACE is the only game in town. In my opinion, this is a precarious state of affairs, and we should do our best to protect criticism of ACE, even when it does not come with the highest level of politeness.

I think in cases where there is little primary research, it's all the more important to ensure that discourse remains not merely polite, but friendly and kind. Research isn't easy at the best of times, and the animal space has a number of factors making it harder than others like global poverty (eg historic neglect and the difficulty of understanding experiences unlike our own). In cases like this where people are pushing ahead despite difficulty, it is all the more important to make sure that the work is actively appreciated, and at baseline that people do not end up feeling attacked simply for doing it. Criticisms that are framed badly can easily be worse than nothing, by leading those working in this area to think that their work isn't useful and they should leave the area, and by dissuading others from joining the area in the first place.

This makes me all the more grateful to John for being so thoughtful in his feedback - suggesting improvements directly to ACE in the first instance, running a public piece by them before publishing, and highlighting reasons for being optimistic as well as potential problems.

Good news that matters

2018-08-27T05:38:06.870Z · score: 22 (24 votes)
Comment by michelle_hutchinson on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T22:52:07.677Z · score: 17 (17 votes) · EA · GW

I'm Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI's largest donor, and Nick Beckstead was the program officer who made that grant decision.

I can't speak for other universities, but I agree with his assessment that Oxford's regulations make it much more difficult to use donations to get productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salary in order for them to be able to pay for more child care (since there is a standard pay scale). I therefore believe that the reason he gave for ruling out university-based grantees is the true reason, and one which is justified in at least some cases.

Comment by michelle_hutchinson on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-25T10:44:24.755Z · score: 16 (18 votes) · EA · GW

I don't know what others think about the qualifications needed/desired for this, but as a donor to these EA Funds, some of the reasons I'm enthusiastic to give to Nick's funds are:

  • His full-time day job is working out which organisations will do the most good over the long run (especially of those seeking to grow the EA movement), and how much funding they need.

  • He does that alongside extremely smart, well-informed colleagues with the same aims, giving him lots of opportunities to test and improve his views.

  • He has worked formally and informally in this area for coming up on ten years.

  • He's consistently shown himself to be smart, well-informed and to have excellent judgement.

I've been very grateful to be able to offload decisions about where/when/how to donate optimally to him, and I hope that if/when a new fund manager is found, they share at least some of the above qualities.

[Disclaimer: I used to work for CEA]

Comment by michelle_hutchinson on Effective Thesis project review · 2018-06-04T15:39:39.077Z · score: 5 (5 votes) · EA · GW

Thanks for writing a summary of your progress and learnings so far - it's so useful for the EA community to share its findings.

A few comments:

You might consider making the website more targeted. It seems best suited to undergraduate theses, so it would be useful to focus on that. For example, it might be valuable to increase the focus on learning. During your degree, building career capital is likely to be the most impactful thing you can do. Although things like building connections can be valuable for career capital, learning useful skills and researching deeply into a topic are the expected goals of a thesis, and so what most university courses give you the best opportunity to do. Choosing a topic which gives you the best opportunity for learning could mean, for example, thinking about which people in your department you can learn the most from (whether because they are the best researchers, or because they are likely to be the most conscientious supervisors), and what topic is of interest to them so that they'll be enthusiastic to work with you on it.

People in academia tend to be sticklers about writing style, so it could be worth getting someone to copy-edit your main pages for typos.

Coming up with a topic to research is often a very personal process that happens when reading around an area. So it could be useful to have a page linking to recommended EA research / reading lists, to give people an idea of where they could start if they want to read around in areas where ideas are likely to be particularly useful. For example you might link to this list of syllabi and reading lists Pablo compiled.

Comment by michelle_hutchinson on Triple counting impact in EA · 2018-05-31T10:35:57.477Z · score: 2 (2 votes) · EA · GW

I agree with you that impact is importantly relative to a particular comparison world, and so you can't straightforwardly sum different people's impacts. But my impression is that Joey's argument is actually that it's important for us to try to work collectively rather than individually. Consider a case of three people:

Anna and Bob each have $600 to donate, and want to donate as effectively as possible. Anna is deciding between donating to TLYCS and AMF, Bob between GWWC and AMF. Casey is currently not planning to donate, but if introduced to EA by TLYCS and convinced of the efficacy of donating by GWWC, would donate $1000 to AMF.

It might be the case that Anna knows that Bob plans to donate to GWWC, and therefore she's choosing between a case of causing $600 of impact or $1000. I take Joey's point not to be that you can't think of Anna's impact as being $1000, but to be that it would be better to concentrate on the collective case rather than the individual case. Rather than considering what her impact would be holding fixed Bob's actions ($1000 if she donates to TLYCS, $600 if she gives to AMF), Anna should try to coordinate with Bob and think about their collective impact ($1200 if they give to AMF, $1000 if they give to TLYCS/GWWC).

Given that, I would add 'increased co-ordination' to the list of things that could help with the problem. Given the highlighted fact that often multiple steps by different organisations are required to achieve particular impact, we should be thinking not just about how to optimise each step individually but also about the process overall.
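To make the arithmetic in the Anna/Bob/Casey example explicit, here is a minimal sketch (a Python illustration added for clarity, not part of the original example); the dollar figures and the assumption that Casey only donates if reached via both TLYCS and GWWC are the hypothetical ones given above.

```python
def money_to_amf(anna_gift: str, bob_gift: str) -> int:
    """Total dollars reaching AMF for a given pair of choices (hypothetical figures)."""
    total = 0
    if anna_gift == "AMF":
        total += 600  # Anna's own $600
    if bob_gift == "AMF":
        total += 600  # Bob's own $600
    # Casey donates $1000 to AMF only if both outreach steps happen.
    if anna_gift == "TLYCS" and bob_gift == "GWWC":
        total += 1000
    return total

# Anna's individual view, holding Bob fixed at GWWC:
print(money_to_amf("TLYCS", "GWWC"))  # 1000
print(money_to_amf("AMF", "GWWC"))    # 600

# The collective view, choosing both actions together:
print(money_to_amf("AMF", "AMF"))     # 1200, so coordinating on AMF does more good
```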

Comment by michelle_hutchinson on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-25T10:13:54.915Z · score: 0 (0 votes) · EA · GW

If you want to make something to randomise the text suggestions, you might be able to do it pretty quickly and easily with Guided Track. Personally, I think I would find it more helpful looking at the whole list than being given a random suggestion from it. If you wanted to give people that option without making it publicly available for free, you could put the list on the private and unsearchable Facebook group EA self help, with a request not to share.

Comment by michelle_hutchinson on How to improve EA Funds · 2018-04-08T22:14:15.874Z · score: 3 (3 votes) · EA · GW

This strikes me as making a false dichotomy between 'trust the grant making because lots of information is made public about its decisions' and 'trust the grant making because you personally know the re-granter (or know someone who knows someone etc)'. I would expect this is instead supposed to work in the way a lot of for-profit funds presumably work: you trust your money to a particular fund manager because they have a strong history of their funds making money. You don't need to know Elie personally (or know about how he works / makes decisions) to know his track record of setting up GiveWell and thereby finding excellent giving opportunities.

Comment by michelle_hutchinson on How much further does your dollar go overseas? · 2018-02-05T14:48:52.995Z · score: 2 (2 votes) · EA · GW

[Note: It is difficult to compare the cost effectiveness of developed country anti-smoking MMCs and developing country anti-smoking MMCs because the systematic review cited above did not uncover any studies based on a developing country anti-smoking MMC. The one developing country study that it found was for a hypothetical anti-smoking MMC. That study, Higashi et al. 2011, estimated that an anti-smoking MMC in Vietnam would result in one DLYG (discount rate = 3%) for every 78,300 VND (about 4 USD). Additionally, the Giving What We Can report that shows tobacco control in developing countries being highly cost effective is based on the cost-effectiveness of tobacco taxes, not the cost-effectiveness of anti-smoking MMCs, and the estimated cost-effectiveness of tobacco taxes is based on the cost to the government, not the cost to the organization lobbying for a tobacco tax.]

This report briefly discusses MMCs as well as tax increases. It mentions that MMCs in developing countries are likely to be much more effective than those in the UK, due to the comparatively far lower awareness of the harms of smoking in developing countries, and the far higher incidence of smoking there. I wonder if we could learn more about the potential efficacy of such campaigns by comparing them to campaigns to reduce road traffic injuries? My impression is that in the latter case there has been a bit more study done specifically in developing-world contexts.

Comment by michelle_hutchinson on EA #GivingTuesday Fundraiser Matching Retrospective · 2018-01-16T05:24:31.112Z · score: 4 (4 votes) · EA · GW

Thank you, this is a really useful write-up of what sounds like a great project.

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-21T17:02:42.226Z · score: 3 (3 votes) · EA · GW

Glad to hear you're finding it useful!

a) Yes, that's the plan

b) We haven't decided on our model yet. Right now, we have a number of full-time academics, a number of research associates who attend seminars and collaborate with the full-time crew, and research visitors coming periodically. Having researchers visit from other institutions seems useful for bringing in new ideas, getting to collaborate more closely than one could online, and having the visitors take back elements of our work to their home institutions. I would guess in future it would make sense to have at least some researchers who visit periodically, as well as people coming just as a one-off. But I couldn't say for sure at the moment.

c) Yes, we are. Behavioural economics is already something we've thought a little about. Our reason for not expanding into more subjects at the moment is the difficulty of building thoroughly interdisciplinary groups within academia. As a small example, GPI is based in the Philosophy Department at Oxford, which isn't ideal for hiring Economists, who would prefer to be based in the Economics department. Given that, and the close tie in the past between EA and philosophy, we see a genuine risk of GPI/EA being thought of as 'philosophy plus' rather than truly multi/interdisciplinary. For that reason, we're starting with just one other discipline, and trying to build strong roots there. At the same time, we're trying to remain cognisant of other disciplines likely to be relevant, and the work that's going on there. (As an example in psychology, Lucius Caviola has been publishing interesting work both on speciesism and on how to develop a better scale for measuring moral traits EAs might be interested in.)

d) The best source of information is our website. I do plan on putting occasional updates on the EA forum, but as our work output will largely be academic papers, we're unlikely to publish them on here.

Thanks for the heads up!

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-16T21:12:41.346Z · score: 1 (1 votes) · EA · GW

Yes, that's right. For the researcher roles, you would at least need to be in Oxford during term time. For the operations role, it would be important to be there for essentially the whole period.

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-15T12:15:53.347Z · score: 0 (0 votes) · EA · GW

Thanks for the heads up! I think this is a browser issue with the uni website. It actually works for me on Chrome and Edge, but others have found they don't work on Chrome, but do work on Safari. Would you mind trying a different browser and seeing if that works?

New releases: Global Priorities Institute research agenda and posts we’re hiring for

2017-12-14T14:57:22.838Z · score: 17 (17 votes)
Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-12T14:44:28.914Z · score: 2 (2 votes) · EA · GW

Will, you might be interested in these conversation notes between GiveWell and the Tax Justice Network: http://files.givewell.org/files/conversations/Alex_Cobham_07-14-17_(public).pdf (you have to copy and paste the link)

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T15:24:53.216Z · score: 3 (3 votes) · EA · GW

Thanks Tee.

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

I agree - my comment was in the context of the incorrect graph; given the corrected one, the emphasis on poverty seems warranted.

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T09:00:44.955Z · score: 3 (3 votes) · EA · GW

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But what the paragraph I quoted says is 'favoring a cause area outlier' - so 'outlier' is picking out AI amongst the causes people think are important. Saying that the Bay favours AI, which is an outlier amongst the causes people favour, is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seems to support the latter but not the former.

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-04T14:40:19.048Z · score: 7 (7 votes) · EA · GW

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that long run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Although you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, and most who picked AI / non-AI FF picked both?

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near-top cause), and particularly misleading given the differences made here by choices of aggregation. (Ie you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay'.)

Writing on my own behalf, not my employer's.

Comment by michelle_hutchinson on How should we assess very uncertain and non-testable stuff? · 2017-08-25T09:44:28.241Z · score: 1 (1 votes) · EA · GW

If you haven't come across it yet, you might like to look at the Back of the Envelope Guide to Philanthropy, which tries to estimate the value of some really uncertain stuff.

Comment by michelle_hutchinson on The marketing gap and a plea for moral inclusivity · 2017-07-19T13:47:04.193Z · score: 3 (3 votes) · EA · GW

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as ‘EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse’. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life’s work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren’t held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on the actions you’re allowed to take to increase welfare. Most people think there are some limits on what we’re allowed to do to others in order to benefit others. Most philosophers believe there are some deontological principles / agent-centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Comment by michelle_hutchinson on The marketing gap and a plea for moral inclusivity · 2017-07-12T11:13:17.002Z · score: 9 (11 votes) · EA · GW

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

Comment by michelle_hutchinson on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-31T10:27:37.884Z · score: 11 (11 votes) · EA · GW

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition that they claim they have (either that they don't really hold that intuition or that they are wrong to do so). Eg: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers. Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments. Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say 'Your intuition that A is ludicrous', they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Comment by michelle_hutchinson on What the EA community can learn from the rise of the neoliberals · 2016-12-08T15:57:23.974Z · score: 4 (4 votes) · EA · GW

That's a horrible story!

Comment by michelle_hutchinson on Why Animals Matter for Effective Altruism · 2016-08-24T10:41:36.252Z · score: 5 (7 votes) · EA · GW

Recognizing the scale of animal suffering starts with appreciating the sentience of individual animals — something surprisingly difficult to do given society’s bias against them (this bias is sometimes referred to as speciesism). For me, this appreciation has come from getting to know the three animals in my home: Apollo, a six-year-old labrador/border collie mix from an animal shelter in Texas, and Snow and Dualla, two chickens rescued from a battery cage farm in California.

I wonder if we might do ourselves a disservice by making it sound really controversial / surprising that animals are thoroughly sentient? It makes it seem more ok not to believe it, but I think it can also come across as patronising / strange to interlocutors. I've in the past had people tell me they're 'pleasantly surprised' that I care about animals, and ask when I began caring about animal suffering. (I have no idea how to answer that - I don't remember a time when I didn't.) This feels to me somewhat similar to telling someone who doesn't donate to developing countries that you're surprised they care about extreme poverty, and asking when they started thinking that it was bad for people to be dying of malaria. On the one hand, it feels like a reasonable inference from their behaviour. On the other hand, for almost everyone we're likely to be talking to it will be the case that they do in fact care about the plight of others, and that their reasons for not donating aren't lack of belief in the suffering, or lack of caring about it. I would guess that would be similar for most of the people we talk to about animal suffering: they already know and care about animal suffering, and would be offended to have it implied otherwise. This makes the case easier to make, because it means we're already approximately on the same page, and we can start talking immediately about the scale and tractability of the problem.

Comment by michelle_hutchinson on Making EA groups more welcoming · 2016-07-29T22:05:12.620Z · score: 2 (2 votes) · EA · GW

Thanks Julia, this is an awesome resource! I'm really grateful for these kinds of super specific suggestions.

Comment by michelle_hutchinson on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-26T18:01:09.445Z · score: 2 (2 votes) · EA · GW

Hey Sam, For people who choose to let us decide where the money goes, the next payout (Oct) will be the same as before (1/4 each to SCI, AMF, DWI, PHC), and the one after that (Jan) will be based on the allocation GW recommends in its Dec update. I expect we will continue allowing donations to the charities the Trust has given to in the past (eg PHC, IPA), but that the default charities suggested for donations will be the ones GW lists as top charities.

Comment by michelle_hutchinson on Giving What We Can is Cause Neutral · 2016-04-27T15:51:51.923Z · score: 1 (1 votes) · EA · GW

Hi Richard, Thanks for your comments. Sorry to have been unclear - there isn't a major rebranding planned. The changed vision should be thought of more as clarifying what lies at the heart of GWWC and what makes it unique. In large part, the reason for doing it is to further focus the team, rather than to change anything for others. It doesn't mean that we plan to move away from working most on extreme poverty (for the reasons outlined in my more recent blog post). Ending extreme poverty is still a major focus for us (as it is for many EAs), but we wanted a vision that articulated why we work on that, and encapsulated the other things we care about. I am planning to write a blog post about our vision on the GWWC blog in May - I'm glad that seems like a helpful thing to do. Michelle

Comment by michelle_hutchinson on Giving What We Can is Cause Neutral · 2016-04-25T12:49:48.302Z · score: 1 (1 votes) · EA · GW

I think even if there's no tension, there could still be an open question about how you think your actions generate value. For example, cause-neutral-Jeff could be donating to AMF because he thinks it's the charity with the highest expected value per $, or because he's risk-averse and thinks it's the best choice if you're going for a trade-off between expected value and low variance in value per $, or because he wants to encourage other charities to be as transparent and impact-focused as AMF. So although it's not surprising that cause-neutral-Jeff focuses his donations on just one charity, and that it's AMF, it's still interesting to hear the answer to 'why does he donate to AMF?'.

But I agree, it's difficult not to slide between definitions on a concept like cause neutrality, and I'm sorry I'm not as clear as I'd like to be.

Comment by michelle_hutchinson on Giving What We Can is Cause Neutral · 2016-04-24T21:39:27.825Z · score: 1 (1 votes) · EA · GW

Hi Richard, I'm sorry it's rather confusing at the moment, and thank you so much for all the work you do with the GWWC/EA Calgary chapter. I'm hoping my more recent post on the Forum might help bring some clarity. I think part of the reason it's particularly confusing at the moment is that our website has been undergoing some changes, so the page with our mission/vision/values is currently not up. We've also, as Jon mentioned, been clarifying what GWWC is fundamentally about, including whether we are necessarily an organisation which focuses primarily on poverty or only contingently so (it's the latter).

These are our vision/mission/values:

Our Vision

A world in which giving 10% of our income to the most effective organisations is the norm

Our Mission

Inspire donations to the world’s most effective charities

Our Values

We are a welcoming community, sharing our passion and energy to improve the lives of others.

We care. We have a deep commitment to helping others, and we are dedicated to helping other members of our community give more and give better.

We take action based on evidence. We apply rigorous academic processes to develop trustworthy research to guide our actions. We are open-minded towards new approaches to altruism that may show greater effectiveness. We are honest when it comes to what we don't know or mistakes we have made.

We are optimistic. We are ambitious in terms of the change we believe we can create. We apply energy and enthusiasm to support and build our community.

All the best, Michelle

Why Poverty?

2016-04-24T21:25:53.942Z · score: 11 (11 votes)
Comment by michelle_hutchinson on Giving What We Can is Cause Neutral · 2016-04-24T19:42:43.765Z · score: 0 (0 votes) · EA · GW

Hi David,

It doesn’t seem problematic to me to say that a person or organisation could be cause-neutral but currently focused on just one area. If that weren’t the case, the only people who would count as cause neutral would be those working on / donating to cause prioritisation itself. That seems like a less useful concept to me than the one I tried to carve out (though equally plausible as a way of understanding ‘cause neutral’). One way to frame my understanding of cause neutrality is that what matters is not whether a person/organisation is currently focused on one area, but whether they’d be willing to switch to focusing on a different area if they became persuaded it would be more effective to do so.

There’s also the difference between an individual and an organisation being cause neutral. It’s very plausible that a cause-neutral individual could work for an organisation that isn’t cause neutral. It even seems plausible that an organisation might not be cause neutral, while being staffed entirely by people who are cause neutral. That would be true, on my understanding, if it were the case that those individuals would be willing to pivot away from working on that cause if it turned out not to be the best, but wouldn’t do so by pivoting the organisation (rather by closing it down, or finding others to staff it). On this understanding, Giving What We Can is both run by individuals who are cause neutral, and (separately) is cause neutral as an organisation.

Comment by michelle_hutchinson on Giving What We Can is Cause Neutral · 2016-04-22T23:51:15.945Z · score: 2 (2 votes) · EA · GW

Thanks Claire, I'm really glad it was helpful. That's what the follow-up posts will be about! I have a tendency to splurge onto a page, and was advised to cut the piece into several posts - hence not having answered that yet.

Giving What We Can is Cause Neutral

2016-04-22T12:54:14.312Z · score: 10 (12 votes)
Comment by michelle_hutchinson on What is up with carbon dioxide and cognition? An offer · 2016-04-06T08:50:21.345Z · score: 2 (2 votes) · EA · GW

But I promise I am generally a nice guy.

I'm very happy to vouch for this ;-)

Thanks Paul, nice idea. Look forward to reading what people come up with!

Review of Giving What We Can staff retreat

2016-03-21T16:31:02.923Z · score: 8 (8 votes)
Comment by michelle_hutchinson on Using Breaking News Stories for Effective Altruism · 2016-03-19T15:50:30.731Z · score: 1 (1 votes) · EA · GW

Stefan - Alison Woodman is one of the mods; you can email her about spam/trolling.

Comment by michelle_hutchinson on Effective Altruism, Environmentalism, and Climate Change: An Introduction · 2016-03-14T12:14:07.580Z · score: 2 (2 votes) · EA · GW

The Giving What We Can report on Climate Change can be found here, in case you're interested.

Comment by michelle_hutchinson on Giving What We Can's 6 monthly update · 2016-03-07T11:54:36.789Z · score: 2 (2 votes) · EA · GW

On the last two points:

People to ask advice from

We’d hope to find advisors in both these areas, but we’re particularly interested in advocacy within development and research (where knowledge is fairly easily transferable – our outreach differs more from what others are doing). For example, over the last couple of weeks we added someone to our advisory board whose PhD was on schistosomiasis and who is about to take up a position working on a Gates Foundation project to eliminate STHs, and connected with the person who runs the Emerging Policy, Innovation and Capability unit at the UK Department for International Development.

HNW advising

We’ve started this by supporting a large donor who had previously been assessing projects entirely independently, and by working with the Founders’ Pledge. In the former case, it seems we have fairly good reason to think that we have been of genuine help to the person in determining which of the high-impact projects considered were the best. As a HNW donor with good connections, he has various projects open to him that aren’t open to other donors, and which smaller and more risk-averse donors wouldn’t be interested in. He has said that he has found our advice useful.

In the latter case, we produced reports like this one on mental health for donors who want to give partly within restricted areas. This collaboration is in its early stages, so we don’t yet know to what extent donors will follow our recommendations. But so far donors have seemed grateful for the recommendations and interested in the research, and it seems likely they will follow them. If that’s the case, it would mean money going to our top recommended charities which otherwise almost certainly wouldn’t have, and money going to more effective charities within particular areas than it would otherwise have.

Since these donors are donating substantial amounts, they expect more bespoke reports than simply being pointed towards GW’s research. Currently, the reports are somewhat labour-intensive to create, though we estimate they are still worth it since each is moving tens of thousands of dollars. In the future, we can pull together the reports from research we did previously, so we expect the cost-effectiveness to increase fairly swiftly. This is something we are monitoring carefully, however.

One expected benefit of this work is simply presenting research in a form that makes it more likely to be acted on. The Founders’ Pledge know their donors well, and seem to think that the reports we produce are the kind that are likely to be acted on.

Another benefit is in understanding the overall health sphere well enough to find particular synergies. People often want to help a particular sphere. Since the charities we recommend are the most cost-effective we could find, and help in very basic ways, there are often synergies that aren’t immediately obvious. For example, as we write about in our mental health report, it seems that the most cost-effective way to reduce epilepsy incidence is actually to prevent malaria, since cerebral malaria causes in the region of a five-fold increase in the chance of epilepsy. A similar example: when we’ve looked into cancer, the most cost-effective treatment for cancer we could find was to prevent Vitamin A deficiency, since that causes stomach cancer. In both these cases, this ignores the other benefits of malaria prevention and Vitamin A supplementation. These kinds of cases aren’t just interesting; they are likely to be persuasive to people who might not otherwise have given to such cost-effective charities. In neither of these cases did the point seem to have been made by others, so it does not appear to be a duplication of effort.

Comment by michelle_hutchinson on Accomplishments Open Thread - March 2016 · 2016-03-07T10:51:21.281Z · score: 3 (3 votes) · EA · GW

Congratulations Scott! What a great month. Keep us up to date with where you decide to study.

Comment by michelle_hutchinson on Charity Science 2.5 Year Internal Review and Plans Going Forward · 2016-02-28T16:16:21.078Z · score: 1 (1 votes) · EA · GW

Thanks, this is really useful and interesting!

The '(an example of how we do this)' doc doesn't have sharing permissions.

Comment by michelle_hutchinson on Giving What We Can's 6 monthly update · 2016-02-28T14:14:09.560Z · score: 3 (3 votes) · EA · GW

Hey Peter, Very glad you found it useful. I'll answer the things I can quickly now, and get back to others later on.

Management levels

Sorry, misspoke - I meant going from me managing everyone on the team, to me managing managers. The structure is: I manage Alison, Jon, Sam and Hauke; they manage Marinella, James and Larissa.

Pledge drive run by a person who hadn't yet taken the pledge

Agree this isn't a decisive consideration, and we were going ahead with Harri as the main lead (who was already a member). Finding Linch was great in terms of him doing amazing work, and had the additional benefit of him not having taken the pledge yet. I do think typically a framing of 'I'm going to do this thing I'm excited about, would you like to join me' is better than 'you should all be doing this thing, why aren't you yet?', but that might well be harder in future - if it is, that definitely won't hinder future pledge drives. It may not be hard in the future though - we do every now and then chat to people who would like to join, but want to find the most impactful time possible to join - them leading a pledge drive is a pretty compelling answer to that.

Other outreach activities

One example we're working on at the moment is working out how we can support our members to be effective champions of GWWC, and to reach out to their friends about joining. Getting this kind of chain going could be a really promising growth model. Our planned outreach activities, with our estimated priority ranking, are here (supporting our members as champions is what we've been referring to as 'activating membership').

Causes and barriers to joining

She's written a doc on Insights from member skypes in which she describes these (sections 2b-d) qualitatively, and we have a spreadsheet of what things people cited on the join form as how they heard about us. (Note that the latter was just made for our internal use, so is rough rather than explanatory.) Still working out what the most effective ways to summarise and present these things are.

Comment by michelle_hutchinson on Independent re-analysis of MFA veg ads RCT data · 2016-02-20T16:22:28.247Z · score: 4 (4 votes) · EA · GW

Thanks very much for doing this Jeff. It's useful to have an independent re-analysis. My credence that these ads work is increased from knowing that the data has been re-analysed by someone who would have expected no effect, and in fact did find one. Even if the effect was reducing recidivism, that still seems pretty useful! Hopefully in the future there will be more studies done that actually get statistically significant results.

Comment by michelle_hutchinson on Giving What We Can's 6 monthly update · 2016-02-10T09:52:34.118Z · score: 6 (6 votes) · EA · GW

Thanks, it's really useful to know you found this valuable! We've done reviews like this for most six-month periods over the last few years. They're linked to from this page. I've tried out a number of different lengths/contents/styles for them - very much a work in progress as to what's most useful both for us and for readers.

Giving What We Can's 6 monthly update

2016-02-09T20:19:08.574Z · score: 15 (15 votes)
Comment by michelle_hutchinson on Effective Altruism London – a request for funding · 2016-02-08T11:32:16.005Z · score: 0 (0 votes) · EA · GW

Probably waiting until the update would be better, because that way the questions and answers are collected into one space. We publish updates on our blog and here. Our main reviews and plans are published on an ongoing basis on this page. Our most recent prospectus had a lot about our plans etc, and we answered more detailed questions about our impact evaluations on this post. We also have this doc in which we tried to estimate the value of individual outreach, but note that it was intended for internal use, so is rather rough around the edges. (The data it uses is from a year or so ago, rather than being the most recent round.)

Finding more effective causes

2016-01-01T22:54:53.607Z · score: 17 (19 votes)

Why do effective altruists support the causes we do?

2015-12-30T17:51:59.470Z · score: 19 (27 votes)

Giving What We Can needs your help this Christmas!

2015-12-07T23:24:53.359Z · score: 11 (17 votes)

Updates from Giving What We Can

2015-11-27T15:04:48.219Z · score: 8 (12 votes)

Giving What We Can needs your support — only 5 days left to close our funding gap

2015-06-25T16:26:31.611Z · score: 7 (9 votes)

Giving What We Can needs your help!

2015-05-26T22:11:33.646Z · score: 5 (11 votes)

Please support Giving What We Can this Spring

2015-04-24T18:22:16.230Z · score: 10 (16 votes)

The role of time in comparing diverse benefits

2015-04-13T20:18:52.049Z · score: 2 (4 votes)

Why I Give

2015-01-25T13:51:48.885Z · score: 14 (14 votes)

Supportive scepticism in practice

2015-01-15T16:35:57.403Z · score: 17 (17 votes)

Should Giving What We Can change its Pledge?

2014-10-22T16:40:35.480Z · score: 8 (18 votes)