Comment by michelle_hutchinson on I find this forum increasingly difficult to navigate · 2019-07-05T13:07:48.242Z · score: 7 (16 votes) · EA · GW

Thanks for bringing up suggestions like filtering by subject matter and rankings - I imagine it's really useful for the forum team to hear what improvements people most want, and how people typically use the forum.

I downvoted though, because I think it could have been expressed in a kinder and more constructive way. It would have been nice if the title had been more like a specific feature request rather than a general attack on someone's hard work, and if the post had included some appreciation for things that have been improved over the years, and the hard work people have put in to make that happen. It might also be good to rephrase particularly loaded statements like 'The latest version has reached the point where I just don't see the point of visiting the forum any more'. Being in effective altruism is kind of challenging, because we're always trying to do everything as well as we can, and optimise. I think that makes it all the more important for us all to try to be as supportive and caring to each other as we can.

My impression, incidentally, is that the search functionality is decidedly better than it was on the old forum: the search results seem to be more related to what I'm looking for, and to be easier to sort through (eg separating 'comments' and 'posts').

Comment by michelle_hutchinson on Announcing plans for a German Effective Altruism Network focused on Community Building · 2019-07-04T10:08:21.442Z · score: 14 (7 votes) · EA · GW

As Aaron says, it seems that the team is doing a great job of forming a thorough and considered plan. Have you thought much about what a minimum viable product (as described eg in the Lean Start-up) would look like, and how to use that to test your assumptions? That might be particularly useful given the complexity of the project.

Comment by michelle_hutchinson on Is EA Growing? EA Growth Metrics for 2018 · 2019-06-03T11:06:00.870Z · score: 19 (10 votes) · EA · GW

Thanks for this, really interesting.

It might be useful to include page views of ea.org in future, given that that's arguably the page that has been most deliberately designed as a landing page for EA.

Comment by michelle_hutchinson on Which scientific discovery was most ahead of its time? · 2019-05-17T13:05:08.636Z · score: 12 (6 votes) · EA · GW

AI Impacts' project on discontinuities in technological progress might have some relevant examples for this: https://aiimpacts.org/cases-of-discontinuous-technological-progress/

Comment by michelle_hutchinson on How does one live/do community as an Effective Altruist? · 2019-05-16T11:57:19.047Z · score: 21 (9 votes) · EA · GW

Thank you for opening this discussion - this feels like a really important topic. I've never been religious, and my parents moved around a lot when I was young. So I didn't have the experience of growing up in a community, but it has always seemed really appealing to me. One thing I've been particularly glad about in being surrounded by EAs is that it's so accepted that living in group houses is a good idea. My parents' generation, and even my non-EA friends, tend to feel that it's weird to live with other adults, particularly when you're married. But I've found living with friends to be immensely supportive and an easier way than usual to forge strong, lasting friendships. At the extreme of this, when I had a late-term stillbirth my housemates cleared all evidence of the baby away before I came home from hospital, made sure that all the friends I wanted to be told knew without me having to talk about it, bought groceries and cooked for me. This kind of community seems immensely valuable, quite apart from it being cheaper to share houses!

If you haven't come across it yet, you might be interested to go to Secular Solstice gatherings (https://www.lesswrong.com/posts/ERboWueanAyqwKbiQ/boston-solstice-2018). They talk about challenges humanity has overcome and ones we still need to face, and sing songs like these (https://www.lesswrong.com/posts/ERboWueanAyqwKbiQ/boston-solstice-2018). Unfortunately they're just once a year though!

Comment by michelle_hutchinson on Is preventing child abuse a plausible Cause X? · 2019-05-08T11:04:57.585Z · score: 6 (3 votes) · EA · GW

As far as I'm aware, no grantmaking happened, for the reason in the paragraph before that line - that no charities doing effective work on it were found.

Comment by michelle_hutchinson on Is preventing child abuse a plausible Cause X? · 2019-05-06T16:57:16.243Z · score: 6 (4 votes) · EA · GW

Only tangentially related, but you might be interested in the report on child marriage that Jacob Williamson did a few years ago from an EA point of view. In addition to discussing the harms caused by child marriage, it discusses various possible interventions for tackling it.

https://drive.google.com/file/d/0B57LagYEumRfLWhHLXZOS1QwOVk/edit

Comment by michelle_hutchinson on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T15:56:20.665Z · score: 11 (5 votes) · EA · GW

This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for people who are already less plugged in to networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up having a lower minimum threshold).

It also might create quite a bit of extra overhead for grant makers having to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate ones.

Comment by michelle_hutchinson on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T14:02:54.368Z · score: 46 (24 votes) · EA · GW

I strongly agree with this. EA Funds seemed to have a tough time finding grant makers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grant makers faced. The current team seems to have impressively addressed the worries people had in terms of donating to smaller and more speculative projects, and providing detailed write-ups on them. I imagine that in-depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and mean those serving on them are likely to step down sooner. That's not to say we shouldn't be discussing the grants - presumably it's useful for the committee to hear other people's views on the grants to get more information about them. But following Ben's suggestions seems crucial to EA Funds continuing to be a useful way of donating into the future. In addition, to try to engage more in collaborative truthseeking rather than adversarial debate, we might try to:

  • Focus on constructive information / suggestions for future grants rather than going into depth on what's wrong with grants already given.
  • Spend at least as much time describing which grants you think are good and why, so that they can be built on, as on things you disagree with.

Comment by michelle_hutchinson on Why animal charities are much more effective than human ones · 2019-04-09T14:00:29.395Z · score: 8 (10 votes) · EA · GW

You might also want to take longer-run effects into account, as is discussed in this article: http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/

Comment by michelle_hutchinson on The career and the community · 2019-03-26T17:23:03.558Z · score: 29 (11 votes) · EA · GW

I actually don’t agree that the majority of roles for our first 6 priority paths are ‘within the EA bubble’: my view is that this is only true of ‘working in EA organisations’ and ‘operations management in EA organisations’. As a couple of examples: ‘AI policy research and implementation’ is, as you indicate, something that could be done at places like FHI or CSET. But it might also mean joining a think tank like the Center for a New American Security, the Belfer Center or RAND; or it could mean joining a government department. EA orgs are pretty clearly the minority in both our older and newer articles on AI policy. ‘Global priorities researcher’ in academia could be done at GPI (where I used to work), but could also be done as an independent academic, whether that simply means writing papers on relevant topics, or joining/building a research group like the Institute for Future Studies (https://www.iffs.se/en/) in Stockholm.

One thing that could be going on here is that the roles people in the EA community hear about within a priority path are skewed towards those at EA orgs. The job board is probably better than what people hear about by word of mouth in the community, but it still suffers from the same skew - which we’d like to work towards reducing.

Comment by michelle_hutchinson on A guide to improving your odds at getting a job in EA · 2019-03-24T12:06:15.605Z · score: 12 (4 votes) · EA · GW

Thanks for all these useful tips Joey. Something I wanted to disagree with you on - the idea that it’s best only to apply to a couple of organisations / jobs. In my experience, most organisations aren’t put off by an applicant also looking into working at a broad range of other places. That makes sense to me for a couple of reasons: there are a huge number of very high impact roles out there and it’s really tough to tell which are the very highest impact; and as an individual it’s hard to know which job you’re going to be best suited for, so it makes sense to apply broadly.

I think the idea that it’s sensible to apply broadly holds both in the sense of applying for many different roles at EA organisations and in the sense of applying for jobs outside of EA organisations. There are ultimately very few jobs at EA organisations, so it’s unlikely anyone should be applying exclusively for those.

Comment by michelle_hutchinson on The career and the community · 2019-03-23T21:54:48.654Z · score: 58 (19 votes) · EA · GW

[I work for 80,000 Hours]

Thanks for your thoughts. I’m afraid I won’t be able to address everything, but I wanted to share a few considerations.

There were a few points here I particularly liked:

  • People should be thinking about the impact they can have in their career over the period of decades, rather than just the next year or so. This seems really useful to highlight, because it’s pretty difficult to keep in mind, particularly early on in your career.
  • We need to avoid a sense in the community that ‘direct work’ means ‘work in EA organisations’: the vast majority of the most impactful roles in the world are outside EA organisations - whether in government, academia, non-profits or companies.
  • The paths to these roles are very often going to be long, and involve building up skills, credibility/credentials and a network.
  • I agree that the phrase ‘skill bottleneck’ might fail to adequately capture resources like credentials and networks, but we think that these forms of career capital are as important as specific skills. However, we think that they are most useful when they are reasonably relevant to a priority path. For example, we think Jason Matheny’s career capital is so valuable largely because his network and credentials were in national security, intelligence, U.S. policy, and emerging technology - areas we think are some of the most relevant to our priority problems. If he had worked at a management consulting firm or in corporate law, he would still have acquired generally impressive networks and prestige, but couldn’t have founded CSET.

There are a few things I disagree with:

You seem to be fairly positive about pretty broad capital building (eg working at McKinsey). While we used to recommend working in consulting early in people’s careers, we’ve updated pretty substantially away from that in favour of taking a more directed approach to your career. The idea is to try to find the specific area you think is most suited to you and where you’ll have the most impact, and then to try out roles directly relevant to that. That’s not to say, of course, that it will be clear what type of role you should pursue, but rather that it seems worth thinking about which types of role seem best suited to you, and then trying out things of that type. Often, people who are able to acquire prestigious generalist jobs (like McKinsey) are able to acquire more useful targeted jobs that would be nearly as good a credential. For example, if you think you might be interested in going into policy, it is probably better to take a job at a top think tank (especially if you can do work on a topic that’s relevant to one of our priority problems, such as national security or emerging technology policy) than to do something like management consulting. The former has nearly as much general prestige, but has much more information value to help you decide whether to pursue policy, and will allow you to build up a network, knowledge (including tacit knowledge), and skills which are more relevant to roles in priority areas that you might aim for later in your career. One heuristic we sometimes use to compare the career capital of two opportunities is to ask in which option you'd expect your career to be more advanced in a priority path 5-10 years down the line. It’s sometimes the case that spending years getting broad career capital and then shifting into a relevant area will progress you faster than acquiring more targeted career capital, but in our experience, narrow career capital wins out more often.

I agree that it’s really important for people to find jobs that truly interest them and which they can excel at. Having said that, I’m not that keen on the advice to start your career decision with what most fascinates you. Personally, I haven’t found it obvious what I’ll find interesting until I try it, which makes the advice not that action-guiding. More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs. That makes me think it’s better to approach career decisions by first thinking through what problems in the world you think most need solving and what the biggest bottlenecks to them being solved are, followed by which of those tasks seem interesting and appealing to you, rather than starting with the question of which jobs seem most interesting and appealing.

I’m a little worried that people will take away the message from your piece that they shouldn’t apply to EA organisations early in their careers, or should turn down a job there if offered one. Like I said - the vast majority of the highest impact roles will be outside EA organisations, and of course there’ll be many people who are better suited to work elsewhere. But it still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.

I think the thing to bear in mind is that it’s important not to apply only for jobs at EA organisations. The total number of jobs advertised at EA organisations at any one time is small, and new graduates should expect to apply to tens of jobs before getting one. Typically, the cost of applying to a valuable direct work job is fairly small relative to the benefit if you learn that you’re already in a position to start making large contributions to a priority area, as long as you’re at the same time applying to jobs that would help you generate career capital.

Unfortunately, as you say, it seems very difficult to convey accurate impressions - whether about how hard it is to get into various areas, or what kind of skill bottlenecks we currently think there are. I think this is in part due to people having such different starting points. I both come across people who had the impression that it was easy to get into AI safety or EA organisations and then struggled to do so, and people who thought it was so competitive there was no point in them even trying who (when strongly encouraged to do so) ended up excelling. We’re hoping that focusing more on the long-form material like the podcast will help to get a more nuanced picture across for people coming from different starting points.

Comment by michelle_hutchinson on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T09:58:06.625Z · score: 12 (5 votes) · EA · GW

I'm not sure if this is the kind of place you were thinking of, but the EA Work Club is linked to on the 80,000 Hours Job Board page (https://80000hours.org/job-board/) - at the bottom, under 'Other places to find vacancies'.

Comment by michelle_hutchinson on Latest Research and Updates for February 2019 · 2019-03-02T22:10:42.806Z · score: 3 (2 votes) · EA · GW

Love the good news roundup!

Comment by michelle_hutchinson on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:35:17.299Z · score: 42 (15 votes) · EA · GW

There actually is a listing like this: the EA Work Club (https://eawork.club/). Although it's aiming at EA community jobs, rather than all possible jobs which could be defined as EA, which is maybe what you were after.

Comment by michelle_hutchinson on EA grants available to individuals (crosspost from LessWrong) · 2019-02-09T18:35:04.039Z · score: 6 (4 votes) · EA · GW

Let's Fund was recently set up to try to get funding for neglected and speculative projects in effective altruism. They seem to particularly focus on research. It could be worth reaching out to them about whether your project is the kind they'd be interested in fundraising for.

Comment by michelle_hutchinson on Simultaneous Shortage and Oversupply · 2019-02-03T20:33:38.415Z · score: 4 (2 votes) · EA · GW

In case you haven't come across it yet, the 80,000 Hours job board has a filter for jobs which can be done remotely, which you might find useful.

Comment by michelle_hutchinson on Requesting community input on the upcoming EA Projects Platform · 2018-12-10T22:38:07.911Z · score: 11 (4 votes) · EA · GW

It's always great to see interesting new projects like this to improve the EA community! There might also be learnings for the project from EA Ventures, which tried to coordinate between speculative EA projects and funders.

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-24T12:25:31.113Z · score: 4 (3 votes) · EA · GW

That's what I meant by 'though it turns out to be correct'. Sorry for being unclear.

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-24T11:31:36.918Z · score: 6 (4 votes) · EA · GW

I didn't downvote the comment, but it did seem a little harsh to me. I can easily imagine being forwarded a draft article, and reading the text the person forwarding wrote, then looking at the draft, without reading the text in the email they were originally sent. (Hence missing text saying the draft was supposed to be confidential.) Assuming that Will read the part saying it was confidential seemed uncharitable to me (though it turns out to be correct). That seemed in surprising contrast to the understanding attitude taken to Julia's mistake.

Comment by michelle_hutchinson on Guide to Successful Community 1-1s · 2018-11-24T11:13:47.640Z · score: 11 (10 votes) · EA · GW

Thanks, this seems like a really useful guide!

One thing I find important in conversations, particularly if I'm doing them back to back, is writing down action points (eg people I want to introduce them to) as I go. People sometimes think it's rude to do this on a phone, so probably having a notebook with you is the best approach.

Something I struggle with is building up enough rapport with a person quickly enough that they will feel comfortable pushing back on things, and in particular bringing up more socially awkward considerations (eg I've heard that effective altruists don't think it's particularly impactful to get a job doing x but I've been working towards that goal for years, and hate the idea of never getting to do it). I've found it pretty useful watching other people who are really good at getting on with people meet new people, and seeing what they do that makes people feel quickly at ease. Because I know this is a weak spot of mine, I try after some of my 1-1 conversations to think through whether there was anything in particular that went well/badly on this dimension (I waited a while for them to respond after saying y, rather than bulldozing on...; when I pushed back on z I accidentally got into 'philosophy debate' mode rather than friendly discussion mode). I also find reading books that get me to think through these kinds of dynamics useful: I've found 'The Charisma Myth' useful enough to have read it a couple of times, and right now I'm reading 'Never Split the Difference'. (A lot of these kinds of books sound like they'll be about getting your own way and persuading people into things they don't want to do, but they actually spend most of their time on how to make sure you properly hear and understand the person you're talking to, and help them feel at ease.)

Comment by michelle_hutchinson on Takeaways from EAF's Hiring Round · 2018-11-21T11:45:15.516Z · score: 18 (6 votes) · EA · GW

This seems to be something that varies a lot by field. In academic jobs (and PhD applications), it's absolutely standard to ask for references in the first round of applications, and to ask for as many as 3. It's a really useful part of the process, and since academics know that, they don't begrudge writing references fairly frequently.

Writing frequent references in academia might be a bit easier than when people are applying for other types of jobs: a supervisor can simply have a letter on file for a past student saying how good they are at research and send that out each time they're asked for a reference. Another thing which might contribute to academia using references more is it being a very competitive field, where large returns are expected from differentiating between the very best candidate and the next best. As an employer, I've found references very helpful. So if we expect EA orgs to have competitive hiring rounds where there are large returns on finding the best candidate, it could be worth our spending more time writing/giving references than is typical.

I find it difficult to gauge how off-putting asking for references early would be for the typical candidate. In my last job application, I gave a number of referees, some of whom were contacted at the time of my trial, and I felt fine about that - but that could be because I'm used to academia, or because my referees were in the EA community and so I knew they would value the org I was applying for making the right hiring decision, rather than experience giving a reference as an undue burden.

I would guess the most important thing in asking for references early is being willing to accept not getting a reference from current employers / colleagues, since if you don't know whether you have a job offer you're often not going to want your current employer to know you're applying for other jobs.

Comment by michelle_hutchinson on Takeaways from EAF's Hiring Round · 2018-11-20T15:45:20.228Z · score: 5 (4 votes) · EA · GW

My impression is that while specifically *IQ* tests in hiring are restricted in the US, many of the standard hiring tests used there (eg Wonderlic https://www.wonderlic.com/) are basically trying to get at GMA (general mental ability). So I wouldn't say the outside view was that testing for GMA was bad (though I don't know what proportion of employers use such tests).

Comment by michelle_hutchinson on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-18T00:08:39.372Z · score: 6 (6 votes) · EA · GW

I agree with this take on the comment as it's literally written. I think there's a chance that Siebe meant 'written in bad faith' as something more like 'written with less attention to detail than it could have been', which seems like a very reasonable conclusion to come to.

(I just wanted to add a possibly more charitable interpretation, since otherwise the description of why the comment is unhelpful might seem a little harsh)

Keeping Absolutes in Mind

2018-10-21T22:40:49.160Z · score: 47 (25 votes)
Comment by michelle_hutchinson on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-12T08:55:22.603Z · score: 1 (2 votes) · EA · GW

I don't know how others answered this question, but personally I didn't answer based on how good I thought the last grants were relative to each other (ie, I wasn't comparing CfAR/MIRI to Founders Pledge) or in expectation of changeover in grant makers. I was thinking about something like whether I preferred funding over the next 5 years to go to organisations which focused on the far future vs community building, knowing that these might or might not converge. I'd expect over that period a bunch of things to come up that we don't yet know about (in the same way that BERI did a year or so ago).

Comment by michelle_hutchinson on EA needs a cause prioritization journal · 2018-09-13T15:56:06.894Z · score: 20 (16 votes) · EA · GW

There do seem to be some strong arguments in favour of having a cause prioritisation journal. I think there are some reasons against too though, which you don't mention:

  • For work people are happy to do in sufficient detail and depth to publish, there are significant downsides to publishing in a new and unknown journal. It will get much less readership and engagement, as well as generally less prestige. That means if this journal is pulling in pieces which could have been published elsewhere, it will be decreasing the engagement the ideas get from other academics who might have had lots of useful comments, and will be decreasing the extent to which people in general know about and take the ideas seriously.

  • For early stage work, getting an article to the point of being publishable in a journal is a large amount of work. Simply from how people understand journal publishing to work, there's a much higher bar for publishing than there is on a blog. So the benefits of having things look more professional are actually quite expensive.

  • The actual work it takes to set up and run a journal, and to do so well enough to make sure that cause prioritisation as a field gains rather than loses credibility from it.

Making Organisations More Welcoming

2018-09-12T21:52:52.530Z · score: 35 (16 votes)
Comment by michelle_hutchinson on Concerns with ACE research · 2018-09-09T11:46:06.040Z · score: 13 (16 votes) · EA · GW

It sounds like AEF is doing a fantastic job of ensuring rigour in its messaging!

"But we have to realize that when it comes to animal suffering, as far as I know ACE is the only game in town. In my opinion, this is a precarious state of affairs, and we should do our best to protect criticism of ACE, even when it does not come with the highest level of politeness."

I think in cases where there is little primary research, it's all the more important to ensure that discourse remains not merely polite, but friendly and kind. Research isn't easy at the best of times, and the animal space has a number of factors making it harder than others like global poverty (eg historic neglect and the difficulty of understanding experiences unlike our own). In cases like this where people are pushing ahead despite difficulty, it is all the more important to make sure that the work is actively appreciated, and at baseline that people do not end up feeling attacked simply for doing it. Criticisms that are framed badly can easily be worse than nothing, by leading those working in this area to think that their work isn't useful and they should leave the area, and by dissuading others from joining the area in the first place.

This makes me all the more grateful to John for being so thoughtful in his feedback - suggesting improvements directly to ACE in the first instance, running a public piece by them before publishing, and for highlighting reasons for being optimistic as well as potential problems.

Good news that matters

2018-08-27T05:38:06.870Z · score: 22 (24 votes)
Comment by michelle_hutchinson on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T22:52:07.677Z · score: 17 (17 votes) · EA · GW

I'm Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI's largest donor, and Nick Beckstead was the program officer who made that grant decision.

I can't speak for other universities, but I agree with his assessment that Oxford's regulations make it much more difficult to use donations to get productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salary in order for them to be able to pay for more child care (since there is a standard pay scale). I therefore believe that the reason he gave for ruling out university-based grantees is the true reason, and one which is justified in at least some cases.

Comment by michelle_hutchinson on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-25T10:44:24.755Z · score: 16 (18 votes) · EA · GW

I don't know what others think about the qualifications needed/desired for this, but as a donor to these EA Funds, some of the reasons I'm enthusiastic to give to Nick's funds are:

  • His full-time day job is working out which organisations will do the most good over the long run (especially of those seeking to grow the EA movement), and how much funding they need.

  • He does that alongside extremely smart, well-informed colleagues with the same aims, giving him lots of opportunities to test and improve his views

  • He has worked formally and informally in this area for coming on for ten years

  • He's consistently shown himself to be smart, well-informed and to have excellent judgement.

I've been very grateful to be able to off-load where/when/how to donate optimally to him, and hope if/when a new fund manager is found, they share at least some of the above qualities.

[Disclaimer: I used to work for CEA]

Comment by michelle_hutchinson on Effective Thesis project review · 2018-06-04T15:39:39.077Z · score: 5 (5 votes) · EA · GW

Thanks for writing a summary of your progress and learnings so far, it's so useful for the EA community to share its findings.

A few comments:

You might consider making the website more targeted. It seems best suited to undergraduate theses, so it would be useful to focus in on that. For example, it might be valuable to increase the focus on learning. During your degree, building career capital is likely to be the most impactful thing you can do. Although things like building connections can be valuable for career capital, learning useful skills and researching deeply into a topic are the expected goals of a thesis, and so what most university courses give you the best opportunity to do. Choosing a topic which gives you the best opportunity for learning could mean, for example, thinking about which people in your department you can learn the most from (whether because they are the best researchers, or because they are likely to be the most conscientious supervisors), and what topic is of interest to them so that they'll be enthusiastic to work with you on it.

People in academia tend to be sticklers wrt writing style, so it could be worth getting someone to copy edit your main pages for typos.

Coming up with a topic to research is often a very personal process that happens when reading around an area. So it could be useful to have a page linking to recommended EA research / reading lists, to give people an idea of where they could start if they want to read around in areas where ideas are likely to be particularly useful. For example you might link to this list of syllabi and reading lists Pablo compiled.

Comment by michelle_hutchinson on Triple counting impact in EA · 2018-05-31T10:35:57.477Z · score: 2 (2 votes) · EA · GW

I agree with you that impact is importantly relative to a particular comparison world, and so you can't straightforwardly sum different people's impacts. But my impression is that Joey's argument is actually that it's important for us to try to work collectively rather than individually. Consider a case of three people:

Anna and Bob each have $600 to donate, and want to donate as effectively as possible. Anna is deciding between donating to TLYCS and AMF, Bob between GWWC and AMF. Casey is currently not planning to donate, but if introduced to EA by TLYCS and convinced of the efficacy of donating by GWWC, would donate $1000 to AMF.

It might be the case that Anna knows that Bob plans to donate to GWWC, and therefore she's choosing between a case of causing $600 of impact or $1000. I take Joey's point not to be that you can't think of Anna's impact as being $1000, but to be that it would be better to concentrate on the collective case than the individual case. Rather than considering what her impact would be holding fixed Bob's actions ($1000 if she donates to TLYCS, $600 if she gives to AMF), Anna should try to coordinate with Bob and think about their collective impact ($1200 if they give to AMF, $1000 if they give to TLYCS/GWWC).
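
To make the two ways of accounting explicit, here's a minimal sketch in Python - my own illustration rather than Joey's, with a hypothetical helper function and the dollar figures taken from the example above:

```python
# Sketch of the Anna/Bob/Casey case. Casey donates $1000 to AMF only if
# Anna funds TLYCS (introducing Casey to EA) and Bob funds GWWC
# (convincing Casey to donate).

def amf_total(anna_choice: str, bob_choice: str) -> int:
    """Total dollars reaching AMF, given each donor's $600 choice."""
    total = 0
    if anna_choice == "AMF":
        total += 600
    if bob_choice == "AMF":
        total += 600
    if anna_choice == "TLYCS" and bob_choice == "GWWC":
        total += 1000  # Casey's donation requires both steps
    return total

# Anna's individual view, holding Bob fixed at GWWC:
print(amf_total("TLYCS", "GWWC"))  # 1000 - TLYCS looks better...
print(amf_total("AMF", "GWWC"))    # 600  - ...than giving directly

# The collective view, choosing both donations together:
print(amf_total("AMF", "AMF"))     # 1200 - beats the 1000 above
```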

Given that, I would add 'increased co-ordination' to the list of things that could help with the problem. Given the highlighted fact that often multiple steps by different organisations are required to achieve particular impact, we should be thinking not just about how to optimise each step individually but also about the process overall.

Comment by michelle_hutchinson on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-25T10:13:54.915Z · score: 0 (0 votes) · EA · GW

If you want to make something to randomise the text suggestions, you might be able to do it pretty quickly and easily with GuidedTrack. Personally, I think I would find it more helpful looking at the whole list than being given a random suggestion from it. If you wanted to give people that option without making it publicly available for free, you could put the list on the private and unsearchable Facebook group EA self help, with a request not to share.

Comment by michelle_hutchinson on How to improve EA Funds · 2018-04-08T22:14:15.874Z · score: 3 (3 votes) · EA · GW

This strikes me as making a false dichotomy between 'trust the grant making because lots of information is made public about its decisions' and 'trust the grant making because you personally know the re-granter (or know someone who knows someone etc)'. I would expect this is instead supposed to work in the way a lot of for-profit funds presumably work: you trust your money to a particular fund manager because they have a strong history of their funds making money. You don't need to know Elie personally (or know about how he works / makes decisions) to know his track record of setting up GiveWell and thereby finding excellent giving opportunities.

Comment by michelle_hutchinson on How much further does your dollar go overseas? · 2018-02-05T14:48:52.995Z · score: 2 (2 votes) · EA · GW

[Note: It is difficult to compare the cost effectiveness of developed country anti-smoking MMCs and developing country anti-smoking MMCs because the systematic review cited above did not uncover any studies based on a developing country anti-smoking MMC. The one developing country study that it found was for a hypothetical anti-smoking MMC. That study, Higashi et al. 2011, estimated that an anti-smoking MMC in Vietnam would result in one DLYG (discount rate = 3%) for every 78,300 VND (about 4 USD). Additionally, the Giving What We Can report that shows tobacco control in developing countries being highly cost effective is based on the cost-effectiveness of tobacco taxes, not the cost-effectiveness of anti-smoking MMCs, and the estimated cost-effectiveness of tobacco taxes is based on the cost to the government, not the cost to the organization lobbying for a tobacco tax.]

This report briefly discusses MMCs as well as tax increases. It mentions that MMCs in developing countries are likely to be much more effective than those in the UK, due to the comparatively far lower awareness of the harms of smoking in developing countries, and the far higher incidence of smoking. I wonder if we could learn more about the potential efficacy of such campaigns by comparing them to campaigns to reduce road traffic injuries? My impression is that in the latter case there has been a bit more study done specifically in developing world contexts.

Comment by michelle_hutchinson on EA #GivingTuesday Fundraiser Matching Retrospective · 2018-01-16T05:24:31.112Z · score: 4 (4 votes) · EA · GW

Thank you, this is a really useful write up of what sounds like a great project.

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-21T17:02:42.226Z · score: 3 (3 votes) · EA · GW

Glad to hear you're finding it useful!

a) Yes, that's the plan

b) We haven't decided on our model yet. Right now, we have a number of full-time academics, a number of research associates who attend seminars and collaborate with the full-time crew, and research visitors coming periodically. Having researchers visit from other institutions seems useful for bringing in new ideas, getting to collaborate more closely than one could online, and having the visitors take back elements of our work to their home institutions. I would guess in future it would make sense to have at least some researchers who visit periodically, as well as people coming just as a one-off. But I couldn't say for sure at the moment.

c) Yes, we are. Behavioural economics is already something we've thought a little about. Our reason for not expanding into more subjects at the moment is the difficulty of building thoroughly interdisciplinary groups within academia. As a small example, GPI is based in the Philosophy Department at Oxford, which isn't ideal for hiring Economists, who would prefer to be based in the Economics department. Given that, and the close tie in the past between EA and philosophy, we see a genuine risk of GPI/EA being thought of as 'philosophy plus' rather than truly multi/interdisciplinary. For that reason, we're starting with just one other discipline, and trying to build strong roots there. At the same time, we're trying to remain cognisant of other disciplines likely to be relevant, and the work that's going on there. (As an example in psychology, Lucius Caviola has been publishing interesting work both on speciesism and on how to develop a better scale for measuring moral traits EAs might be interested in.)

d) The best source of information is our website. I do plan on putting occasional updates on the EA forum, but as our work output will largely be academic papers, we're unlikely to publish them on here.

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-16T21:12:41.346Z · score: 1 (1 votes) · EA · GW

Yes, that's right. For the researcher roles, you would at least need to be in Oxford during term time. For the operations role, it would be important to be there for essentially the whole period.

Comment by michelle_hutchinson on New releases: Global Priorities Institute research agenda and posts we’re hiring for · 2017-12-15T12:15:53.347Z · score: 0 (0 votes) · EA · GW

Thanks for the heads up! I think this is a browser issue with the uni website. It actually works for me on Chrome and Edge, though others have found they don't work on Chrome but do work on Safari. Would you mind trying a different browser and seeing if that works?

New releases: Global Priorities Institute research agenda and posts we’re hiring for

2017-12-14T14:57:22.838Z · score: 17 (17 votes)
Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-12T14:44:28.914Z · score: 2 (2 votes) · EA · GW

Will, you might be interested in these conversation notes between GiveWell and the Tax Justice Network: http://files.givewell.org/files/conversations/Alex_Cobham_07-14-17_(public).pdf (you have to c&p the link)

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T15:24:53.216Z · score: 3 (3 votes) · EA · GW

Thanks Tee.

"I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective."

I agree - my comment was in the context of the false graph; given the true one, the emphasis on poverty seems warranted.

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T09:00:44.955Z · score: 3 (3 votes) · EA · GW

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But what the paragraph I quoted says is 'favoring a cause area outlier' - so 'outlier' is picking out AI amongst causes people think are important. Saying that the Bay favours AI, which is an outlier amongst causes people favour, is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seems to support the latter but not the former.

Comment by michelle_hutchinson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-04T14:40:19.048Z · score: 7 (7 votes) · EA · GW

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.
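
For what it's worth, here's the quick consistency check behind my confusion - a back-of-the-envelope sketch in Python, where the counts are my rough readings of the bar heights rather than reported figures:

```python
# Bar heights read off graph #1, paired with the percentages in the text.
poverty_count, poverty_share = 600, 0.41
prio_count, prio_share = 500, 0.19

# Each count/percentage pair implies a total number of respondents:
print(poverty_count / poverty_share)  # ~1463 implied respondents
print(prio_count / prio_share)        # ~2632 implied respondents

# The implied totals differ by a factor of ~1.8, so the counts and
# percentages can't both be right as I'm reading them.
```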

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that long run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Although you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, and most who picked AI / non-AI FF picked both?

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near top cause), and particularly misleading given the differences made here by choices of aggregation. (Ie you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay'.)

Writing on my own behalf, not my employer's.

Comment by michelle_hutchinson on How should we assess very uncertain and non-testable stuff? · 2017-08-25T09:44:28.241Z · score: 1 (1 votes) · EA · GW

If you haven't come across it yet, you might like to look at Back of the Envelope Guide to Philanthropy, which tries to estimate the value of some really uncertain stuff.

Comment by michelle_hutchinson on The marketing gap and a plea for moral inclusivity · 2017-07-19T13:47:04.193Z · score: 3 (3 votes) · EA · GW

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as ‘EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse’. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life’s work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren’t held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on actions you’re allowed to take to increase welfare. Most people think there are some limits on what we’re allowed to do to others to benefit others. Most philosophers believe there are some deontological principles / agent centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Comment by michelle_hutchinson on The marketing gap and a plea for moral inclusivity · 2017-07-12T11:13:17.002Z · score: 9 (11 votes) · EA · GW

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

Comment by michelle_hutchinson on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-31T10:27:37.884Z · score: 11 (11 votes) · EA · GW

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition that they claim they have (either that they don't really hold that intuition or that they are wrong to do so). Eg: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers. Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments. Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say 'Your intuition that A is ludicrous', they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Comment by michelle_hutchinson on What the EA community can learn from the rise of the neoliberals · 2016-12-08T15:57:23.974Z · score: 4 (4 votes) · EA · GW

That's a horrible story!

Comment by michelle_hutchinson on Why Animals Matter for Effective Altruism · 2016-08-24T10:41:36.252Z · score: 5 (7 votes) · EA · GW

"Recognizing the scale of animal suffering starts with appreciating the sentience of individual animals — something surprisingly difficult to do given society’s bias against them (this bias is sometimes referred to as speciesism). For me, this appreciation has come from getting to know the three animals in my home: Apollo, a six-year-old labrador/border collie mix from an animal shelter in Texas, and Snow and Dualla, two chickens rescued from a battery cage farm in California."

I wonder if we might do ourselves a disservice by making it sound really controversial / surprising that animals are thoroughly sentient? It makes it seem more ok not to believe it, but I think also can come across as patronising / strange to interlocutors. I've in the past had people tell me they're 'pleasantly surprised' that I care about animals, and ask when I began caring about animal suffering. (I have no idea how to answer that - I don't remember a time when I didn't) This feels to me somewhat similar to telling someone who doesn't donate to developing countries that you're surprised they care about extreme poverty, and asking when they started thinking that it was bad for people to be dying of malaria. On the one hand, it feels like a reasonable inference from their behaviour. On the other hand, for almost everyone we're likely to be talking to it will be the case that they do in fact care about the plight of others, and that their reasons for not donating aren't lack of belief in the suffering, or lack of caring about it. I would guess that would be similar for most of the people we talk to about animal suffering: they already know and care about animal suffering, and would be offended to have it implied otherwise. This makes the case easier to make, because it means we're already approximately on the same page, and we can start talking immediately about the scale and tractability of the problem.

Comment by michelle_hutchinson on Making EA groups more welcoming · 2016-07-29T22:05:12.620Z · score: 2 (2 votes) · EA · GW

Thanks Julia, this is an awesome resource! I'm really grateful for these kinds of super specific suggestions.

Why Poverty?

2016-04-24T21:25:53.942Z · score: 11 (11 votes)

Giving What We Can is Cause Neutral

2016-04-22T12:54:14.312Z · score: 10 (12 votes)

Review of Giving What We Can staff retreat

2016-03-21T16:31:02.923Z · score: 8 (8 votes)

Giving What We Can's 6 monthly update

2016-02-09T20:19:08.574Z · score: 15 (15 votes)

Finding more effective causes

2016-01-01T22:54:53.607Z · score: 18 (20 votes)

Why do effective altruists support the causes we do?

2015-12-30T17:51:59.470Z · score: 19 (27 votes)

Giving What We Can needs your help this Christmas!

2015-12-07T23:24:53.359Z · score: 11 (17 votes)

Updates from Giving What We Can

2015-11-27T15:04:48.219Z · score: 8 (12 votes)

Giving What We Can needs your support — only 5 days left to close our funding gap

2015-06-25T16:26:31.611Z · score: 7 (9 votes)

Giving What We Can needs your help!

2015-05-26T22:11:33.646Z · score: 5 (11 votes)

Please support Giving What We Can this Spring

2015-04-24T18:22:16.230Z · score: 10 (16 votes)

The role of time in comparing diverse benefits

2015-04-13T20:18:52.049Z · score: 2 (4 votes)

Why I Give

2015-01-25T13:51:48.885Z · score: 14 (14 votes)

Supportive scepticism in practice

2015-01-15T16:35:57.403Z · score: 17 (17 votes)

Should Giving What We Can change its Pledge?

2014-10-22T16:40:35.480Z · score: 8 (18 votes)