Posts

80k would be happy to see more projects in the careers space 2022-06-21T18:02:58.057Z
Advice on people management from EA Global 2022-04-17T15:26:23.333Z
Managing 'Imposters' 2022-01-29T15:46:08.637Z
80,000 Hours is hiring! 2022-01-20T16:12:16.243Z
EA Infrastructure Fund: May–August 2021 grant recommendations 2021-12-24T10:42:08.969Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
What gives me hope 2021-05-19T10:00:12.660Z
Why I find longtermism hard, and what keeps me motivated 2021-02-22T23:01:47.312Z
80,000 Hours one-on-one team plans, plus projects we’d like to see 2021-02-09T19:57:52.643Z
Possible gaps in the EA community 2021-01-23T18:56:51.373Z
Training Bottlenecks in EA (professional skills) 2021-01-17T19:29:50.197Z
10 Habits I recommend (2020) 2021-01-02T17:15:47.320Z
10 things I bought and recommend (2020) 2020-12-30T14:47:45.353Z
Parenting: Things I wish I could tell my past self 2020-09-12T14:22:01.038Z
Asking for advice 2020-09-05T10:29:12.499Z
I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA 2019-12-04T10:53:43.030Z
Keeping Absolutes in Mind 2018-10-21T22:40:49.160Z
Making Organisations More Welcoming 2018-09-12T21:52:52.530Z
Good news that matters 2018-08-27T05:38:06.870Z
New releases: Global Priorities Institute research agenda and posts we’re hiring for 2017-12-14T14:57:22.838Z
Why Poverty? 2016-04-24T21:25:53.942Z
Giving What We Can is Cause Neutral 2016-04-22T12:54:14.312Z
Review of Giving What We Can staff retreat 2016-03-21T16:31:02.923Z
Giving What We Can's 6 monthly update 2016-02-09T20:19:08.574Z
Finding more effective causes 2016-01-01T22:54:53.607Z
Why do effective altruists support the causes we do? 2015-12-30T17:51:59.470Z
Giving What We Can needs your help this Christmas! 2015-12-07T23:24:53.359Z
Updates from Giving What We Can 2015-11-27T15:04:48.219Z
Giving What We Can needs your support — only 5 days left to close our funding gap 2015-06-25T16:26:31.611Z
Giving What We Can needs your help! 2015-05-26T22:11:33.646Z
Please support Giving What We Can this Spring 2015-04-24T18:22:16.230Z
The role of time in comparing diverse benefits 2015-04-13T20:18:52.049Z
Why I Give 2015-01-25T13:51:48.885Z
Supportive scepticism in practice 2015-01-15T16:35:57.403Z
Should Giving What We Can change its Pledge? 2014-10-22T16:40:35.480Z

Comments

Comment by Michelle_Hutchinson on RyanCarey's Shortform · 2022-07-02T13:16:31.299Z · EA · GW

Thanks for this, and for your work on Felicifia. As someone who's found it crucial to have others around me setting an example for me, I particularly admire the people who basically just figured out for themselves what they should be doing and then started doing it.

Fwiw re THINK: I might be wrong in this recollection, but at the time it felt very clearly like Mark Lee's organisation (though Jacy did help him out). It also was basically only around for a year. The model was 'try to go really broad by contacting tonnes of schools in one go and getting hype going'. It was a cool idea which had precedent, but my impression was the experiment basically didn't pan out. 

Comment by Michelle_Hutchinson on 80k would be happy to see more projects in the careers space · 2022-06-23T17:00:12.044Z · EA · GW

Sorry I wasn't clear: We not only don't object to replication - we're actively enthusiastic about it.  I think a healthy ecosystem has a bunch of different people trying to do the same thing and seeing how they go. 

Since I run the 1on1 team, I'm not well placed to comment on what 80k as a whole plans to do. 

You're right that the majority, but not all, of the people we talk to have some interest in helping others over the long term as well as in the present day. I expect that to continue being mostly true, at least over the coming year.

Comment by Michelle_Hutchinson on Critiques of EA that I want to read · 2022-06-20T12:18:40.375Z · EA · GW

Thanks, I found this list really interesting!

Comment by Michelle_Hutchinson on Digital people could make AI safer · 2022-06-13T10:00:31.330Z · EA · GW

Thanks, I found this really interesting. 

Comment by Michelle_Hutchinson on EA and the current funding situation · 2022-05-14T09:44:17.892Z · EA · GW

[I’m an EAIF grant manager, but I wasn’t involved in this particular grant.]

I’m sorry you’ve been having a frustrating time in your community building work. As you say, rejections sting even in the best of circumstances, particularly when it feels counter to the narrative being portrayed of there being funding available. Working hard to help others is difficult enough without feeling that others are refusing to support you in it.

It seems very difficult to me to accurately represent in advance what kinds of community building EAIF is and isn’t keen to fund, because it depends on a lot of details about the place/person/description of activities planned. Having said that, I’m keen to avoid people getting a false impression of our priorities. I wanted to clarify that we are in fact keen to fund full time community builders outside of existing EA hubs.

It happens that the majority of past requests we’ve had for full time positions in the US have come from Boston/NY/SF/Berkeley. We’ve received a number of applications for full time community builders in non-hub cities in the rest of the world though. For example we’ve funded full time community builders in Italy, the Philippines, Denmark and the Czech Republic. 

I think a reason it looks like we prefer funding people part time is that we fund quite a bit of university community building. Doing that is often most suited to students at that university, who are therefore only able to do community building part time. 

We’ve tried to keep our application form short to make it feel low cost to apply. I’d be keen for people to put in speculative quick draft applications to see whether they might be a good fit for an EAIF grant!

[Edited for clarity]

Comment by Michelle_Hutchinson on Help Me Choose A High Impact Career!!! · 2022-05-06T09:12:47.661Z · EA · GW

Great work writing this up and putting it out there for feedback! I think it's always difficult to give much of a view as an outsider, but it sounds to me like you've been feeling insecure for a while due to lack of savings, and so taking a paying job sounds like a good idea. It seems like you're not actually keen on doing a masters, and it doesn't seem obvious you need one to do what you're aiming for. So deciding against doing one sounds very reasonable to me. Both your options sound good though!

Comment by Michelle_Hutchinson on Notes From a Pledger · 2022-05-01T10:54:18.974Z · EA · GW

Thank you for sharing this! I just love hearing stories of pledgers around the world, and what's motivated them to pledge and to keep giving. I grew up not knowing anyone who donated this much, and assuming I wouldn't either. It's still kind of incredible to me that there are so many people promising to do this for the rest of their lives, and doing so joyfully rather than out of pure obligation. I'm glad you (rightfully) feel proud of doing it. I cannot wait to live in a world where not a single person needs to die of malaria. 

Comment by Michelle_Hutchinson on FTX/CEA - show us your numbers! · 2022-04-20T13:06:07.271Z · EA · GW

Thanks so much for this comment. I find it incredibly hard not to be unwarrantedly risk averse. It feels really tempting to focus on avoiding doing any harm, rather than actually helping people as much as I can. This is such an eloquent articulation of the urgency we face, and why we need to keep pushing ourselves to move faster. 

I think this is going to be useful for me to read periodically in the future - I'm going to bookmark it for myself.

Comment by Michelle_Hutchinson on Go apply for 80K Advising - (Yes, right now) · 2022-03-28T21:09:02.902Z · EA · GW

Thank you for this prompt, Devansh! We really look forward to hearing from people :-)

Comment by Michelle_Hutchinson on Go apply for 80K Advising - (Yes, right now) · 2022-03-28T21:08:27.135Z · EA · GW

That's really nice to hear!

Comment by Michelle_Hutchinson on Managing 'Imposters' · 2022-02-03T13:29:20.024Z · EA · GW

<3

Comment by Michelle_Hutchinson on Managing 'Imposters' · 2022-01-29T17:09:01.497Z · EA · GW

Noted, thanks guys.

Comment by Michelle_Hutchinson on 80,000 Hours is hiring! · 2022-01-20T18:46:46.694Z · EA · GW

Yes, we will be. Thanks for asking!

Comment by Michelle_Hutchinson on Coaches for exploring careers? · 2021-12-19T12:04:20.225Z · EA · GW

I'm not exactly sure of these people's views, but they're all effective altruist coaches, so you might be interested in checking out their websites: 

Tee Barnett
Daniel Kestenholz
Anne Wissemann

80,000 Hours actually doesn't only coach people already focused on a priority career path, though it is more useful for people who have a similar understanding of impact to 80,000 Hours' understanding. We currently talk to around 50% of people who apply for coaching.

Comment by Michelle_Hutchinson on 80,000 Hours wants to talk to more people than ever · 2021-12-18T16:17:02.019Z · EA · GW

I think it probably depends on how much more clarity you think you'll get from thinking solo about it for a bit, and how likely it is you'll find the solo thinking motivating. I think the conversations do tend to be more useful if you have a sense of what you'd most like to get out of them. But thinking through your career is often both difficult and aversive, so chatting to us early in the journey can be most sensible for some people, to get more clarity on how to think through things and what to read in order to make your plan. We're also happy to speak to someone more than once, so you might like to chat to us when you're first starting to think through things and then again when you have more clarity.

Comment by Michelle_Hutchinson on Evidence from two studies of EA careers advice interventions · 2021-09-30T11:02:18.989Z · EA · GW

Thanks for taking the time to do such a rigorous study, and also for writing it up and thinking through the implications for other EAs!

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-08-03T20:05:08.747Z · EA · GW

Thanks for this feedback. I had a go at rewriting our 'why wasn't I accepted' FAQ. It now reads: 

Why wasn’t I accepted?

We sincerely regret that we can’t advise everyone who applies. We read every application individually and are thankful that you took the time to apply. It’s really touching reading about people who have come across 80,000 Hours and are excited about using their careers to help others.

We aim to talk to the people we think we can help most. Our not speaking with you does not mean we think you won’t have a highly impactful career. Whether we can be helpful to you sometimes depends on contingent factors like whether one of our advisers happens to know of a role or introduction right now that might be a good fit for you. We also have far less information about you than you do, so we aren’t even necessarily making the right calls about who we can help most.

You’re very welcome to reapply, particularly if your situation changes. If you’re thinking of doing so, it might be worth reading our key ideas series and trying out our career planning process, which we developed to help people think through their career decisions. You can also get involved in our community to get help from other people trying to do good with their careers.

Comment by Michelle_Hutchinson on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-10T16:25:56.969Z · EA · GW

This is not quite an answer to your question, but I thought you might get a lot out of this podcast - it's at least vivid evidence that you can have a lot of impact despite finding it hard to get out of ugh fields and suffering from depression. 

Comment by Michelle_Hutchinson on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T11:28:23.275Z · EA · GW

I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like 'try to avoid multiple funds investing too much in the same thing'. 

That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech funds, there will be clear overlaps - firms using tech to improve healthcare. If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and care more about whether it seems to be a good example of the thing I invested in. Eg if I invested in a tech fund, presumably I think some things along the lines of 'technological advancements are likely to drive profit' and 'there are low hanging fruit in terms of tech innovations to be applied to market problems'. If some company is doing good tech innovation and making a profit in the healthcare space, I'd be keen for the tech fund to invest in it. I wouldn't be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me.

I'd expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren't being made because some specific fund is missing (which might be a subset of a current fund, like 'non-longtermist EA infrastructure').  

I would guess that EAF isn't a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end) I'd often do better to fund individual companies than broad funds. 

In case it doesn't go without saying, I think it's extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.

Comment by Michelle_Hutchinson on EA Infrastructure Fund: Ask us anything! · 2021-06-07T08:44:45.494Z · EA · GW

+1

Comment by Michelle_Hutchinson on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T15:42:06.094Z · EA · GW

Thanks for finding and pasting Jonas' reply to this concern, MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine the projects aiming at both the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply. 

Comment by Michelle_Hutchinson on EA Infrastructure Fund: Ask us anything! · 2021-06-04T17:03:38.101Z · EA · GW

Here are a few things: 

  • What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (eg become happy to donate to evidence backed global poverty interventions)? I’ve been pretty surprised how much traction ‘EA’ as an overall concept has gotten. Whereas I’ve maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
  • How much and in which cases research is translated into action. I have some hypothesis that it’s often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it’s difficult to bring these together. I think maybe that inclines me towards funding more ‘getting things done’ and less research than I might naturally be tempted to. (Though I’m probably pretty far on the ‘do more research’ side to start with.) It also inclines me to fund things that might seem like good candidates for translating research into action.
  • How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it’s really hard to make it in academia, and there are strong incentives in place there, which don’t point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.

Comment by Michelle_Hutchinson on EA Infrastructure Fund: Ask us anything! · 2021-06-04T15:38:20.874Z · EA · GW

Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically more like the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's. 
Pros:

  • Particularly useful for donors to the fund and potential applicants to get to know the reasoning processes of grant makers when we've just joined and haven't yet made many grants
  • Getting feedback from others on what parts of my reasoning process in making grants seem better and worse seems more likely to be useful than simply feedback on 'this grant was one I would / wouldn't have made' 

Cons:

  • Time writing reports trades against time evaluating grants. The latter seems more important to me at the current margin. That's partly because I'd have liked to have decidedly more time than I had for evaluating grants and perhaps for seeking out people I think would make good grantees.
  • I find it hard to write up grants in great detail in a way that's fully accurate and balanced without giving grantees public negative feedback. I'm hesitant to do much of that, and when I do it, want to do it very sensitively.

I expect to try to include in my write-ups the kinds of considerations which might be found in write-ups of types of opportunity. I don't expect to produce the kind of lengthy write-ups that come to mind when you mention reports.

I would guess that the length of my write ups going forward will depend on various things, including how much impact they seem to be having (eg how much useful feedback I get from them that informs my thinking, and how useful people seem to be finding them in deciding what projects to do / whether to apply to the fund etc).

Comment by Michelle_Hutchinson on EA Infrastructure Fund: Ask us anything! · 2021-06-04T14:42:11.814Z · EA · GW

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts: 

1. Tough to tell. My intuition is 'the same amount as I did' because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. Otoh I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grant making, or maybe would have inclined me to have funded one or two of the grants that I turned down. I also expect to be less time constrained in future because we won't be doing an entire quarter's grants in one round, and because there will be less 'getting up to speed'.

2. Probably each of these is a bottleneck to some degree, and they also interact: 
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding to me, so I expect I would have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer / more specific. 
- There were macrostrategic questions the grant makers disagreed over - for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also such questions that I think didn't affect any of our grants this time but which I expect will in future, such as how to prioritise spreading ideas like 'you can donate extremely cost-effectively to these global health charities' versus more generalised EA principles.  

3. The proportion of good applications was fairly high compared to my expectation (though ofc the fewer applications we reject the faster we can give out grants, so until we're granting to everyone who applies, there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great, well thought through and ready to go as initially proposed, and which the committee agreed on, seemed maybe lower than I might have expected. 

4. I think I noticed some of each of these, and it's a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.
 

5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things - for example they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later on (eg late in university) are more in a position to be thinking independently about what to work on. Therefore my sense is that community building in general is systematically missing out on some of the people who would be best at it because it's a kind of weird, non-standard thing to work on. So I guess I lean towards thinking there are too few people interested in EA infrastructure stuff.

Comment by Michelle_Hutchinson on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T12:55:46.873Z · EA · GW

No set plans yet.

Comment by Michelle_Hutchinson on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T12:20:29.393Z · EA · GW

Thanks for the feedback! 

I basically agree with the conclusion MichaelA and Ben Pace have below. I think EAIF’s scope could do with being a bit more clearly defined, and we’ll be working on that. Otoh, I see the Lohmar and CLTR grants as fitting fairly clearly into the ‘Fund scope’ as pasted by MichaelA below. Currently, grants do get passed from one fund to the other, but that happens mostly when the fund they initially applied to deems them not to fall easily into its scope, rather than when they seem to fall centrally into the scope of the fund they applied to and also another fund. My view is that CLTR, for example, is a good example of increasing the extent to which policy makers are likely to use EA principles when making decisions, which makes it seem like a good example of the kind of thing I think EAIF should be funding. 

I think that there are a number of ways in which someone might disagree: One is that they might think that ‘EA infrastructure’ should be to do with building the EA _community_ specifically, rather than being primarily concerned with people outside the community. Another is that they might want EAIF to only fund organisations which have the same portfolio of cause activities as is representative of the whole EA movement. I think it would be worse to narrow the fund’s scope in either of these ways, though I think your comment highlights that we could do with being clearer about it not being limited in that way. 

Over the long run, I do think the fund should aim to support projects which represent different ways of understanding and framing EA principles, and which promote different EA principles to different extents. I think one way in which this funding round looks less representative than it felt to me is that there was a grant application from an organisation which was mostly fundraising for global development and animal welfare, which didn’t get funded because it received funding from elsewhere while we were deliberating. 

The scope of the EAIF is likely to continue overlapping in some uneasy ways with the other funds. My instinct would be not to be too worried about that, as long as we’re clear about what kinds of things we’re aiming at funding and do fund. But it would be interesting to hear other people’s hunches about the importance of the funds being mutually exclusive in terms of remit.

Comment by Michelle_Hutchinson on EA Infrastructure Fund: Ask us anything! · 2021-06-04T10:55:53.046Z · EA · GW

Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people with different, continuously updating understandings of ways we could be improving the world, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.

Here are a couple of things pushing me to have a low-ish bar for funding: 

  • I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
  • As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out. 
  • I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to find better projects in future looks good to me.  

Here are a couple of things driving up my bar:

  • EAIF gets donations from a broad range of people. It seems important for all the donations to be at least somewhat explicable to the majority of its donors. This makes me hesitant to fund more speculative things than I would be with my money, and to stick more closely to ‘central cases’ of infrastructure building than I otherwise would. This seems particularly challenging for this fund, since its remit is a bit esoteric, and not yet particularly clearly defined. (As evidenced by comments on the most recent grant report, I didn’t fully succeed in this aim this time round.)
  • Something particularly promising which I don’t fund is fairly likely to get funded by others, whereas something harmful I fund can’t be cancelled by others, so I want to be fairly cautious while I’m starting out in grant making.

Comment by Michelle_Hutchinson on Asking for advice · 2021-05-22T13:13:16.225Z · EA · GW

It's so hard to implement... Thank you for letting me know! Really good to know what parts of posts people find useful. 

Comment by Michelle_Hutchinson on What gives me hope · 2021-05-22T10:19:41.084Z · EA · GW

I definitely feel lucky to be at such a fortunate time in history when absolute poverty is so much lower than it has been over the millennia past, and when we have the chance for so much less suffering and more happiness than ever before. I also really appreciate being part of the EA community. This year's isolation has really brought home to me how much it means to me to be surrounded every day by kind people working for the same goal. 

I'm less sure of the specific framing of 'EA as an opportunity', because  so much of EA is about preventing suffering or destruction. Without those being a possibility, we wouldn't be able to work to prevent them. But I wouldn't want to imply that their possibility is a good thing. So I guess I might rather say that, given the state of the world, being part of this community feels like a true gift. 

Comment by Michelle_Hutchinson on "Other comments" that make my day: What people have said when signing up to GWWC · 2021-05-20T13:41:23.171Z · EA · GW

<3

Comment by Michelle_Hutchinson on [Link] 80,000 Hours Nov 2020 annual review · 2021-05-16T13:23:36.176Z · EA · GW

Hi anon, Michelle here. I work for 80k. I think 80k probably shouldn’t have a discussion about the career decisions of a particular staff member on the Forum, but I’m happy to share some thoughts on this general issue. 

First off, I hate that our hiring sometimes makes things difficult for others, when we’re all aiming at the same thing and the stakes are so high. As you point out, this is an especially tricky issue for 80k because people are in the habit of listening to our career advice and because our whole mission is to fill roles at other organisations.

I’ll try to address a few different things your question might be gesturing at.

One issue is the fact that hiring an employee away from a high-impact org is in some ways a step away from our mission. The way I think about this is similar to how I think about 80k hiring somebody who doesn’t work at an EA org but is considering other high impact offers in addition to 80k’s. When a potential hire is considering their impact, they have to compare the value of filling a high-impact role themselves against the value of the roles they could cause to be filled while working at 80k. Someone considering leaving another EA org for 80k is in a similar position, just with the added transition cost from leaving their current role.

Another issue is that when we reach out to people, they might be influenced by the fact that we’re a career advising organisation and people are used to relying on our advice. They may interpret our interest in hiring them as us expressing a well-researched view that their impact would be higher working for us. I'd hate for this to happen and we’ve had many internal discussions about how to reduce its likelihood. Many staff try to flag their own bias and uncertainty about the best option frequently, to avoid comparing the impact of working at 80k to their current or prospective job (especially if the potential hire hasn’t explicitly asked for this), and to be explicit about when we’re recruiting v. giving career advice. 

Lastly, there’s a question about whether EA orgs more generally should not reach out to potential hires who are already working at high impact organisations.

My views on this have a lot to do with my own experience working at EA orgs.

While most of 80k’s employees did not previously work at other EA organisations, I’m one of the exceptions. Before 80k I spent ~6 years working for Giving What We Can and the Global Priorities Institute. While I was working at GPI, 80k asked me whether I knew any good candidates for an advising position, and I asked if I could apply. The job has been an incredible fit for me. While I don’t regret the previous jobs I’ve had, I think it’s a shame nobody ever suggested that I become an advisor at 80k earlier on. I think I can do more good in this role than in my last one because it’s so much better suited to me. 

My experience makes me feel fairly strongly that people at EA orgs should periodically consider different options. If orgs who’d like to hire someone never reach out, that person will never learn what those options are. It’s in that spirit that I offered Habiba a role.  

Greater transparency in the community about the job opportunities people have will allow people to find jobs which suit them better and they enjoy more. Deciding to leave your role at an EA org because you think a different role would have more impact or because you’d be happier elsewhere can be a really difficult decision. But I think it should be the employee’s decision and we shouldn’t have a norm that prevents them from finding out they even had the option. I do think that organisational stability is really important, but I think we should probably trust staff at EA orgs to incorporate this into their decision making.

Comment by Michelle_Hutchinson on Would anyone be able to help me decide between Economics PhD offers? · 2021-05-14T18:16:21.020Z · EA · GW

Reporting back like this seems really useful for others considering whether to do this. Thanks!

Comment by Michelle_Hutchinson on Best places to donate? · 2021-05-06T19:29:50.026Z · EA · GW

These sound like great places to donate to! Thank you for thinking through so carefully where to donate in order to help others most. Figuring out the most effective place to donate always feels really hard to me. 

Without more details about your situation it's a bit hard to give much comment on whether there are better organisations for you to donate to, but here are a few things you could think about: 

  • Often the overhead on processing a small donation can be fairly high, so it could be worth donating to fewer organisations so that each of your donations is larger. 
  • If you're interested in donating to more speculative things, where the expected value might be higher but there's some chance they have less impact, you could consider donating through EA Funds (I linked to the global development one, but there's also an animal welfare fund).
  • You might consider whether you care as much about the lives of those to come as about people already alive, and therefore whether you think it would be effective to donate towards helping those in the long-run future, for example through the Long-Term Future Fund.

Comment by Michelle_Hutchinson on What are your main reservations about identifying as an effective altruist? · 2021-03-31T19:01:37.847Z · EA · GW

I don't feel comfortable saying 'I'm an effective altruist', though if someone asks me if I am one the most truthful answer is clearly 'yes'. I think I'm not that keen on labels in general, though there are some I'm comfortable with, including 'feminist' and 'utilitarian'.  I was one of the participants Jonas mentions. 

This is basically an instinct rather than a thought-through opinion, but at a guess, the biggest reasons for my hesitation are: 
- It feels self-aggrandising to call myself 'an effective altruist'. It feels hard to really know that I'm altruistic (as opposed to doing work I find fulfilling for example), and even harder to know that insofar as I'm altruistic, I'm being effective about it. On the other hand, I understand utilitarian to mean something like 'I think better outcomes are the ones with more wellbeing in, and those are the ones I'm aiming at'. That feels like something I'm happy to claim. 
- Identifying some people as 'effective altruists' feels like it's dividing people unnecessarily. I think most people want to help others, and most people would like to do so in a way that's effective rather than ineffective. Obviously, I really like the idea of there being tools and mechanisms (like this forum) for helping people do that, and also a community of people trying particularly hard to do this and do so in particular ways. And having some label for those does seem useful, so it does seem hard not to do this 'identifying'.  

Comment by Michelle_Hutchinson on EA Funds is more flexible than you might think · 2021-03-05T14:09:12.392Z · EA · GW

Relevant for people trying to get funding for a project: 

People could consider writing up their project as a blog post on the forum to see if they get any bites for funding. In general, I'd encourage people looking for funding to do more writing up of one-page summaries of what they would like to get funded. These would include things like: 

  • Problem the project addresses
  • Why the solution the project proposes is the right one for the problem
  • Team and why they're well suited to work on this

I'd guess if you write a post like this there'd be quite a few people happy to read that and answer if it sounds like something they'd be interested to fund / if they know anyone to pass it on to / what more they'd need to know to fund or pass it on. Whereas my perception is that currently people feeling out a potential project and whether it could get funded are much more likely to approach people to ask to get on a call, which is far more time consuming and doesn't allow someone to quickly answer 'this isn't for me, but this other person might be interested'. 

Comment by Michelle_Hutchinson on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-24T09:52:04.614Z · EA · GW

I love the specificity of your 'How to pick a name' section. I imagine that will be really useful in helping people follow through on finding a good name.

Comment by Michelle_Hutchinson on Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community) · 2021-02-24T09:49:39.875Z · EA · GW

This sounds like a great idea! 

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-14T16:52:28.627Z · EA · GW

Thanks for this feedback! It's really useful to know that this would make it easier to put yourself out there. We're in the process of changing the application form to connect better with our career planning process, to hopefully make filling it out a commitment mechanism for getting started on making a career plan (since doing so is often aversive). As part of that, we aim to send people a google doc of the relevant answers in a readily shareable format and encourage people to send it to friends and others whose judgement they trust.

I also find it pretty scary to email people out of the blue, even if I know them, particularly to ask them for something. But my hope is that if someone already has a doc they want comments on, and it's been explicitly suggested they send that to friends, it will make it a bit easier to ask for this kind of help. Increasing the extent to which people do that seems good to me, since my impression is that although people find it hard to reach out, most people would actually be happy to give their friends comments on something like this!

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-12T18:58:56.309Z · EA · GW

> It is extremely upsetting for people to apply and get turned down, especially if they found 80k materials at some emotional time (realising they are not satisfied with their current job or studies). It is very hard to not interpret this as "you are not good enough".

I am so sad that we are causing this. It is really tough to make yourself vulnerable to strangers and reach out for help, only to have your request rebuffed. That’s particularly hard when it feels like a judgement on someone’s worth, and more particularly on their ability to help others. And I think there are additional reasons for these rejections being particularly tough:

  • If you’re early on in your career (as most of our readers are) and haven’t yet experienced many rejections, they will hit harder than if you’re more used to them
  • Effective altruism is often experienced as an identity, above and beyond its ideas and the community. This makes a rejection feel particularly sensitive
  • Whenever you’re being judged, it’s hard to keep in mind how little information the person has about you. Our application is far shorter and more informal than, say, university applications. We therefore often have pretty little information about people and so are correspondingly more likely to make the wrong call. But since the person filling in the application knows all about themselves, it’s hard for them not to take it as an indictment of them overall.

I do want to highlight that our not talking to someone isn’t a sign we don’t think they will have an (extremely) impactful career; rather it is simply a sign that we don't think we’ll be as helpful to them as we could be to some other people. So while I deeply empathise with the feelings I describe above and I expect I would feel the same way in a similar situation, I don’t think people are actually right to feel like they “are not good enough”.

I realise it’s probably no consolation, but, on a personal note, needing to turn down people who are asking for my help is unquestionably the worst part of my job. We spent a significant part of last year trying to find an alternative model we believe would be as impactful as our current process but wouldn’t involve soliciting and then rejecting so many applications. Unfortunately, we didn’t find one. I think it’s my responsibility to implement the model we think is best, but it’s hard to feel like I’m doing the right thing when I know I’m disappointing so many people. I often only get through reviewing applications by reminding myself of our mission and trying to bring to mind the huge numbers of people in the future who may never get to exist and are entirely voiceless, and for whose sake it is that I have to refuse to help people in front of me today that I care about.

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-11T15:14:55.267Z · EA · GW

> By focusing on people "for whom you’ll have useful things to say", you talk to people who do not need additional resources (like guidance or introductions) for increasing their impact. The counterfactual impact is low. For example, testimonials on the website include PhD Student in Machine Learning at Cambridge and the President of Harvard Law School Effective Altruism.

I don’t quite agree here. I was counting ‘additional resources’ like guidance and introductions as ‘things to say’. So focusing on people for whom we have useful things to say should increase rather than decrease the extent to which we talk to people who need these resources to increase their impact.

I agree we’re not always good at figuring out which people could most benefit from our providing resources / introductions. We try to keep calibrating on this from our conversations. That’s clearly easier in the case of noticing people we talk to for whom we couldn’t be that useful than the opposite. To counter that asymmetry, we try to do experiments with tweaking which people we speak to in order to get a sense of how useful we can be to different groups.

With respect to your concrete examples:

The descriptions we’ve given of people on that page are actually from where they’re at a year or two after we speak to them. That’s because it takes a while for us to figure out if the conversation was actually useful to them. For example, I think Cullen wasn’t President of HL EA when we spoke to them.

That aside, on the question of whether we should generally speak to people with these types of profiles:

Being a PhD student in Machine Learning doesn’t seem like an indication of how much someone knows about / has interacted with the effective altruism community. So it doesn’t seem to me like it should count against us talking to them. (Though of course the person might in fact already be well connected to the EA community and not stand to benefit much from talking to us.)

It seems like a hard decision to me whether someone running an EA student group should count in favour of or against our speaking to them. On the one hand, they might well be steeped enough in effective altruism they won’t benefit that much from us recommending specific resources to them. They’re also in a better position to reach out to other EAs to ask for their advice than people new to the community would be. On the other hand, it’s a strong signal that they want to spend their energies improving the world as much as possible, and so our research will definitely be applicable for them. It’s also not a foregone conclusion that someone running a student group has had much opportunity to sound board their career with others who feel equally strongly about helping the world, let alone those with similar values but more experience. So I could imagine us being really useful for EA group leaders, despite the caveats above.

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-11T13:08:34.120Z · EA · GW

> A month-long period of reviewing the application is prohibitive and disappointing.

I agree this is too long, and I’m sad that it was actually longer than this at times. Right now I’m mostly managing to review them within a week, and almost always within 2 weeks. I wouldn’t want to promise to always be able to do this, but it’s much easier now we have a team of people working on advising.

> I have an impression that 80k accepted a long time ago that that wait time will just have to be pretty long.

I'm actually really keen to avoid us having long wait times. Career decisions are often pretty time sensitive due to application and decision deadlines. Thinking about your overall career also seems pretty aversive to me, so I think it's important to capitalise on people's enthusiasm and energy for doing so when they occur. Right now we're aiming to have slots available in the next couple of weeks after we've accepted an application, though it might take a few weeks before there are slots that work for a person, particularly if they're in a very different time zone than us.

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-11T12:58:35.793Z · EA · GW

Thanks for sharing your view. It’s useful for us to get an overall sense of whether others think our work is useful in order to sense check our views and continue figuring out whether this is the right thing for us to focus our time on. It's also important to hear detail about what the problems with it are so that we can try to address them. I’ll respond to your points in separate comments so that they’re easier to parse and engage with.

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-11T09:44:25.220Z · EA · GW

I'm afraid I don't really know anything about Discord (me and tech are not the best of friends...), but from your description it sounds good! I think there is some EA activity on Discord, so maybe you could build off that. I don't know anything about the form it takes or how to find it though, unfortunately - but I'm guessing others on this forum do.

Comment by Michelle_Hutchinson on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-10T11:15:00.363Z · EA · GW

This sounds great to me! I'd be tempted to try out things in existing infrastructure first, like trying it out in the careers discussion Facebook group, or the open thread you mentioned.

Other options that come to mind:

  • Look at the EA London community directory for people who would likely have relevant comments on your plan and reach out to them. This seems like it might be more likely to get a reply than a general call, and the person might have more relevant comments than a random person would. But they would likely have less time because they're not selected for being keen to look over a career plan.
  • Finding an accountability partner, eg through the EA life coaching exchange Facebook group, and looking through each other's plans.
  • Talking to other EAs at your local meet up about looking through your plan, or, if there isn't a group in your area, joining EA Anywhere.

I've been surprised how much people's preferences on how to give comments on career plans differ. For example, I find it takes me ages to read through a plan, so I end up putting it off for ages. Whereas I really like talking to people, so I'm much happier to chat to someone for half an hour. By contrast, a friend of mine finds answering questions on the spot really stressful, so they far prefer reading things over. So it seems worth giving people an option about whether to read through something (and if so how much) or whether to chat.

In the longer run, I think it would be cool to have a facebook group or slack for EA job seekers to keep each other motivated and accountable, because it's so hard to apply for jobs and deal with the uncertainty. That might also be a good place for people sharing and commenting on career plans.

Comment by Michelle_Hutchinson on Are there robustly good and disputable leadership practices? · 2021-01-27T09:56:07.041Z · EA · GW

> only principles which are both robustly good and disputable seem worth teaching

This sounds false to me: You might think different kinds of principles work better or worse for different people's styles, and lots of principles are non-obvious. In that case, it seems worth someone learning about a tonne of different principles and testing them out to see if they help or hinder their personal style of management.

Comment by Michelle_Hutchinson on My Career Decision-Making Process · 2021-01-23T10:30:02.069Z · EA · GW

Thank you very much for this post! As you say, it's great to have examples of how people think through their careers, what options they chose and why. It's useful for others to learn from, and also helps people feel less alone in making these hard decisions and going through the frustration of applications. 

I'd be particularly interested in hearing more about why you don't see cybersecurity and formal verification as promising: in particular whether your view is that EAs should be aiming to build up expertise in these, or whether you think they are useful skills for a number of EAs to have, it's just that their use will come in the future (or in a country other than Israel).

Comment by Michelle_Hutchinson on Training Bottlenecks in EA (professional skills) · 2021-01-19T11:15:55.453Z · EA · GW

Thanks, this is all really useful to hear! It makes me think that it's somewhat likely I've just generally not found the right courses / types of training.

I wonder if one thing that's going on is that I'm making the perfect the enemy of the good. The courses I've done, like the global health short course at Imperial, felt interesting and fun to me, but not very efficient: I learned a bunch of things that I wouldn't use alongside what I would, and the learning per unit time could have been higher. But on the other hand, it's pretty likely that although I could have learned the most useful parts in a shorter time, I wouldn't have, and so it was worth going. 

I may also be biased by enjoying courses, and therefore feeling like they must be a selfish waste of time, rather than what I should do. Or perhaps by the process of finding courses seeming boring, and so not bothering.

> Perhaps low staff retention rates make some EA orgs reluctant to invest into the development of their staff because they worry they won't internalize the benefits.

This seems really sad if true, given that you would hope that in EA more than in the commercial world, skilling up staff to contribute elsewhere is still treated as valuable. 

I'd love to hear any advice from how that charity decided which courses would be best for people to do! Also whether there are any specific ones you recommend (if any are applicable in the UK). 

Comment by Michelle_Hutchinson on Training Bottlenecks in EA (professional skills) · 2021-01-19T11:02:46.413Z · EA · GW

Thanks!

Comment by Michelle_Hutchinson on Training Bottlenecks in EA (professional skills) · 2021-01-19T11:00:49.574Z · EA · GW

My learning goals for the year are somewhat intertwined, where one is 'forming more views' and another is 'developing better models of the world'. The things I'm doing are each somewhat focused on both, partly because I only want to form sensible / well informed views, and partly because I think I'll only feel comfortable forming views if I am in some sense conscious of knowing about a topic. The thing most focused on the 'forming views' side is writing - where that doesn't need to be shared with anyone. But a few other things I'm doing: 

Pay more attention to what rhythms/habits I can get into that make broad learning easy. For example: 

  • I love quizzing people about their job / something they know about. So I'm aiming to talk to one person a week who's in an area I want to know more about. 
  • I go for a walk every day, and my plan is that every day I'll start it by listening to an article on Pocket (after that I can just listen to music if I want, but often by then I'm into listening to Pocket and continue). This is useful for me because I far prefer listening to things than reading.
  • I organised my bookmarks into better folders of things to read / watch with priorities, which means that so far (cross fingers I can continue!) I've actually been keeping track of things I want to read later rather than leaving the tabs open and hoping I get back to them later.

I also want to have a better sense of knowing that I know about an area, and a ready picture of what the landscape in that area looks like. I think the main thing to do there is to deliberately learn and memorise facts rather than simply reading around areas (where I know that I read some book but am not sure how much I'd recall about the area unless asked specific questions). That involves reading / watching overviews of an area, and then putting into Anki the key things I want to remember. (I think this is the article I found most helpful on using Anki.)

My hope is that the combination of the above, and then sitting down to physically write down my overall take on some issue, will help. The final step then would be getting other people's views on my takes, if I get to the point of being happy to share with a colleague or others. 

I'd love to hear other thoughts on how I could improve at this!

Comment by Michelle_Hutchinson on Training Bottlenecks in EA (professional skills) · 2021-01-19T10:39:35.757Z · EA · GW

> Just setting up weekly meetings with someone else who's at roughly the same level of seniority and who also wants more "management"/"mentorship"

I really like this idea. I had a set-up like this when I had a hands-off manager, with a friend who didn't have a manager. I found it really helpful. For others who are keen on this but don't have a particular friend they'd like to do it with, there's a Facebook group for finding such accountability partners.