I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA

post by Michelle_Hutchinson · 2019-12-04T10:53:43.030Z · score: 88 (33 votes) · EA · GW · 55 comments

Contents

  What I work on: 
  My background: 

I found Will’s [EA · GW] and Buck’s [EA · GW] AMAs really interesting. I’m hoping others follow suit, so I thought I’d do one too.

What I work on:

I’m head of advising (what we used to call ‘coaching’) for 80,000 Hours. That means I chat to people who are in the process of making impact-focused career decisions and help them with those decisions. I also hire people to the team and manage them - currently we have one other adviser, with another joining us next year. Alongside my usual calls, I answer career-related questions in other formats, for example on the 80,000 Hours podcast (the episode will come out next year).

My background:

I joined 80,000 Hours from the Global Priorities Institute, which I set up with Hilary Greaves. Before that I ran Giving What We Can and did the operational set-up of the Centre for Effective Altruism. I have a philosophy PhD on prioritising in global health. I wrote about how I initially got involved with effective altruism here [EA · GW].

I’ll be answering in a personal capacity, so I won’t comment much on 80,000 Hours’ overall strategy except as it relates to the advising team. I’m very happy to answer questions related to career decisions, and to work I’ve done in the past.

Right now I’m on maternity leave with my first baby, so how fast I respond will depend on how he behaves himself.

55 comments

Comments sorted by top scores.

comment by Pablo_Stafforini · 2019-12-04T13:00:27.385Z · score: 43 (18 votes) · EA(p) · GW(p)

You have been part of the effective altruism movement since its inception. What are some interesting or important ways in which you think EA has changed over the years?

comment by Michelle_Hutchinson · 2019-12-04T14:50:42.980Z · score: 43 (20 votes) · EA(p) · GW(p)

I think we've gotten a bunch more ambitious over the years. It feels like in the early days we thought we'd only get traction encouraging people to take really specific, concrete actions: for example, donating to demonstrably more effective global development interventions. Whereas it turned out people found the broader ideas of effective altruism appealing. EA's research agenda also seems to have gotten more ambitious - rather than only trying to figure out which current charities have the most effect, EAs are now trying to figure out what we can do that will improve the long-run future as much as possible. And again it feels as if the research there has shifted in the more difficult and general direction: making sure that transformative AI is developed safely is an incredibly difficult challenge, but it feels a bit more concrete and contained than preventing great power war or building the best long-term institutions (though this difference might well be simply how I think of these problems). In the early days these latter problems were discussed, but it didn't feel as if we had much constructive to say about them, beyond having done a bit of research into things like how to become a politician.


comment by Michelle_Hutchinson · 2019-12-04T14:54:48.694Z · score: 28 (14 votes) · EA(p) · GW(p)

We seem to have gotten better at engaging with experts and other communities. In the early days, it felt as if a large part of the EA narrative was 'look at all these things the rest of the world is getting wrong'. That might have been partly necessary for carving out a niche, and was usually picking up on something true. But it wasn't a great way of engaging with others. Whereas now it seems like we do a better job of finding out what others are doing really well that we want to learn about and build on (e.g. with things like speakers at EA Global and the interviews on the 80,000 Hours podcast).

comment by Michelle_Hutchinson · 2019-12-04T14:56:53.591Z · score: 23 (12 votes) · EA(p) · GW(p)

I worry that there's a bit more antagonism and unfriendliness in the movement now. I think this is mostly just due to it being bigger: when there are few enough of you, you all know each other somewhat and so are likely to give each other the benefit of the doubt. Whereas when lots of people only know each other online, it feels easier to assume the worst of each other. Plus engaging online rather than in person tends to be less friendly in general. I'm not sure if this is a real effect though.

comment by Peter_Hurford · 2019-12-04T13:37:33.291Z · score: 24 (12 votes) · EA(p) · GW(p)

How do you decide who to advise?

comment by Michelle_Hutchinson · 2019-12-04T14:11:55.932Z · score: 24 (12 votes) · EA(p) · GW(p)

A combination of the people most likely to be able to fill the skill gaps we currently regard as most crucial (so, people who have specific experience or talents - though this may be pretty general) and people I think I can help most. The latter tends to mean people who aren't in-person involved in the effective altruism community, since those people are likely to have already heard a lot of the advice I might give, and to have other people they can usefully chat to about their career decisions.

comment by Peter_Hurford · 2019-12-04T13:35:35.925Z · score: 20 (14 votes) · EA(p) · GW(p)

What's your baby like?! :D

comment by Michelle_Hutchinson · 2019-12-05T11:10:29.917Z · score: 37 (25 votes) · EA(p) · GW(p)

He's super cute, pretty chill, and growing crazily fast - he's gone from 8 pounds to 10 pounds in the 3 and a half weeks he's been alive! And he's already enjoying getting to know the 80,000 Hours team. Photographic evidence of my claims.

comment by Aaron Gertler (aarongertler) · 2019-12-04T13:03:58.972Z · score: 18 (13 votes) · EA(p) · GW(p)

Meta-note from a moderator: I will generally make AMA posts sticky for a few days, since they are unusually valuable for people to see soon after they are posted (vs. later on, after the author is no longer actively responding). This may depend on the level of early activity around the post, however.

comment by Milan_Griffes · 2019-12-05T17:47:02.095Z · score: 16 (5 votes) · EA(p) · GW(p)

Does 80k do longterm follow-up with folks who've attributed an impact-adjusted significant plan change (IASPC) to 80k advice?

I'm imagining following up 12 months later (and also 24 months later, 36 months later, if ambitious), to see:

  • how things are going after the change
  • if they still think the change was a good idea
  • if they still attribute the change to the same factors
  • etc.

comment by Michelle_Hutchinson · 2019-12-06T14:16:58.033Z · score: 7 (4 votes) · EA(p) · GW(p)

Again, I'll give a pretty brief answer since this is a question about broader 80,000 Hours strategy rather than advising specifically.

Around the end of the year we do an annual impact review. At that point we follow up with people we think have made a significant plan change, and people we knew about from the past who had made large plan changes, to see how those panned out. For cases where it seemed we made a particularly large impact we've continued following up for years, in order to update how much impact the plan change had. It's not viable for us to do this with everyone who reports any kind of change (our impact survey gets more than 1,000 answers each year), so we just do it with the cases where we seemed to have had the most counterfactual impact on the person's career (as opposed to, say, the people who just say they read something on our website and it made them slightly more likely to follow a different path).

comment by Milan_Griffes · 2019-12-06T15:49:45.893Z · score: 4 (2 votes) · EA(p) · GW(p)

Thanks! Makes sense that 80k would only do this for "rated-10" and "rated-100" plan changes.


> For cases where it seemed we made a particularly large impact we've continued following up for years, in order to update how much impact the plan change had.

Is data about these longer term follow-ups publicly available somewhere? Didn't see it in my quick read of the 2018 review.

comment by Michelle_Hutchinson · 2019-12-06T17:20:27.963Z · score: 5 (3 votes) · EA(p) · GW(p)

I would expect not, since it would be hard to give much information which isn't identifiable to individuals. The longer term follow up is factored into our overall impact numbers though, so in that sense it is.

comment by Michelle_Hutchinson · 2019-12-06T17:30:25.619Z · score: 6 (3 votes) · EA(p) · GW(p)

You might be interested in this section though, which says how many plan changes were rated 10 in previous years, but have subsequently been downgraded.

comment by Milan_Griffes · 2019-12-07T00:05:47.008Z · score: 2 (1 votes) · EA(p) · GW(p)

Thanks!

Those are plan changes that have been downgraded after 80k learned more about the situation?

comment by Peter_Hurford · 2019-12-04T13:40:22.034Z · score: 15 (8 votes) · EA(p) · GW(p)

Are you happy with where EA as a movement has ended up? If you could go back and nudge its course, what would you change?

comment by Michelle_Hutchinson · 2019-12-06T15:16:43.179Z · score: 34 (10 votes) · EA(p) · GW(p)

Overall, yes - I think it's truly incredible. I still have trouble believing how it went from an idea between students in a college common room to a global movement with thousands of people acting on it in countries around the world.

How I'd nudge it feels like a really difficult question, because for any change I'd make it's hard to know how it would actually end up cashing out. One simple thing I'd say though is that it would have been good if different parts of the movement had coordinated more early on. Nowadays it feels like people travel between different hubs pretty often, and EAG and the leaders forum bring people together at least annually. In the early days it didn't seem like people travelled as much, partly because everyone was trying hard to live frugally. I think that was likely a mistake, because it meant much less communication and coordination between people in the US and UK.

Things I'm less sure about are ones around taking into account what has actually happened sooner. For example, I mentioned above that we've become more ambitious - the broader ideas of effective altruism were more appealing to people than we thought. If we had known that earlier, I think we could have focused more on discussing the broader ideas rather than starting with narrower ones. Another way in which we could have been more ambitious is discussing longtermism earlier on. I think longtermism is a hugely important part of effective altruism. People who haven't been born yet are in an even worse position to claim our attention than those on the other side of the world and non-human animals, and there are so many more of them than of those two groups. Our initial assumption that we wouldn't be able to get others to care about people in the future seems to have been proven wrong by the way the movement is going, and it would have been great if we had realised that sooner.

Also in the spirit of being more ambitious, it might have been good if there had been more concentration on thoroughly and carefully building out the ideas of effective altruism before doing as much mass outreach as we did. Even around 2010, Giving What We Can and 80,000 Hours got quite a bit of press attention. I think in a way, our pushing for growing quickly was down to some lack of ambition - it was a great way to grow to a community of 1,000 or so, but if we had realised how big the movement could become, maybe we'd have grown more slowly and carefully. It's really difficult to know though - that media attention got some great people involved who otherwise wouldn't have known about it, so maybe in a world with less media attention at an earlier point it wouldn't have been possible to grow this much.

A final thing I might have tried to push is consistency. By virtue of trying to do the most effective thing, as a community we're always reassessing what we're doing, thinking of better things to try, setting up new organisations etc. It seems like there's huge value in having a specific mission and just spending years getting great at carrying that out (as evidenced by GiveWell and OpenPhil's success).

comment by Peter_Hurford · 2019-12-04T13:36:56.333Z · score: 15 (8 votes) · EA(p) · GW(p)

What has been your biggest success? What has been your biggest mistake?

comment by Michelle_Hutchinson · 2019-12-05T12:14:22.091Z · score: 57 (24 votes) · EA(p) · GW(p)

I think setting up GPI with Hilary is probably my biggest success. I think having a well-regarded academic institute doing global priorities research and encouraging other academics to do the same is a really important step towards effective altruism getting more in-depth answers to questions of how to do the most good, and towards these ideas gaining traction in spheres of influence like government. In a sense we were starting from scratch in setting it up, and needed to get buy-in from different parts of Oxford University (involving going through around 7 committees), employ great researchers, fundraise, and develop a strategy and research agenda. It felt as if it took a while at the time, but looking back, going from idea to fully fledged Oxford institute in less than two years feels pretty rewarding.

In terms of mistakes - I'm not sure this is the biggest I've made, but I think it's a significant one that I've made more than once: insufficiently taking cultural fit and trust into account when building teams. I think making well-functioning organisations of people who work smoothly together and trust each other is decidedly harder than I would have anticipated, and is crucial for people working effectively.

One example of this was when Giving What We Can, Effective Altruism Outreach and the Global Priorities Project merged into a single CEA team. I was very much in favour of that merger, because it seemed so inefficient to have multiple teams supporting different local groups, multiple teams doing overlapping research etc. I thought it would be much better to have a unified strategy and plan. At the time, I also thought it would be a good idea if 80,000 Hours merged with those orgs. Looking back, I don't think I anticipated nearly strongly enough how much the different organisations had individual cultures which meant their teams worked well within themselves, and which meant that the amalgamation didn't have a cohesive culture and vision for people to get behind. I now think it's extremely important to have a strong organisational strategy and team culture which is constant over time, and to make sure that new people are thoroughly on board with that before hiring them. That's in no way to say that you should only hire people who agree with every aspect of the strategy, or have similar approaches to problems. But it's crucial for a team to deeply trust each other and be executing on a shared vision.

comment by Risto_Uuk · 2019-12-04T13:00:40.399Z · score: 15 (8 votes) · EA(p) · GW(p)

Why did you decide to move from Global Priorities Institute to 80,000 Hours?

comment by Michelle_Hutchinson · 2019-12-04T14:28:03.758Z · score: 42 (19 votes) · EA(p) · GW(p)

A number of factors, but the biggest was suitedness to the role. I tend to get a lot of energy from talking to other people. My role at GPI was very independent - at the time I was the only operations person there, and academics tend to work fairly individually on their research. By comparison, my current role involves not just talking to the people I advise; the team is also more collaborative in general (for example, it makes sense for the research team and advising team to collaborate quite a bit, because advising calls are both how we get out quite a bit of our research and a good way to find out what research we might want to do more of). I also enjoy discussing EA research, and having an incentive to keep more up to date on it, which was less the case in a purely operations role. I was actually surprised how much my greater enjoyment of the role has led to my working more hours and therefore being more productive. I found the GPI role engaging and loved the team, so I hadn't thought of myself as not enjoying the job. But my greater enjoyment of, and therefore productivity in, the 80,000 Hours role has updated me somewhat on the importance of finding a role you really like.

There were also various situational factors: I had done the initial set-up of GPI, at which point my role was pretty amorphous and hard to hire for. But by the time I left, when the initial grant I got was running out so we were issuing a new contract anyway, it was a better-defined role, so it was easier to find a good replacement. In addition, I had just had a late-term stillbirth, and so was emotionally keen on a change.

comment by Milan_Griffes · 2019-12-05T17:51:14.554Z · score: 14 (8 votes) · EA(p) · GW(p)

Has 80k considered partnering with academic researchers to run studies on its impact / the impact of different approaches to advice-giving?

Randomization seems straightforward given that demand for 80k advising is larger than supply.

comment by Michelle_Hutchinson · 2019-12-06T09:24:51.062Z · score: 4 (3 votes) · EA(p) · GW(p)

This is really a broader 80,000 Hours strategy question, which as I said I don't plan to discuss. I think there has been some thought put into such partnerships but I don't know about the details.

comment by Aidan O'Gara · 2019-12-07T04:39:50.290Z · score: 11 (5 votes) · EA(p) · GW(p)

Thanks so much for doing this, your answers are really informative. :)

Here's a bunch of questions, all of them tied together, so feel no obligation to answer all (or any) of them.

Which individual parts of advising do you think are the most and least valuable? You listed these components above, which are most critical?

discussing cause prioritisation, suggesting career options the person hadn't yet considered, helping rank options, providing encouragement to apply for things where the person might be too diffident, making introductions, giving more information / context on specific roles or organisations, recommending particular resources, brainstorming a concrete plan / next steps.

Can you say more about the relative value with advising of you being "a sounding board" and "helping people think through a fundamentally difficult and personal decision", compared to you "hav[ing] a bunch of information [advisees] don't"?

My underlying question is whether I (and other EAs) should spend much more time concretely planning my career than I am. (See here [EA · GW] for my background thoughts.) If advising is valuable because it forces people to sit down and seriously plan their careers, then people could get the same value by planning on their own time. On the other hand, if the value of advising is something unique to 80k - information, insights, abilities, connections - then people probably can't replicate the success of advising alone.

In general, do you think most EAs aren't spending enough time on concrete career planning? In your opinion, how much of the benefit of advising could be achieved by someone independent of 80k by seriously researching and planning for a day?

Do you actually use the A/B/Z career planning tool described here? Is that out of date? Do you think that's a very good way to plan your career, or might you suggest others?

comment by Michelle_Hutchinson · 2019-12-10T17:30:34.666Z · score: 13 (7 votes) · EA(p) · GW(p)

> Do you actually use the A/B/Z career planning tool described here? Is that out of date? Do you think that's a very good way to plan your career, or might you suggest others?

We still endorse the general gist of 'come up with an A/B/Z plan', but no longer use that specific tool. Our more up to date framework is here.

I think the idea of doing an A/B/Z plan is a really good one. My impression is that because applying for jobs is so aversive, people often minimise the number of things they apply for both by not aiming as high as they could and by not considering what they would do if things really went worse than they're expecting. Hiring processes seem to contain quite a lot of randomness, and even when they don't are hard to predict from the outset. That means it seems worth both shooting for things that you have only a small chance of getting but would be excellent if you do get them, and worth making sure you know what your back up would be if things go much worse than you expect.

One thing to say about these is that people sometimes read 'plan A' as 'the role I most want' and 'plan B' as 'another role, which is easier to get'. In fact, 'plan A' is intended to be some type of role - for example, going to grad school - so would itself involve applying to a whole range of specific options of differing levels of competitiveness.

comment by Aidan O'Gara · 2019-12-10T18:58:18.761Z · score: 6 (4 votes) · EA(p) · GW(p)

Thanks a ton for these responses Michelle, very helpful. Hopefully I'll be able to get back to you soon with some more questions and clarifications.

comment by Michelle_Hutchinson · 2019-12-07T09:35:09.024Z · score: 9 (5 votes) · EA(p) · GW(p)

> Which individual parts of advising do you think are the most and least valuable? You listed these components above, which are most critical?

From the advising sessions I've done, the cases where I've been able to add most value seem to be the ones where I knew about some specific organisation / role / project that the person wasn't aware of and would be a good fit for, which I could tell them about and encourage them to apply for. I actually think this is rather unfortunate, because I'd like EAs to be exploring broadly and getting involved in many different sectors and organisations. For this reason, I think the work Maria is doing on expanding our job board is really important - it means being able to discuss concretely roles at many different foundations, specific roles to get research assistant experience etc.

From looking through past cases where people made large impactful plan changes based on talking to the team, a couple of things seemed to come out as particularly significant: recommending particular resources and providing encouragement. (Note that the number of plan changes I was looking over here wasn't super large - it was only the ones that were most significant, and for which we had enough information that I could put together a pretty comprehensive story of what caused them to change their plans.) 'Encouragement' here sometimes meant providing an outside view that the person's plan seemed sensible and plausibly impactful despite being non-traditional, and sometimes meant making clear that the person was very welcome in the EA community and that it was worth their applying to various specific opportunities even though they might feel they were underqualified. Another component which seemed useful was making introductions, though that is less dependable: while there are usually useful resources to point a person to on whatever they'd like to know more about, it's more hit and miss whether we happen to know someone it would be sensible to introduce them to.

It's a bit more difficult to say which things are least valuable - various things came up in fewer cases of people making impactful career changes, but there weren't any I'd have expected to be very useful to people that then never came up as useful. All the others I mentioned came up sometimes, but not frequently, as being useful. I think discussing cause prioritisation might be less useful than I would have intuitively thought; my guess at why is that it requires a lot of thought, not just a couple of minutes' conversation.

For some components it seems particularly tough to figure out whether or not they're useful - in the case of helping someone to form a concrete plan, or simply getting the person to think seriously about their long term career, it's really hard to figure out whether the session made any difference or whether they would have done that themselves anyway. It seems pretty likely the person themselves doesn't know the answer to this counterfactual.

comment by Michelle_Hutchinson · 2019-12-09T09:41:21.609Z · score: 8 (4 votes) · EA(p) · GW(p)

> Can you say more about the relative value with advising of you being "a sounding board" and "helping people think through a fundamentally difficult and personal decision", compared to you "hav[ing] a bunch of information [advisees] don't"?
>
> My underlying question is whether I (and other EAs) should spend much more time concretely planning my career than I am. (See here for my background thoughts.) If advising is valuable because it forces people to sit down and seriously plan their careers, then people could get the same value by planning on their own time. On the other hand, if the value of advising is something unique to 80k - information, insights, abilities, connections - then people probably can't replicate the success of advising alone.
>
> In general, do you think most EAs aren't spending enough time on concrete career planning? In your opinion, how much of the benefit of advising could be achieved by someone independent of 80k by seriously researching and planning for a day?

As much as possible, we try to write up, or discuss on the podcast, information which we think would help people with career decisions, so in a way you might expect the vast majority of the benefit of advising to come from things like being a sounding board. It is of course hard to find specifically the information that applies to you amongst all the information available, so that's something I'd expect to be able to continue to help with. And people often have specific gaps in their knowledge, where they haven't come across some particular concept or possible role yet. But overall I do think a lot of the benefit comes from people taking the time to sit down and think seriously about their career in a way they might not have otherwise. Some evidence for this is that people fairly often report that simply filling in the preparation document for the call is useful for them. (It asks: what options are you considering and why; what kinds of roles are you most suited for; and what are your key uncertainties.)

I very much agree with the comment you linked to, and I'm really glad to hear you're thinking of turning it into a top-level post. I don't know how much time most EAs spend planning their career, but I would expect most people could get a lot of benefit from doing more of it. Thinking through what the best types of roles to apply for would be, researching specific roles, and then applying to competitive things you think you only have a small chance of getting are all really aversive. Standard careers advice doesn't tend to give a terribly helpful framework for doing these in a way that will be most impactful. So I think there are good reasons why almost all of us put too little time into this, and why having specific time allotted to it and a person to talk it through with would be helpful.

I definitely think people can put themselves in a good position to make these decisions though. This article gives an outline of the process people could follow to make a career decision. Once you have some of these thoughts written down, getting a friend to comment on them and discuss them with you seems useful. They might also be able to act as an accountability buddy, to help you apply widely even when that feels frustrating and time consuming. For many people this can get quite a bit of the benefit of our advising. That's particularly true of those who have already read widely about EA topics and know others in the community. Having done this before doing advising with us is also really helpful, because it means we can tell better which people we'll be most useful for, and with them focus on the parts we can add that the person couldn't as easily do themselves (like more in-depth information, or making introductions).

comment by EdoArad (edoarad) · 2019-12-04T12:42:38.797Z · score: 10 (9 votes) · EA(p) · GW(p)

Regarding GPI, I guess it could have ended up different than it currently is. What were some major decisions related to how GPI is currently structured?

comment by Michelle_Hutchinson · 2019-12-04T15:23:51.965Z · score: 33 (16 votes) · EA(p) · GW(p)

A couple of things I think have been really significant to its success are its being part of Oxford University and its having a very well-respected and talented director (Hilary Greaves). I think this gave it credibility from the get-go, which has allowed it to hire really top researchers. Doing that seems incredibly important if global priorities research is going to become a well-respected field.

Fundraising-wise, we started off by applying for academic grants, since they have lower opportunity cost than being funded by EA funders. We had some early success with a small grant, but didn't get any of the larger ones we applied for. We decided that that wasn't worth the time commitment, since to get them we needed large time input from talented researchers, and EA funders actually preferred to pay for the time of those researchers to go towards actual research. In addition to the time commitment for fundraising from EA donors being smaller, it was extremely useful to get the input of those donors - they tended to have excellent advice, which they might not have had time to give us had we not been fundraising from them.

Two important ways in which GPI roles differ from many academic roles are that they require people only to do research (rather than teaching or admin) and that the institute functions pretty collaboratively - it has a central research agenda, mandatory seminars, and an aim of researchers working together on papers. The former acts as an incentive for top researchers to want to work there, as well as being a more valuable use of their time. The latter aims to make the research produced more goal-oriented and impactful (the central agenda) and to make the most of the fact that different people have different comparative advantages (some are great at coming up with ideas, others at meticulously working through a problem in detail).

One challenge we faced was getting economists as well as philosophers on board - our network was far more philosophy heavy, and Oxford's Philosophy department is much stronger than its Econ one, so it's harder to get economists to want to move there. We tried fairly hard to make it interdisciplinary from the get go, but I think it's still something they're keen to do more on.

comment by Peter_Hurford · 2019-12-04T13:37:53.965Z · score: 9 (8 votes) · EA(p) · GW(p)

What is the future of 80K's approach to advising?

comment by Michelle_Hutchinson · 2019-12-05T11:28:49.541Z · score: 24 (10 votes) · EA(p) · GW(p)

We seem to have fairly good evidence for the cost-effectiveness of the current model (one-off conversations of around 45 minutes, gathering some information from people beforehand, and with some email follow up afterwards). So we expect that model won't change a huge amount over the next year.

One thing we'll be focusing on early in the new year is getting a more fine grained sense of where the value in advising comes from. There are a whole bunch of different things we do in an advising call, with different people getting value from different parts. This includes: discussing cause prioritisation, suggesting career options the person hadn't yet considered, helping rank options, providing encouragement to apply for things where the person might be too diffident, making introductions, giving more information / context on specific roles or organisations, recommending particular resources, brainstorming a concrete plan / next steps. We have some sense of which of these are more commonly helpful and for which people, but not yet as much understanding as we'd like of what parts we should be focusing on. Learning more about this seems important to do while the team is small, because the answer is likely to affect what kinds of hires we make in the future, and because changing strategy is harder to do with a larger team.

There are a few other specific things we're likely to want to experiment with over the next year. One is thinking through ways to make our advising process more efficient, for example by writing up bits of advice we find ourselves often giving. We did a bit of that by producing a podcast episode on advising. Another is thinking through how much we should be a team of specialists (in the way that Niel is a US AI policy specialist) versus generalist advisers (which Jenna and I currently are). At the moment, we sometimes get feedback that it would be helpful if the person an advisee talked to had a more in depth knowledge of some field, but on the other hand we also frequently talk to people who are considering a broad range of options and would like to discuss and compare all of them.

comment by shaybenmoshe · 2019-12-07T09:44:24.783Z · score: 1 (1 votes) · EA(p) · GW(p)

Thank you for this answer (and the rest of them!). Could you link to that podcast episode on advising?

comment by Michelle_Hutchinson · 2019-12-08T10:25:40.359Z · score: 3 (2 votes) · EA(p) · GW(p)

I'm afraid it's not out yet. It will come out in the new year, likely when I'm back at work.

comment by shaybenmoshe · 2019-12-08T12:35:32.829Z · score: 1 (1 votes) · EA(p) · GW(p)

Oh I see, I misunderstood you.

Thanks, looking forward to the episode to come out.

comment by Peter_Hurford · 2019-12-04T13:36:24.487Z · score: 8 (7 votes) · EA(p) · GW(p)

What have you changed your mind on recently?

comment by Michelle_Hutchinson · 2019-12-05T14:23:06.685Z · score: 26 (11 votes) · EA(p) · GW(p)

I always find this a bit hard to answer, because I often feel really uncertain about the big / important questions, such that it's hard to know what my view exactly is or how it changes. Here are a few things I've been surprised about recently though:

  • The research in 'Destined for War' on how one superpower overtaking another has usually led to a war throughout history, and its conclusion that the US and China need to work hard at avoiding that trap. I don't think I would otherwise have thought about war as being in some sense the 'default' outcome of China's growth surpassing that of the US. It definitely made me more worried about that relationship.
  • I would have thought there were a lot of resources going into peace building, and that many of those would be focused on avoiding great power war. But talking to people about this area it seems like there is less work being done on it than I would have thought. (That's not to say we should work on it - it may be that only governments can really do anything on this, and there's no point in philanthropists putting money towards it.)
  • I had thought that if you wanted to work in a particular area in government in the UK as a civil servant (eg on biotechnology) you should aim to get a job in the most relevant department for that and stay there. But from talking to people it seems that it's actually well thought of to have experience with multiple departments, and it can be easier to move up quickly by moving around. That doesn't feel intuitive to me compared to getting more specific expertise.
  • I've been finding it interesting seeing what biases I've had from being around academia so much. One that was particularly noticeable to me was that coming out of a philosophy PhD onto the philosophy job market you expect to apply for tens of jobs, plausibly over 100, and to get almost none (or in fact none) of them. Obviously, that makes applying for jobs a gruelling and thankless task, but I think there's some sense of it being less personal because everyone is applying to tonnes of things and getting rejected by the vast majority of them. That culture seems pretty different from what some people experience coming out of university.
comment by Peter_Hurford · 2019-12-04T13:36:08.352Z · score: 8 (7 votes) · EA(p) · GW(p)

What do you think the typical EA Forum reader is most likely wrong about?

comment by Michelle_Hutchinson · 2019-12-05T17:53:17.059Z · score: 74 (33 votes) · EA(p) · GW(p)

One thing I often see on the forum is a conflation of 'direct work' and 'working at EA orgs'. These strike me as two pretty different things, where I see 'working at EA orgs' as meaning 'working at an organisation that explicitly identifies itself as EA' and 'direct work' as being work that directly aims to improve lives as opposed to aiming to eg make money to donate. My view is that the vast majority of EAs should be doing direct work but not at EA orgs - working in government, at the think tanks, in foundations and in influential companies. Conflating these two concepts seems really bad because it encourages people to focus on a very narrow subset of 'direct impact' jobs - those that are at the very few, small organisations which explicitly identify with the EA movement.

A trap I think a lot of us fall into at some time or other is thinking that in order to be a 'good EA' you have to do ALL THE THINGS: have a directly impactful job, donate money to a charity you deeply researched, live frugally, eat vegan etc. When, inevitably, you don't live up to a bunch of these standards, it's easy to assume others will judge you. That has usually turned out to be wrong in my experience. People greatly differ in how much of a sacrifice specific things are to them, and how comfortable they are with different levels of sacrifice. I felt very guilty about eating meat for years, without succeeding in really changing my eating habits at all, until a colleague (who was vegan) donated to ACE on my behalf as an off-set and told me to quit spending emotional energy on my diet and get back to work. Another colleague, on joining the organisation, was worried people would judge her ring as a waste of money (it was an artificial diamond, and so cheaper than it appeared), but not a single person had noticed it except to think how pretty it was. In my experience, people we meet in this community are all trying hard to help others, and while doing that they're appreciating the great work of those around them doing the same, regardless of what form that takes. 10 years into Giving What We Can's life, it still blows me away that there are so many people willing to give away 10% of their incomes to make the world a better place. It's great that we're all pushing ourselves to do more, but I hope people feel appreciated rather than judged by the larger community.

comment by jessica_mccurdy · 2020-01-04T16:50:30.792Z · score: 1 (1 votes) · EA(p) · GW(p)

Could you possibly share how much the ACE off-set was? I have been having trouble finding a good number for this when people ask me about it.

comment by Michelle_Hutchinson · 2020-01-06T13:13:14.456Z · score: 4 (3 votes) · EA(p) · GW(p)

I think he donated £25 for that year, but I'm not sure how he picked that number and I have to admit I haven't been very systematic since then. I think the following year I donated £100 to ACE, then missed a year, then for 2 years did 10% of my annual donations to the animal welfare EA fund (I'm a member of Giving What We Can, so that's 1% of my salary).

I'm not sure I have a reasoned case for donating to animal welfare charities as offsets, since the animals that are helped are different to those I harm, and on consequentialist grounds it would surely be best to make all my donations to the organisation I think will help sentient beings most. But it seems pretty good to remember that I think it's important and impactful to help various groups to whom I don't give the lion's share of my donations, and it seems plausibly good to show to others that I care about them by doing something concrete. With those considerations in mind it simply seems important for the donation to be an amount that feels non-negligible to me and others, rather than an amount exactly equal to the harm I'm doing. (That may simply be a rationalisation though, because I would rather not know exactly how much harm I'm causing and it would be a hassle to figure it out.)

comment by Robert_Wiblin · 2020-01-17T12:33:13.508Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Jessica, IIRC the main problem you'll likely encounter is that some naïve cost-effectiveness estimates will give you a really low figure, like donating $1 to corporate campaigns is as effective as being vegan a whole year. (Not exactly, but that order of magnitude.)

Given that, I'm inclined to just make it the lowest amount that feels substantial and like it would actually plausibly be enough to make someone else veg*n for a year — for me that means about $100 a year.

comment by EdoArad (edoarad) · 2019-12-04T12:37:34.650Z · score: 7 (7 votes) · EA(p) · GW(p)

What are some things you learned on the job that helped you become better at giving career advice?

comment by Michelle_Hutchinson · 2019-12-05T17:21:37.297Z · score: 14 (7 votes) · EA(p) · GW(p)

One is thinking more about how to make social interactions go well. For example, I have a tendency in 'work' settings to want to jump straight to business. So that was my initial inclination on advising calls. But it's actually important when discussing career decisions with people that they feel at ease and like you find it easy to communicate with each other. So I've tried to increase the friendliness of our initial interaction, rather than jumping straight in with really specific questions.

Most of the cases where I've been most helpful to people are ones where I had really specific things to recommend: a particular job they seemed suited to, an internship they could immediately apply for, or just a specific person likely to know of a research project they could get started on. Whether I happen to know of an opportunity really suited to the person is of course often a matter of luck, but it's made me more likely to check our job board before I talk to someone for things that might match their background. It's also made me appreciate how important it is to talk in concrete terms about the specific next steps the person might take after the conversation (including things like when they might take those steps).

Providing people with encouragement is surprisingly often useful. Job hunting is really stressful and time consuming, which makes people pretty keen to apply for fewer options than might be ideal. Also, trying to go for the most impactful job often means taking a less traditional route, or doing something that doesn't follow naturally from your background. So you could easily be in a position where your family and classmates all think that the option you're considering is pretty weird, which makes it very natural to question whether you could really be right in pursuing it. Getting an outside view from someone who has the same values as you and can look at your situation more dispassionately than you can often be surprisingly helpful for counteracting both these effects.


comment by vaidehi_agarwalla · 2019-12-05T20:18:00.986Z · score: 5 (4 votes) · EA(p) · GW(p)

Sort of tangential, but on the topic of encouragement during the job hunt process, I found that after doing a number of interviews with people in the midst of a career change process (from a wide range of backgrounds), a good number of people felt energized/encouraged just by having the chance to talk about their situation. This was a context where we mostly did active listening and asked some guiding questions. I think being able to explicitly think through a career change and take a bird's eye perspective might be very valuable for some people.

comment by BrianTan · 2019-12-06T00:03:05.381Z · score: 6 (6 votes) · EA(p) · GW(p)

What would be your advice to people who want to do EA career advising in their local community? Would 80,000 Hours be willing to release a guide, or train people on how to do this, so that other EAs can advise a lot more people than 80K can?

comment by Michelle_Hutchinson · 2019-12-06T16:55:28.927Z · score: 23 (11 votes) · EA(p) · GW(p)

I think a key piece of advice I'd have is to think of what you're doing more as being a sounding board than as trying to convey information. People asking for careers advice are often pretty keen to get 'answers', and tend to assume that others have a bunch of information they don't. I often feel I fall into this trap, of thinking that others must know the answer to 'how impactful is x', when usually they have little more information than me. So I think it's important to push back on the idea that we can give people answers to what role will be most impactful for them, and make clear that what we're doing is helping them think through a fundamentally difficult and personal decision. I think reading articles like our one on making tough career decisions might be helpful for doing that. I think the community as a whole will do better if we try to get lots of people working on the hard problem of what's most impactful, than if we expect to get answers from just a few people and then try to propagate them (also because doing the latter means the information is likely to get distorted if people aren't thinking it through for themselves).

I'd also try to be well versed in what resources there are around on different causes, career paths, jobs etc. You're usually only talking to someone for an hour, but if you can use that hour to suggest a bunch of articles/books/podcasts/videos to them, they might end up spending many hours on those.

Coordination amongst local communities seems like it's really valuable - particularly if you can find other local groups that are particularly similar to yours, or have also experienced some particular problem that you're currently having. There's an EA groups slack, with a career planning 1-on-1s channel, which seems very useful for getting people working together. This seems all the more valuable for new local group leaders getting up to speed.

Unfortunately we don't have capacity to write a guide on this, or to train people on how to do it. We might in the future, but unfortunately it won't be soon. My impression is that the Oxford local group has written a brief guide on it which they're considering sharing with others.

comment by Milan_Griffes · 2019-12-05T17:56:08.794Z · score: 5 (4 votes) · EA(p) · GW(p)

What feels most limiting to your advising work at 80k?

(i.e. what things are most keeping your work from being what you'd like it to be in the ideal case?)

comment by Michelle_Hutchinson · 2019-12-06T15:41:32.747Z · score: 14 (7 votes) · EA(p) · GW(p)

One thing that's making my work less valuable is how well the EA community is growing - the fact that for a lot of the kinds of information I might give people, many people are already coming across it through other means! (Which is why we tend to talk to people who have had less contact with the EA community so far.)

More seriously: I think the two main things that feel most limiting are information and time/capacity (which also interrelate, because if we had more time we could gather more information). On the former, I both mean that I feel limited by not knowing as much as I'd like to about specifically what parts of advising are the most useful for people and that I'd like to know more about different careers - their impact, how to get into them etc. One specific thing I'd like to know more about is concrete organisations and roles that seem really high impact, because it's so much more actionable for a person to have specific things suggested that they could apply for than to discuss how they could go out and research the organisations in a particular sector. I think this is one of the reasons that effective altruists tend to talk as if working for organisations that identify as effective altruist is the best thing to aim for - these orgs are few in number and therefore easily identifiable, whereas (for example) the UK government is huge and there are lots of choices to make about departments you might work for and specific types of roles to apply to.

With regard to time, I would appreciate more time to be able to talk to more people, to be able to talk to people for longer, and to be able to work on a greater number of projects - for example working with local groups on giving careers advice. This object level work also trades off against spending time on building up the capacity of the team so that in future we'll have more time for object level work (and of course against improving my advice in other ways, such as learning more!).

In terms of which of these limiting factors seem best to work on: For now, I'm keen not to decrease the cost-effectiveness of advising, which means likely not spending more time per person we talk to (for example). I'm aware that I could always learn more in order to finesse the advice I give, so while I want to continue working on this, I try to prioritise what seems most important to learn about. On balance, the most limiting thing after having time to do work (since I'm on maternity leave) probably seems to me to be having an accurate enough understanding of what parts of advising are most valuable to scale the team up further (for example, knowing whether we should be hiring specialist advisers in specific areas, or more generalists). I'll be trying to work more on that in the new year when I'm back to work (along with getting through our waitlist).

comment by Milan_Griffes · 2019-12-07T00:07:20.923Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks for this thorough answer :-)

comment by EdoArad (edoarad) · 2019-12-04T12:36:01.177Z · score: 4 (4 votes) · EA(p) · GW(p)

I'm assuming that you are somewhat risk averse in the 80K career advice, in that you avoid suggesting speculative cause areas and speculative career paths and other speculative suggestions.

Is that the case? If so, what are some examples of career advice that you intuitively guess that you probably should give? (perhaps, advice that someone else should give)

comment by Michelle_Hutchinson · 2019-12-06T16:33:38.687Z · score: 11 (6 votes) · EA(p) · GW(p)

I'm not exactly sure of the extent to which I'm risk averse. I don't tend to have super strong views about the kinds of advice I'm giving people, which means that usually I feel able to give my actual view along with how uncertain I am about it. That has the advantage that I can usually be totally open and candid, though the disadvantage that it's obviously a bit less useful to get an answer along the lines of 'here's a reason to think A is higher impact, here's a reason B is higher impact, on balance I might go for B, but I think there's a strong case for each...' than 'B seems much better'. I also tend to be naturally risk averse, which means that my natural inclination is to suggest people go for the safer of different routes. Eg I'm very hesitant to suggest someone drop out of a degree, and hesitant to recommend someone quitting a job to take time to study or similar, rather than only quitting when they have another lined up. (I'm decidedly more on the risk averse end of the spectrum than some of my colleagues, for example.)

There are probably a few cases where I feel the need to be extra risk averse though:

  • In cases where someone would go into debt in order to pursue some course of action, I feel very hesitant to advise them to do that (particularly if they themselves are clearly worried about it). I find this a difficult trade off to make, because particularly for undergraduate degrees it seems really important to me to go to a top university, and I'd definitely expect people to be able to pay off debt they go into in order to go to those. But at the same time, I'd never want to push someone into taking on more debt than they're comfortable with.
  • I worry about cases where a particular role seems like it might be very stressful, and even possibly lead to burnout. It's really hard to know for a specific person whether a role will be stressful for them, even if there's reason to think it would be for some people. It feels emotionally much worse to me to have nudged someone towards a role that ends up being very stressful for them than to have nudged them away from a role that turns out to work really well for them. Some part of that seems right in expected value terms - leading someone to burn out is likely much worse for them than their alternative job, while it's likely there's only so much better the role would be for them than the alternative. But I also think as an adviser I have some extra duty of care to make sure I don't give advice that leads to harm.
  • Not quite to do with 'risk aversion' but related is that I don't always push people as hard on their values as I might naturally. People don't typically expect when going into a careers advising session that they'll end up talking much about what their values are and what they care most about, but that's critical for determining what the most impactful role by their lights is. For that reason, I usually start advising sessions with a discussion of values - what causes they care most about and why. That mostly involves me trying to work out what a person's values are, rather than trying to change them. Since my PhD is in philosophy, my natural inclination would be to be a bit more opinionated than that. In particular, when someone says that they don't care at all about people who aren't alive yet, I'd usually be interested in pushing more on that (since that was the topic of my thesis). For example, by discussing hypotheticals like 'if we shouldn't care about those as yet unborn, does that mean I shouldn't donate to AMF if it will take a while for a malaria net to be bought with the money, and so it will save the life of a baby who isn't born yet'. But these are huge questions which people need to read about and think through over a long period of time, rather than trying to argue through in 10 minutes of a 40 minute call.
comment by Michelle_Hutchinson · 2019-12-05T17:24:41.683Z · score: 4 (3 votes) · EA(p) · GW(p)

Could you clarify the question? Is it 'what are things you would say if you didn't need to be risk averse?'?

comment by anonymous_ea · 2019-12-05T17:38:45.605Z · score: 5 (3 votes) · EA(p) · GW(p)

Even if that's not what edoard meant, I would be interested in hearing the answer to 'what are things you would say if you didn't need to be risk averse?'!

comment by EdoArad (edoarad) · 2019-12-05T20:35:53.594Z · score: 4 (3 votes) · EA(p) · GW(p)

Sorry, yes.

There are two ways to use "risk averse" here.

Reducing the risk of giving the wrong advice, or giving advice that points towards a safer career path.

I meant the first - what are things you would say if you didn't fear giving the wrong advice?