80,000 Hours: Mistakes people make when deciding what work to do
post by Aaron Gertler (aarongertler) · 2019-12-27T02:16:46.349Z · score: 11 (6 votes)
This is a link post for https://80000hours.org/2019/12/anon-answers-what-to-work-on/
My notes on the article
- I think people have generally internalized "move along quickly" within EA orgs, but that may be the most underutilized advice in the article among people working in non-EA roles (many of whom are members of this community)
- In my experience, many EA organizations give new hires a lot of room to define their roles if those hires have the necessary skills/experience for their desired job descriptions. The "roles that don't exist yet" advice has been relevant to me at multiple times in my career.
- I strongly second the suggestion that becoming world-class at something has a lot of value that can't necessarily be matched by being "okay" at a high-priority position. You can see more on the subject in my response to a world-class composer of obscure music who asked how that work might compare to a traditional EA career. This is also one reason that I continue to put work into being a world-class Magic: the Gathering player; even if I don't win much money, being a respected author and streamer within that community could open up a lot of opportunities for public messaging and connections to interesting people in other fields.
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the seventh in this series of posts with anonymous answers. Other entries have asked:
- “Is there any career advice you’d be hesitant to give if it were going to be attributed to you?”
- “How have you seen talented people fail in their work?”
- “What’s the thing people most overrate in their career?”
- “If you were 18 again, what would you do differently this time around?” (and other personal career reflections)
- “How risk averse should talented young people be?”
- “What bad habits do you see among people trying to improve the world?”
What mistakes do people most often make when deciding what work to do?
Doing something that they don’t enjoy at all
I don’t think you should just do what you’re passionate about, but I also don’t think you should do whatever has the highest expected value to the world if it’s something that you’re going to hate.
When I started out in my career, the things I was told to work on needed to be done — but they were incredibly draining for me personally. And even though there were other things I could have been working on that I love to do, things that others might find unbearable — there was no attempt to work out what I enjoyed or what I was good at.
I think organisations generally need to become more receptive to finding specific tasks for specific people — something that they’ll be good at and at least somewhat motivated by.
For individuals, it’s hard to get sample data on what a job will actually be like — but not impossible. You can talk to a whole bunch of people working in that area, find out which specific things they’re working on that sound exciting to you. Read as much as you can about the area, and figure out what you think the most effective positions are.
Critically, once you do take a new job — immediately start thinking “is there something else that’s a better fit?” There’s still a taboo around people changing jobs quickly. I think you should maybe stay six months in a role, so the organisation hasn’t totally wasted its time training you — but the expectation should be that if someone finds out a year in that they’re not enjoying the work, or they’re not particularly suited to it, it’s better for everyone involved if they move on. Everyone should be actively helping them to find something else.
Doing something you don’t enjoy or aren’t particularly good at for 1 or 2 years isn’t a tragedy — but doing it for 20 or 30 years is.
You can get a sense for what the most effective role is from outside, but you’ll only find out if you enjoy it by actually doing it — so people should be way more willing to jump ship.
Not focusing on becoming really good at something
One of my biggest disagreements with effective altruism (EA) conventional wisdom relates to my belief in how incredibly valuable it is to achieve a high level of general status and career accomplishment.
If you can be an ‘okay’ AI safety researcher, assuming that’s the best thing to be, versus being truly world-class and remarkable at some other job that might not be very important in itself, I think the second one is probably the better call.
When you really do well at something you become in demand. You get a lot of opportunities. I see a lot of people right now who have incredible opportunities to do important things in the AI world, and those opportunities will never exist for an ‘okay’ AI safety researcher. The reason they have those opportunities is because they got to the top of something. And now, they know people. And they have impressive abilities that are in demand and hard to find. And so OpenAI, or DeepMind, or the government is interested in them, they want their help.
You also get friends and connections that no one else has. I think in a lot of cases, the best way to get truly outstanding people interested in doing effective altruism-focused things is to actually be friends with them. And having something you’re extremely good at can put you in a good position to make those kinds of connections.
So one side is: how valuable is it to be really good at something? And the other side is: how hard is it?
I think people underestimate how hard it is. I wonder if people get a weird, misleading model of this from college. Because in college, everything is on a four-year clock at most, often more like a few-month clock. So a lot of times you can get your A in your class, or you can be president of your club, by just working unsustainably hard for a short period of time.
But in the real world, I think to be really great at something, to be better than other people — you probably need to be a crazily good fit, and you need to put in a really crazy number of total hours that bear no relationship to how many hours you put into anything for a few months in college. That is, you might need something like 30–80 good hours a week for 45–50 weeks a year for at least 5 years, often 10–20 years, in order to get to this world-class level.
This is why I often put more weight than other people put on things like being really excited about a job, working on things you naturally like to work on, and picking things you think play to your strengths. I think that generally those kinds of things are really important, and so usually my advice to someone early in their career is to really focus on learning about yourself, and what you like, and becoming better at things, and finding something that you can do really well. Don’t pick a job because a website told you to pick the job.
Following the paths of similar people
I think people often try to answer the question “where do people like me work?” rather than asking “where could my skills be most useful?”
This means that collectively we can be slow to get new things going.
Say there’s some social science problem, and what’s really needed to help solve the problem is people with really strong technical skills.
But the technically skilled people look at different fields and think “the people who seem impressive to me are professors in maths, or physics, or people going into AI”. Then they look at the social scientists and think “these people don’t seem like they’re good technically, I don’t want to be like them” — not noticing that if they went into that field they wouldn’t be like that.
Not thinking about roles that don’t exist yet
Being really focused on what roles are available now. Take the example of someone who’s an excellent communicator — maybe you tell them they should aim to be a communication officer at an effective altruism themed org. But they say “well, there aren’t any roles open for that”. Okay, that might be literally true. But it’s the wrong mindset; if you become excellent at a broad skill, like writing, public speaking or management, a skill that’s in demand in the effective altruism community — people will fight over you. They’ll fight over you like crazy.
Just thinking about what roles are open right now is only relevant if you literally need to get a job in the next few months — and even then, you should try to find out whether there’s a role that could be created for you, or non-public openings.
If you’re a young person starting out, just go and develop the skills you need outside of the effective altruism community for a few years. And then come back once you’re a superstar writer, speaker, or manager.
Working on the most interesting puzzles
I don’t know if this is a mistake, but it’s something that a lot of people do — they work on whatever happens to pique their interest. I think if you are naturally a problem solver, there are lots of super interesting puzzles in the world, and it can be really hard to resist these. But a lot of the most interesting puzzles in the world are not that critical to solve, while many of the largest problems in the world aren’t necessarily the most interesting puzzles.
I think if you have the mind of someone who likes proving things, you want a definitive solution to a problem that was very difficult to solve. And yet a lot of the biggest problems in the world don’t come in that form; they’re partially empirical and involve a lot of uncertainty, and so you can’t always get a definitive solution.
At the same time, it’s very hard to work on something that you’re not intellectually interested in. So the advice is probably: try to find the intersection of problems that you find intellectually stimulating, and problems that are really important.
I see a lot of people optimising only for things that are intellectually interesting. A lot of the most brilliant minds I know are working in super abstract areas of logic and mathematics.
I totally see how these are interesting problems, and that they have great minds for these, but I wish I could see what kind of progress they could make on one of these much messier, more important problems.
But the opposite is also true. If you find that your work isn’t intellectually stimulating, then it’s going to be really hard to work on it, and it’s probably going to make you sad — you haven’t found the intersection in those cases.
Not thinking enough about developing skills
There’s insufficient attention paid to the question of “what skills will I learn at this job?”
E.g. if you think you have the seeds of being a great manager, is this a place where you’ll develop that skill? People can be really vague about career capital… think about what skills you actually want to develop, check if you will develop them in the job you’re considering.
Not spreading out among many different fields
For effective altruism to be successful as a community, it needs to have work happening in lots of different areas. You might keep hearing about how AI is the cool thing to work on, but it would be a pretty big mistake if everyone actually worked on AI.
Doing a variety of work is not only important for those different fields, but it’s also really valuable in terms of building a healthy and functional intellectual community — compared to a group of people having one swing at one particularly important problem, assuming their world-view turns out to be right.
Assuming direct work is best
I think there’s too much binary thinking around direct work. Often either people think they’re just automatically doing the right thing by working directly, or if they’re not working at an ‘EA org’ — they think their work is useless to the community.
There’s also too much of a focus on “what cause area am I working on?” and not enough on “what kinds of skills am I developing as an early career person trying to do good in the long-term?”
Valuing breadth over depth
I think people put too much emphasis on breadth rather than depth. People are generally not going to reward you for breadth — compared to specialising in an area and going really in depth. It makes sense to think that going for breadth is going to be more interesting, but people probably have the wrong expectations about who else might value it. Others will generally value expertise in a very specific area.
Thinking that their cause is “the one true cause”
There are a lot of people who think that their cause area is “the one true cause area.” Strategically, this isn’t a good place to start a conversation with someone who has different goals. It also just lacks perspective.
There are existential risk focused people who think “how could you work on animal welfare instead of the millions of future generations of people who might exist?” There are animal welfare people who think “how could you work on global poverty when there are 9 billion chickens who suffer every day?”
But some people aren’t going to be attracted to saving millions of future generations, and some people aren’t going to be attracted to preventing the suffering of factory farmed animals. And that should be okay.