My current impressions on career choice for longtermists

post by Holden Karnofsky (HoldenKarnofsky) · 2021-06-04T17:07:29.979Z · EA · GW · 24 comments

Contents

  Some longtermism-relevant aptitudes
    "Organization building, running, and boosting" aptitudes[1]
      Examples:
      How to try developing this aptitude:
      On track?
    Political and bureaucratic aptitudes
      Examples:
      How to try developing this aptitude:
      On track?
    "Conceptual and empirical research on core longtermist topics" aptitudes
      Examples:
      How to try developing these aptitudes:
      On track?
    "Communicator" aptitudes
      Examples:
      How to try developing these aptitudes:
      On track?
    "Entrepreneur" aptitude[4]
      Examples:
      How to try developing this aptitude:
      On track?
    "Community building" aptitudes
      Examples:
      How to try developing this aptitude:
      On track?
    Software engineering aptitude
      Examples:
      How to try developing this aptitude:
      On track?
    Information security aptitudes
      Examples:
      How to try developing this aptitude:
      On track?
    Academia
      Examples:
      How to try developing this aptitude:
      On track?
    Other aptitudes
  Hybrid aptitudes
  Aptitude-agnostic vision: general longtermism strengthening
  How to choose an aptitude
  Some closing thoughts on advice

This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there.

While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options to work on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc.

(Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options.") I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.)

For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit.

I’ve tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I’m sure there are aptitudes I should have included and didn’t, including aptitudes that don’t seem particularly promising from a longtermist perspective now but could become more so in the future.

In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism.

Some longtermism-relevant aptitudes

"Organization building, running, and boosting" aptitudes[1]

Basic profile: helping an organization by bringing "generally useful" skills to it. By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include:

Examples:

Beth Jones (Open Philanthropy Director of Operations); Max Dalton and Joan Gass at CEA; Malo Bourgon at MIRI. (I focused on people in executive roles and gave only a small number of examples, but I could've listed a large percentage of the people currently working at longtermism-focused organizations, as well as people working at not-explicitly-longtermist organizations doing work that's important by longtermist lights. In general, my examples will be illustrative and focused on relatively simple/"pure" cases of someone focusing on a single aptitude; I don't think people should read into any "exclusions.")

How to try developing this aptitude:

There are many different specializations here. Each can generally be developed at just about any organization that has the corresponding need.

In many cases, early-career work in one specialization can give you some exposure to others. It's often possible to move between the different specializations and try different things. (The last three listed - communications, finance/accounting, and law - are probably the least like this.)

I'm especially positive on joining promising, small-but-growing organizations. In this sort of organization, you often get a chance to try many different things, and can get a rich exposure to many facets of helping an organization succeed. This can be an especially good way to get experience with people management and project management, which are often very generally applicable and in-demand skills across organizations. Coming into such a company in whatever role is available, and then being flexible and simply focused on helping the company succeed, can be a good learning experience that helps with both identifying and skilling up at good-fit aptitudes.

On track?

As a first pass, the answer to "How on track are you to develop a longtermism-relevant aptitude?" seems reasonably approximated by "How generically strong is your performance?" Raises, promotions, and performance reviews are all data points here. I think one of the best indicators of success would be that the people you work most closely with are enthusiastic about you and would give you a glowing reference - combined with those people (and the organization you work for) being themselves impressive.

People working on this aptitude might sometimes have feelings like "I'm performing well, but I don't feel I'm contributing to a great mission." In early career stages, for this aptitude, I think performing well is more important than being at an organization whose mission you're enthusiastic about, assuming the work is overall reasonably enjoyable and sustainable. Later on, when you have a relatively stable sense of your core competencies and aren’t growing rapidly, I think it’s good to give the mission more weight.

Political and bureaucratic aptitudes

Basic profile: advancing into some high-leverage role in government (or some other institution such as the World Bank), from which you can help the larger institution make decisions that are good for the long-run future of the world.

While organization-supporting aptitudes are mostly (in the long run) about helping some organization whose mission you're aligned with accomplish its existing goals, political and bureaucratic aptitudes are more about using a position of influence (or an influential network) to raise the salience and weight of longtermist goals within an institution.

Essentially any career that ends up in an influential position in some government (including executive, judicial, and legislative positions) could qualify here (though of course some are more likely to be relevant than others).

Examples:

Richard Danzig (former Secretary of the Navy, author of Technology Roulette); multiple people who are pursuing degrees in security studies at Georgetown and aiming for (or already heading into) government roles.

How to try developing this aptitude:

First, you should probably have a clear idea of what institution (or set of institutions) could be a good fit. A possible question to ask yourself: "What's an institution where I could imagine myself being relatively happy, productive, and motivated for a long time while 'playing by the institution's rules?'" I'd suggest speaking with later-career people at the institution to get as detailed a sense as possible of how long it will take to reach the kind of position you're hoping for; what your day-to-day life will be like in the meantime; and what you will need to do to succeed.

Then, you can try for essentially any job at this institution and focus on performing well by the institution's standards. Others who have advanced successfully should be able to give a good guide to what these are. In general (though not universally), I would expect that advancing along any track the institution offers is a good start, whether or not that track is directly relevant to longtermism.

Sometimes the best way to advance will involve going somewhere other than the institution itself, temporarily (e.g., law school, public policy school, think tanks). Graduate schools present the risk that you could spend a long time there without learning much about the actual career track itself, so it may sometimes make sense to try out a junior role, see how it feels, and make sure you're expecting a graduate degree to be worth it before going for the graduate degree.

On track?

As a first pass, the answer to "How on track are you?" seems reasonably approximated by "How quickly and impressively is your career advancing, by the standards of the institution?" People with more experience (and advancement) at the institution will often be able to help you get a clear idea of how this is going (and I generally think it’s important to have good enough relationships with some such people to get honest input from them - this is an additional indicator for whether you’re “on track”). If you’re advancing and performing well generally, the odds seem reasonably good that you’ll be able to advance in some longtermism-relevant part of the institution at some point.

I think one of the main questions for this sort of aptitude is "How sustainable does this feel?" This question is relevant for all aptitudes, but especially here - for political and bureaucratic roles, one of the main determinants of how well you advance is simply how long you stick with it and how consistently you meet the institution's explicit and implicit expectations.

"Conceptual and empirical research on core longtermist topics" aptitudes

Basic profile: helping to reach correct substantive conclusions on action-relevant questions for effective altruists, such as:

I discuss this one at some length because I know it fairly well. However, I think it's one of the hardest aptitudes to succeed at right now, as it tends to require very high levels of self-directedness.

Examples:

Note that some people in this category do mostly conceptual/philosophical work, while some do mostly empirical work; some focus on generating new hypotheses, while others focus on comparing different options to each other. The unifying theme is of focusing on reaching substantively correct conclusions, not on better communicating conclusions others have reached.

How to try developing these aptitudes:

One starting point would be a job at an organization specifically focused on the type of question you're interested in. So if you want to look for crucial considerations, you might try for a job at FHI; if you want to work on questions about grantmaking, you might try for a job at Open Philanthropy.

I think other jobs are promising as well for developing key tools, habits, and methods:

I also think there are opportunities to explore and demonstrate these aptitudes via self-study and independent work - on free time, and/or on scholarships designed for this (such as EA Long-Term Future Fund grants, Research Scholars Program, and Open Philanthropy support for individuals working on relevant topics).

I think these aptitudes currently require a lot of self-direction to do well, no matter where you're doing them, so trying them on your own seems like a reasonable test (although given the difficulty, I'd suggest a frame of "seeing whether this is enjoyable/interesting/useful" rather than "actively pushing for a job").

The basic formula I see for trying out these aptitudes for self-study is something like:

Some example approaches:

It could also be beneficial to start with somewhat more concrete, tractable versions of this sort of exercise, such as:

In general, I think it's not necessary to obsess over being "original" or having some new insight. In my experience, when one tries to simply write up one's current understanding in detail - even when one's understanding is a very "vanilla" or widely accepted story - points of confusion and uncertainty often come into relief, and one often can learn a lot and/or notice underappreciated points this way. I think it's ideal to write up underappreciated points when one has them in mind, but I also see a lot of value in straightforward, detailed explanations and critical assessments of existing arguments.

On track?

Some example milestones you could aim for while developing these aptitudes:

My very rough impression/guess is that for people who are an excellent fit for this aptitude, a year of full-time independent effort should be enough to mostly reach these milestones, and that 2-3 years of 20%-time independent effort (e.g., one day per week) should also suffice.[3] (For this kind of role, I think there's a lot of important "background processing" of ideas, so I'd expect a 20%-time year to be more than 1/5 as productive as a full-time year.) I would generally consider this "clock" to start as soon as someone is carving out time and forming an intent to try this work (I wouldn't wait until they are successfully spending time on it, since this is one of the most challenging things about the work, as noted above).

Contrast with research "paths": rather than aiming to work on a particular topic such as AI governance or cause prioritization, I'm suggesting starting with whatever topics you have the energy and interest to write about. I think someone who succeeds by the above criteria has a good shot at building a career around research on some topic in the general vicinity. Because of this, it should be possible to try/explore this aptitude without needing a particular job offer in a particular area (although, again, I think the success rate will generally be low).

"Communicator" aptitudes

Basic profile: helping to communicate key, substantively well-grounded messages and ideas to particular audiences. The audiences could be very general (e.g., writing for mass-market media) or more specialized (e.g., writing for policymakers on particular issues). Example messages could be the importance of global catastrophic risks, the challenge of AI alignment, the danger of covert state bioweapons programs, the general framework of effective altruism, and many more.

Examples:

How to try developing these aptitudes:

First, you should have some idea of what sort of target audience you'd like to communicate with. A possible question to ask yourself: "What's a type of person that I understand and communicate with better than most EAs/longtermists do?"

Then, you can try to get any job that involves communicating with this audience and getting feedback on a regular basis - whether or not the communication is about EA/longtermist topics. The main aptitude being built is general ability to communicate with the audience (although understanding of EA/longtermist topics will be important at some point as well). So if you’re interested in communicating with fairly general/widespread audiences, most jobs in journalism, and many in public relations and corporate communications, would qualify.

I also think there's a lot of opportunity to build this sort of aptitude through independent work, such as blogging, tweeting, podcasting, etc. I expect that some of the people with the greatest potential as communicators are those who find it relatively easy to create large amounts of content and connect with their target audience naturally. (Though for anyone doing independent public work, I'd advise taking some measures to avoid publishing something unintentionally offensive, as this could affect your career prospects for a long time even if the offense is the result of a misunderstanding.)

On track?

As a first pass, the answer to "How on track are you to develop a longtermism-relevant 'communicator' aptitude?" seems reasonably approximated by "How generically successful are you by the standards of the (communications-focused) career track you're on?" The more successful, the better position you'll likely be in at some point to find ways to communicate important longtermist ideas to your target audience.

Building a following via independent content creation would also be a clear sign of promise.

In both cases, it seems realistic to get a pretty good read on how you're doing within 2-3 years.

"Entrepreneur" aptitude[4]

Basic profile: founding, building, and (at least for some time) running an organization that works on some longtermist goal. Some people found organizations primarily as a way to have independence for their research or other work; here I am instead picturing someone who is explicitly aiming to invest in hiring, management, culture- and vision-setting, etc. with the aim of building an organization that can continue to function well if they leave.

(Not all organizations are founded by someone who is explicitly focused this way; sometimes an organization is founded by one person, but a lot of the "entrepreneur" work ends up being done by people who come in later and take the top executive role.)

Examples:

Some pretty clean examples (with the organization that was founded in parentheses, regardless of whether the person is still there) would be Ben Todd (80,000 Hours); Jason Matheny (CSET); Elie Hassenfeld and myself (GiveWell). Many other longtermist organizations had a fair amount of early turnover at the top (leaving it somewhat unclear who did the bulk of the "entrepreneur" work) and/or are academic centers rather than traditional organizations.

How to try developing this aptitude:

Entrepreneurship tends to require juggling more duties than one can really learn how to do "the right way." It crucially relies on the ability and willingness to handle many things "just well enough" (usually with very little training or guidance) and focus one's energy on the few things that are worth doing "reasonably well."

With this in mind, I generally think the person best-suited to found an organization is the person who feels such strong conviction that the organization ought to exist (and can succeed) that they can hardly imagine working on anything else. This is the kind of person who tends to have a really clear idea of what they're trying to do and how to make the tradeoffs gestured at above, and who is willing and able to put in a lot of work without much reliable guidance.

So my general approach to entrepreneurship would be: if there's no organization you have a burning desire to create (or at least, a strong vision for), it's probably not time to be an entrepreneur. Instead it could make more sense to try for a job in which you're learning more about parts of the world you're interested in, becoming more aware of how organizations work, etc. - this could later lead to identifying some "gap in the market" that you’re excited to fill.

I do think that if you have any idea for an organization that you think could succeed, and that you'd be extremely excited to try to create, giving this a shot could be a great learning experience and way of building a general "entrepreneur" aptitude. This is true even if the organization you have in mind does not do longtermist-focused work (for example, if it's a conventional tech startup). Though it's worth keeping in mind that it could take a long time (several years, sometimes >10 years) to get a successful organization to the point where one can responsibly step away and move on to something else.

On track?

In the first couple of years, I think you’re doing reasonably well if your organization is in a reasonable financial position, hasn't had any clear disasters, and has done pretty well at attracting talent. Beyond that, I think it tends to be a big judgment call how an organization is doing.

"Community building" aptitudes

Basic profile: bringing together people with common interests and goals, so that they form a stronger commitment to these interests and goals and have more opportunities and connections to pursue them. This could be via direct networking (getting to know a lot of people and introducing them to each other); meetups and events; explicit recruiting;[5] etc. Referring new people to resources and helping them learn more is also an important component.

Examples:

People organizing local and university EA groups, organizing EAGx conferences, etc., as well as many of the people at the Centre for Effective Altruism.

How to try developing this aptitude:

There is likely some community you're a part of, or set of people you know, that you can immediately start working with in this way: networking and making introductions; organizing meetups and other events, etc. This can initially be done on free time; if you start to build a thriving mini-community, I'd suggest looking for funding to transition into doing the work full-time, and looking into whether you can expand the population you're working with.

On track?

I find it a bit harder to articulate "on track" conditions for this aptitude than for most of the others in this piece, but a couple of possibilities:

Software engineering aptitude

Basic profile: I think software engineering can be useful for longtermist goals in multiple ways:

Examples:

Catherine Olsson and Tom Brown have both done software engineering at OpenAI, Google Brain, and Anthropic.

How to try developing this aptitude:

Software engineering is a relatively well-established career path. You can start with something like App Academy or Lambda School. (For roles at e.g. DeepMind and OpenAI specifically, one probably needs to be in the top few percent of people in these programs.) Just about any software engineering job is probably a good way to build this aptitude; the more talented one's peers, the better.

On track?

See the "On track?" section of "Organization building, running, and boosting" aptitudes.

Information security aptitudes

(In this case, there isn't as much difference between an "aptitude" and a "path." The same applies to the next section, as well.)

Basic profile: working to keep information safe from unauthorized access (or modification). This could include:

This post by Claire Zabel and Luke Muehlhauser [EA · GW] states, "Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity ... More generally, security expertise may be useful for those attempting to reduce [global catastrophic risks], because such work sometimes involves engaging with information that could do harm if misused ... It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them) ... If people who try this don’t get a direct work job but gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand."

I broadly agree with these points.

Examples:

Unfortunately, there aren't many effective altruists with advanced information security careers as of now, as far as I know.

How to try developing this aptitude:

Working on information security for any company - or working in any field of information security research - could be a good way to build this aptitude. I would guess that the best jobs would be ones at major tech companies for which security is crucial: Amazon, Apple, Microsoft, Facebook, and (especially) Google.

On track?

See the "On track?" section of "Organization building, running, and boosting" aptitudes.

Academia

Basic profile: following an academic career track likely means picking a field relatively early, earning a Ph.D., continuing to take academic positions, attempting to compile an impressive publication record, and ultimately likely aiming for a role as a tenured professor (although there are some other jobs that recruit from academia). Academia is a pretty self-contained career track, so this is a case where there isn't a lot of difference between an "aptitude" and a "path" as defined in the introduction of this post.

Being an academic could be useful for longtermist goals in a few ways:

Many academic fields could potentially lead to these sorts of opportunities. Some that seem particularly likely to be relevant for longtermists include:

Examples:

Hilary Greaves at Global Priorities Institute; Stuart Russell at Center for Human-Compatible AI; Kevin Esvelt.

How to try developing this aptitude:

The academic career path is very well-defined. People entering it tend to have fairly robust opportunities to get advice from people in their field about how to advance, and how to know whether they're advancing.

In general, I would encourage people to place high weight on succeeding by traditional standards - both when picking a field and when picking topics and projects within it - rather than trying to optimize too heavily for producing work directly relevant to longtermist goals early in their careers.

On track?

My answer here is essentially the same as for political and bureaucratic aptitudes.

Other aptitudes

There are almost certainly aptitudes that have a lot of potential to contribute directly to longtermist goals, that I simply haven’t thought to list here.

Hybrid aptitudes

Sometimes people are able to do roles that others can't because they have two (or more) of the sorts of aptitudes listed above. For example, perhaps someone is a reasonably strong software engineer and a reasonably strong project/people manager, which allows them to contribute more as a software engineering manager than they could as either a software engineer or a nontechnical manager. In the effective altruism community, "conceptual and empirical research" often goes hand in hand with "communicator" (as with Nick Bostrom writing Superintelligence).

I think it's good to be open to building hybrid aptitudes, but also good to keep in mind that specialization is powerful. I think the ideal way to pursue a hybrid aptitude is to start with one aptitude, and then notice an opportunity to develop another aptitude that complements it and improves your career options. I wouldn't generally recommend pursuing multiple aptitudes at once early in one's career.

Aptitude-agnostic vision: general longtermism strengthening

I think any of the above aptitudes could lead to opportunities to work directly on longtermist goals - at an AI lab, EA organization, political institution, etc. And I think there are probably many other aptitudes that could as well.

However, some people will find themselves best-suited for an aptitude that doesn't lead to such opportunities. And some people will develop one of the above aptitudes, but still not end up with such opportunities.

I think such people still have big opportunities to contribute to longtermist goals, well beyond (though including) "earning to give," by doing things to strengthen longtermism generally. Things that have occurred to me in this category include:

I would think anyone who’s broadly succeeding at many of the above things - regardless of what their job is - is having a large expected longtermist impact. I think being successful and satisfied in whatever job one has probably helps on all of these fronts.

How to choose an aptitude

I imagine some people will want a take on which of these aptitudes is "highest impact."

My main opinion on this is that variance within aptitudes probably mostly swamps variance between them. Anyone who is an outstanding, one-of-a-kind talent at any of the aptitudes I listed is likely having enormous expected impact; anyone who is successful and high-performing is likely having very high expected impact; anyone who is barely hanging onto their job is likely having less impact than the first two categories, even if they're in a theoretically high-impact role.

I also believe that successfully building an aptitude - to the point where one is "professionally in demand" - generally requires sticking with it and putting in a lot of time over a long period. Because of this, I think people are more likely to succeed when they enjoy their work and thrive in their work environment, and should put a good deal of weight on this when considering what sorts of aptitudes they want to build. (I think this is particularly true early in one's career.)[6]

With these points in mind, I suggest a couple of rules of thumb that I think are worth placing some weight on:

  1. "Minimize N, where N is the number of people who are more in-demand for this aptitude than you are." A more informal way of putting this is "Do what you'll succeed at."
  2. "Take your intuitions and feelings seriously." A lot of people will instinctively know what sorts of aptitudes they want to try next; I think going with these instincts is usually a good idea and usually shouldn't be overridden by impact estimates. (This doesn't mean I think the instincts are usually "correct." I think most good careers involve a lot of experimentation, learning that some sort of job isn't what one pictured, and changing course. I think people learn more effectively when they follow their curiosity and excitement; this doesn't mean that their curiosity and excitement are pointing directly at the optimal ultimate destination.)

I do believe there are some distinctions to be made, in terms of impact being higher for a given level of success at one aptitude vs. another. But any guesses I made on this front would be pretty wild guesses, quite sensitive to my current views on cause prioritization as well as the current state of the world (which could change quickly). And I think there's potential for enormous expected longtermist impact within any of the listed aptitudes - or just via aptitude-agnostic longtermism strengthening.

Some closing thoughts on advice

Throughout this piece, I've shared a number of impressions about how to build an aptitude, how to tell whether you're on track, and some general thoughts on what rules of thumb might help to be successful and have impact.

I've done this because I think it helps convey a general framework/attitude for career choice that I believe is worth some weight, and that can complement other frameworks longtermists use.

But I'm generally nervous about giving career advice to anyone, even people I know well, because career choice is such a personal matter and it's so easy for an advice-giver to be oblivious to important things about someone's personality, situation, etc. I'm even more nervous about putting advice up on the internet where many people in many situations that I know very little about might read it.

So I want to close this piece by generally discouraging people from "taking advice," in the sense of making a radically different decision than they would otherwise because of their interpretation of what some particular person would think they should do. Hopefully this piece is useful for inspiration, for prompting discussion, and for raising points that readers can consider on the merits and apply their own personal judgment to. Hopefully it won't be taken as any sort of instruction or preference about a specific choice or set of choices.

I'll also link to this page which contains a fair amount of "anti-advice advice," including quotes from me here (“A career is such a personal thing”), here (“When you’re great at your job, no one’s advice is that useful”), and here (“Don’t listen too much to anyone’s advice”).



  1. Some of the content in this section overlaps with that of 80,000 Hours's content on working at effective altruist organizations, particularly with respect to how one might prepare oneself for a role at such organizations. However, my section excludes research-based and other "idiosyncratic" roles at such organizations; it is about jobs based on "generally useful" skills that could also be used at many non-effective-altruist organizations (some of them giving an opportunity to have longtermist impact despite not being explicitly effective-altruist). In other words, this section takes a frame of "building aptitudes that can be useful to help many organizations, including non-effective-altruist ones doing important work" rather than "going to a non-effective-altruist organization in order to build skills for an effective-altruist organization." ↩︎

  2. I'd expect most investigations of this form to "balloon," starting with a seemingly straightforward question ("What are the odds of nuclear winter this century?") that turns out to rely on many difficult sub-questions ("What are the odds there will be a nuclear war at all? How much particulate matter does a typical nuke kick into the air? Are there bigger nukes that might be deployed, and how much bigger?") It can be very difficult to stay focused on a broad question, handling sub-questions pragmatically and giving a reasonable amount of depth to each. But allowing oneself to switch to answering a narrower and narrower subquestion could make the work more tractable. ↩︎

  3. This takes into account the fact that this kind of work can be very hard to put a lot of hours into. I'd expect even people who are a great fit for it to frequently struggle with maintaining focus and to frequently put in less time than they intended; nonetheless, I'd expect such people to achieve roughly the kind of progress I outline on the calendar time frames discussed. ↩︎

  4. This section is similar to 80,000 Hours's discussion of "nonprofit entrepreneur," with the main difference being my emphasis that entrepreneurship experience with a non-effective-altruist organization (including a for-profit) can be useful. ↩︎

  5. For example, "online organizing" - asking people to take relatively small actions on compelling, immediate topics, resulting in their becoming more engaged and reachable on broader topics. ↩︎

  6. Also see my comments here (under “Not focusing on becoming really good at something”), which were made anonymously but which I'm now attributing. ↩︎

24 comments


comment by Linch · 2021-06-07T21:51:16.776Z · EA(p) · GW(p)

I think this is a really well-written piece. Personally, I've shared it with my interns, and I'm tentatively more inclined to share this with my close contacts than most 80k articles for "generic longtermist EA career advice" (though obviously many 80k articles/podcasts have very useful details about specific questions).

2 things that I'm specifically confused about:

  1. As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

    (Possible ways that my anecdata is consistent with your claims:
    • These people are often in the "Conceptual and empirical research on core longtermist topics" aptitudes camp, and success at conventional metrics is a weaker signal here than in other domains you listed.
    • Your notions of "success"/excellence are a much higher bar than completing a PhD at a top-N school.
    • My friends are wrong to think that getting a PhD/MD was a mistake.
  2. You mention that a crazy amount of total hours is necessary to become world-class excellent at things. I agree with the sentiment that a) fit/talent is very important and b) college and other "normal/default" practices acclimate people to wrongly believing that success is easier and hard work less critical than is true. But when I think about things that matter for longtermist EAs (rather than success at well-defined prestige ladders in fields with established paradigms), I think a lot of outlier success comes from extremely (arguably unsustainably) intense periods with relatively small calendar time or total hours invested. E.g.:
    • A lot of success in early cryptocurrency trading (pre-2019, say) comes from people who were a) very talented, b) willing to see the opportunity, and c) willing to make radical life changes to realize the once-in-a-decade event and immediately jump on it.
    • This seems to have happened a bunch during the pandemic. Eg amateur short-term forecasts, Youyang Gu's modeling, patio11's work with VaccinateCA, etc, all seemed to have been broadly better in most cases than similar work by established experts.
      • Obviously there are important exceptions like the mRNA vaccines and a lot of the testing/sequencing work.
    • My impression (from outside the field) is that a lot of the most important work in AI Safety is done by people who are fairly junior, and without a lot of experience in the field.
    • I imagine "crunch time" for EAs in longtermist causes to look a lot more like "be a generically competent person who has your shit together + is willing to drop everything to work on the hard things that aren't really your specialty but somebody has to do them and nobody else will" than "prepare for 10-20 years working very hard for the exact thing you prepared for, and then emergency times will look pretty close to your specialty so you're well-placed."
      • Perhaps a crux here is that your mental image of "crunch time" looks more like the latter scenario?
      • Or maybe that "crunch time" is just much less important, relatively speaking?
    • A caveat here is that I do agree that a) excellence is very important and b) many EAs (myself included) are perhaps not working hard enough to achieve true excellence.
    • I also agree that a) general excellence and b) hard work specifically is somewhat transferable (eg many of the successful crypto people were great finance traders or great programmers before crypto trading, at least one of which requires insane hours, patio11 was a world-class software evangelizer before his covid work). But I think the importance of being world-class here is "just" building a) the general skillset of becoming world-class and b) the mental fortitude, flexibility, etc of willingness to sacrifice other things when the stakes are high enough, rather than either the direct benefits of your expertise or the network advantages of being around other prestigious/high-status/etc. people.
      • One way in which our models cash out to different actions:
        • If my intuition/heuristic is correct, this points to sometimes doing crunch-time work in less important eras as being the right way to prepare, rather than steadily climbing towards excellence in very competitive domains.
          • Being in "crunch mode" all the time may be actively bad, to the extent that it makes you miss out on great opportunities because you're too zoned into your specific work.
        • On the other hand, if we assume most of the benefits of excellence comes from the networking, etc, benefits of steady excellence, this points much more towards "spend 5-20 years becoming world-class at something that society thinks of as hard and important."

An obvious caveat to these points is that you have much more experience with excellence than I do, and your "closing thoughts on advice" aside, I'm mostly willing to defer to you if you think my heuristics/intuitions here are completely off.

Replies from: Max_Daniel, Max_Daniel
comment by Max_Daniel · 2021-06-07T23:46:55.547Z · EA(p) · GW(p)

As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

Your examples actually made me realize that "successful at conventional metrics" maybe isn't a great way to describe my intuition (i.e., I misdescribed my view by saying that). Completing a top-N PhD or MD isn't a central example - or at least not sufficient for being a central example, and certainly not necessary for what I had in mind.

I think the questions that matter according to my intuition are things like:

  • Do you learn a lot? Are you constantly operating near the boundaries of what you know how to do and have practiced?
  • Are the people around you impressed by you? Are there skills where they would be like "off the top of my head, I can't think of anyone else who's better at this than <you>"?

At least some top-N PhDs will correlate well with this. But I don't think the correlation will be super strong: especially in some fields, I think it's not uncommon to end up in a kind of bad environment (e.g., advisor who isn't good at mentoring) or to be often "under-challenged" because tasks are either too easy or based on narrow skills one has already practiced to saturation or because there are too few incentives to progress fast. 

[ETA: I also think that many of the OP's aptitudes are really clusters of skills, and that PhDs run some risk of practicing only too small a number of skills, i.e., of being considerably narrower. Again this will vary a lot by field, advisor, other environmental conditions, etc.]

What I feel even more strongly is that these (potential) correlates of doing a PhD are much more important than the credential, except for narrow exceptions for some career paths (e.g., need a PhD if you want to become a professor).

I also think I should have said "being successful at <whatever> metric for one of these or another useful aptitude" rather than implying that "being successful at anything" is useful.

Even taking all of this into account, I think your anecdata is a reason to be somewhat more skeptical about this "being successful at <see above>" intuition I have.

Replies from: Linch
comment by Linch · 2021-06-08T01:09:39.895Z · EA(p) · GW(p)

Completing a top-N PhD or MD isn't a central example - or at least not sufficient for being a central example

As an aside, if you're up for asking your friends/colleagues a potentially awkward question, I'd be interested in seeing how much of my own anecdata about EAs with PhDs/MDs replicates in your own (EA) circles (which is presumably more Oxford-based than mine). I think it's likely that EAs outside of the Bay Area weigh the value of a PhD/other terminal degrees more, but I don't have a strong sense of how big the differences are quantitatively. 

comment by Max_Daniel · 2021-06-07T23:56:04.163Z · EA(p) · GW(p)

I find your crypto trading examples fairly interesting, and I do feel like they only fit awkwardly with my intuitions - they certainly make me think it's more complicated.

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things. I think that actually both of these things are practice-able abilities rather than just a matter of "willingness" (or perhaps "willingness" improves with practice). 

And in the few cases I'm aware of, it seems to me the relevant people were world-class excellent at some relevant inputs, in part clearly because they did spend significant time "practicing" them.

The point is just that these inputs are broader than "ability to do cryptocurrency trading". On the other hand, they also don't fit super neatly into the aptitudes from the OP, though I'd guess the entrepreneurial aptitude would cover a lot of it (even if it's not emphasized in the description of it).

Replies from: Linch
comment by Linch · 2021-06-08T01:05:28.595Z · EA(p) · GW(p)

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things.

I agree with this! Narrowly, "chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you, some of the people I know in the space were world-class at an adjacent field like finance trading or programming. Though I'm aware of other people who literally did stuff closer to flying a bunch to Korea and skirting the line on capital restrictions, which seems less reliant on raw or trained talent.

Broadly, I agree that both seeing the opportunity (serendipity?) and willingness to act on crazy opportunities are rare skillsets that are somewhat practicable rather than just a pure innate disposition. This is roughly what I mean by 

But I think the importance of being world-class here is "just" building a) the general skillset of becoming world-class and b) the mental fortitude, flexibility, etc of willingness to sacrifice other things when the stakes are high enough, rather than either the direct benefits of your expertise or the network advantages of being around other prestigious/high-status/etc. people.

But I also take your point that maybe this is its own skillset (somewhat akin to/a subset of "entrepreneurship")  rather than a general notion of excellence.

Replies from: Max_Daniel
comment by Max_Daniel · 2021-06-09T16:25:48.686Z · EA(p) · GW(p)

Narrowly, "chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you, some of the people I know in the space were world-class at an adjacent field like finance trading or programming. Though I'm aware of other people who literally did stuff closer to flying a bunch to Korea and skirting the line on capital restrictions, which seems less reliant on raw or trained talent.

(I agree that having knowledge of or experience in adjacent domains such as finance may be useful. But to be clear, the claim I intended to make was that the ability to do things like "fly a bunch to Korea" is, as you later say, a rare and somewhat practiceable skillset.

Looking back, I think I somehow failed to read your bullet point on "hard work being somewhat transferable" etc. I think the distinction you make there between  "doing crunch-time work in less important eras" vs. "steadily climbing towards excellence in very competitive domains" is very on-point, that the crypto examples should make us more bullish on the value of the former relative to the latter, and that my previous comment is off insofar as it can be read as me arguing against this.)

Replies from: Linch
comment by Linch · 2021-06-09T17:31:15.527Z · EA(p) · GW(p)

(I agree that having knowledge of or experience in adjacent domains such as finance may be useful. But to be clear, the claim I intended to make was that the ability to do things like "fly a bunch to Korea" is, as you later say, a rare and somewhat practiceable skillset.

Got it!

and that my previous comment is off insofar as it can be read as me arguing against this

Thanks for the clarification! Though I originally read your comment as an extension of my points rather than arguing against them, so no confusion on my end (though of course I'm not the only audience of your comments, so these clarifications may still be helpful).

comment by Max_Daniel · 2021-06-05T18:33:50.293Z · EA(p) · GW(p)

I'm curious what you (and others) think about the following aptitude.

(I don't have a particular reason to think my intuition that this is "a thing" is right, and know barely anything about the careers I'm talking about, so I advise not taking this seriously at all as actual advice for now.)

"Negotiation and navigating conflicting interests" aptitude.

This involves things like:

  • Knowing what your own interests regarding some contested issue are.
  • Understanding where other people are coming from when approaching some issue, including in ways that go beyond what they are able to state explicitly.
  • Helping others understand what their interests regarding some contested issue are, helping them communicate those interests well to others, and/or being good at that kind of communication oneself. This includes the ability to translate between different vocabularies and cultural codes.
  • Coming up with creative and original options for how to settle a conflict.

Examples [??]:

  • US top politicians who have a track record at getting bipartisan policies passed, e.g., Joe Biden
  • "Sherpas" and other political staffers involved in the nitty-gritty of international agreements
  • Roger Fisher and William L. Ury
  • Top executives and lawyers dealing with mergers and acquisitions
  • Some aspects of what HR departments do in companies
  • Machiavelli [???]
  • Robert Moses [???]

How to try to develop this aptitude [?]: 

  • Embark on and become conventionally successful in one of the above careers
  • Constructively contribute to the resolution of conflicts that happen around you (there usually is an abundance of them ...)
  • Model United Nations conferences and similar things [???]

On track [???]:

  • You find neither being directly involved in conflicts nor helping to mediate them stressful or unpleasant, and there are several examples of when you've clearly contributed to finding significant Pareto improvements.
  • You are respected by a wide range of different people, and you often find yourself in situations where two people or groups can't have a good conversation with each other, but you get along well with both, pass their "Ideological Turing Tests", and can "talk in their language".
  • If you're doing this professionally, your achievements are recognized by your peers and bosses, you get promoted, and you take on "bigger" cases involving more responsibility etc.

Why do I think this might be important?

  • Depending on the path, I think there are significant synergies with the "organization building etc.", political/bureaucratic, community building, and entrepreneur aptitudes.
  • However, I think there may also be a case for viewing this as a potential 'central' aptitude in its own right. Here's a straw argument for why:
    • Suppose that in 2050, longtermist-aligned MyAICompany makes a research breakthrough that makes them think they would have a decent shot at building a 'transformative AI system' if they had access to 10x-100x their current resources (e.g. compute). They're wondering if and how to talk about this to their main investors, the US government, the Chinese government, big tech companies, etc.
    • The aptitudes from the OP cover: MyAICompany being founded; it being run well operationally; it having good software infrastructure; it having access to sound bottom-line conclusions on relevant research questions; a good supply of longtermist-aligned talent; various other actors (e.g., parts of the US government) being more sympathetic to, or at least having a better understanding of, its goals; no-one stealing their AI research, or being able to undesirably eavesdrop on their subsequent negotiations.
    • However, the aptitudes from the OP conspicuously do not cover (at least not centrally - the relevant capabilities don't seem to be emphasized in any of the other aptitudes): How to structure the conversations with these other actors? How to achieve a good outcome even though there will be a bunch of conflicting interests? 
  • (Secondarily, and anecdotally, I think that a lack of this aptitude has also contributed to at least some EA organizations not always having been "well run" in a generic sense.)
  • I am concerned that due to founder effects and skewed intellectual foundations (e.g. commitment to philosophical views that hide the relevance of de-facto conflicting interests by instead emphasizing how ideal reasoners would converge to shared beliefs and goals), the current prevalence of this aptitude in the EA community is low, and that it is underappreciated.
comment by JP Addison (jpaddison) · 2021-06-04T20:12:52.550Z · EA(p) · GW(p)

Thanks for writing this! I like the aptitudes framing.

With respect to software engineering, I would add that EA orgs hiring web developers have historically had a hard time getting the same level of engineering talent as can be found at EA-adjacent AI orgs.* I have a thesis that as the EA community scales, the demand for web developers building custom tools and collaboration platforms will grow as a percentage of direct work roles. With the existing difficulty in hiring and with most EAs not viewing web development as a direct work path, I expect the shortage to continue.

Also, as practical career advice, I'd recommend that many people who already know how to code somewhat try to get a software engineering job at ~any tech company / startup. That company will spend months training you, and the problems you'll be solving will be much more useful for learning than the toy problems offered by a bootcamp.

* This is not so much to cast aspersions on myself and my colleagues, as to agree with the post that the level of engineering talent in AI labs is very high.

Replies from: AppliedDivinityStudies
comment by AppliedDivinityStudies · 2021-06-04T20:41:32.194Z · EA(p) · GW(p)

I mostly agree, though I would add: spending a couple years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push working at a small high-quality startup. Of course, the difficulty is that those are harder to identify.

comment by MichaelA · 2021-06-06T09:34:55.805Z · EA(p) · GW(p)

I'd be quite interested to hear one or more people from 80k share their thoughts on this post, e.g. on questions like:

  • To what extent do they think there are "disagreements" between their advice/framework and this one, vs something more like this just being a different framework and providing different emphases (which might still therefore lead readers in different directions, especially if readers engage quickly)?
  • To what extent do they think it'd be good if someone thinking about career choice swapped out reading some 80k content for reading this?
    • E.g., would 80k staff think it'd be best to read their full key ideas article, their full career planning process article, and then this? Or maybe read this earlier? Or maybe read this only after reading some problem profiles, career reviews, etc.?
    • E.g., how does this differ between different types of readers?
Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-06-07T19:20:10.876Z · EA(p) · GW(p)

Hi Michael,

Just some very quick reactions from 80k:

  • I think Holden’s framework is useful and I’m really glad he wrote the post.

  • I agree with Holden about the value of seeking out several different sources of advice using multiple frameworks and I hope 80k’s readers spend time engaging with his aptitude-based framing. I haven’t had a chance to think about exactly how to prioritise it relative to specific pieces of our content.

  • It’s a little hard to say to what extent differences between our advice and Holden’s are concrete disagreements v. different emphases. From our perspective, it’s definitely possible that we have some underlying differences of opinion (e.g. I think all else equal Holden puts more weight on personal fit) but, overall, I agree with the vast majority of what Holden says about what types of talent seem most useful to develop. Holden might have his own take on the extent to which we disagree.

  • The approach we take in the new planning process overlaps a bit more with Holden’s approach than some of our past content does. For example, we encourage people to think about which broad “role” is the best fit for them in the long-term, where that could be something like “communicator”, as well as something narrower like “journalist”, depending on what level of abstraction you find most useful.

  • I think one weakness with 80k’s advice right now is that our “five categories” are too high-level and often get overshadowed by the priority paths. Aptitudes are a different framework from our five categories conceptually, but seem to overlap a fair amount in practice (e.g. government & policy = political & bureaucratic aptitude). However, I like that Holden’s list is more specific (and he has lots of practical advice on how to assess your fit), and I could see us adapting some of this content and integrating it into our advice.

comment by MichaelPlant · 2021-06-05T16:31:43.619Z · EA(p) · GW(p)

Thanks for writing this up! I found the overall perspective very helpful, as well as lots of the specifics, particularly (1) what it means to be on track and (2) the emphasis on the importance of 'personal fit' for an aptitude (vs. the view that there is a single best thing).

Two comments. First, I'm a bit surprised that you characterised this as being about career choice for longtermists. It seems that the first five aptitudes are just as relevant for non-longtermist do-gooding, although the last two - software engineering and information security - are more specific to longtermism. Hence, this could have been framed as your impressions on career choice for effective altruists, in which you would set out the first five aptitudes and say they apply broadly, then note the two more that are particular to longtermism.

In the spirit of being a vocal customer, I would have preferred this framing. I am enthusiastic about effective altruism, but ambivalent about longtermism - I'm glad some people focus on it, but it's not what I prioritise - and found the narrower framing somewhat unwelcoming, as if non-longtermists aren't worth considering. (Cf. if you had said this was career advice for women even though gender was only pertinent to a few parts.)

Second, one aptitude that did seem conspicuous by its absence was for-profit entrepreneurship - the section on the "entrepreneur" aptitude only referred to setting up longtermist organisations. After all, the Open Philanthropy Project, along with much of the rest of the effective altruist world, only exists because people became very wealthy and then gave their money away. I'm wondering if you think it is sufficiently easy to persuade (prospectively) wealthy people of effective altruism(/longtermism) that becoming wealthy isn't something community members should focus on; I have some sympathy with this view, but note you didn't state it here. 

Replies from: MichaelA
comment by MichaelA · 2021-06-06T07:52:43.424Z · EA(p) · GW(p)

Two small things on your final paragraph:

comment by AppliedDivinityStudies · 2021-06-04T19:54:53.122Z · EA(p) · GW(p)

Thanks for the writeup Holden, I agree that this is a useful alternative to the 80k approach.

On the conceptual research track, you note "a year of full-time independent effort should be enough to mostly reach these milestones". How do you think this career evolves as the researcher becomes more senior? For example, Scott Alexander seems to be doing about the same thing now as he was doing 8 years ago. Is the endgame for this track simply that you become better at doing a similar set of things?

comment by MichaelA · 2021-06-05T10:58:19.306Z · EA(p) · GW(p)

With this in mind, I generally think the person best-suited to found an organization is the person who feels such strong conviction that the organization ought to exist (and can succeed) that they can hardly imagine working on anything else. This is the kind of person who tends to have a really clear idea of what they're trying to do and how to make the tradeoffs gestured at above, and who is willing and able to put in a lot of work without much reliable guidance.

So my general approach to entrepreneurship would be: if there's no organization you have a burning desire to create (or at least, a strong vision for), it's probably not time to be an entrepreneur. Instead it could make more sense to try for a job in which you're learning more about parts of the world you're interested in, becoming more aware of how organizations work, etc. - this could later lead to identifying some "gap in the market" that you’re excited to fill.

This sounds probably right to me, and also aligns with advice I've heard elsewhere. On the other hand, it seems to me that Charity Entrepreneurship-incubated charities have had more total success, and more consistently gone at least fairly well, than I might've expected or than this general advice would seem to predict. 

So I currently feel fairly uncertain about this matter, and I'm fairly open to the hypothesis that that general advice just doesn't apply if there's a very well run incubation program (including good ideas for what to found, good vetting, good training, etc.) and a very strong pool of applicants to it, or something.

For roughly this reason, I'm also more optimistic about the Longtermist Entrepreneurship Fellowship than that general advice might suggest (which also seems in line with Claire Zabel's view, given the grant that was provided to that project).

All that said, I haven't looked super closely into any of this, so these are just tentative views. 

comment by Max_Daniel · 2021-06-05T17:35:34.333Z · EA(p) · GW(p)

This is great. I emphatically agree with almost all of it - and I expect I will send this post to many people going forward. 

It's very unclear if I have good intuitions about how to do career choice well, and so unclear if my agreeing should make anyone more than negligibly more willing to act on this advice - but at the very least I strongly suspect I could have avoided many career choice mistakes if I had read such a post in, say, 2016.

Replies from: Max_Daniel
comment by Max_Daniel · 2021-06-05T17:41:12.754Z · EA(p) · GW(p)

Some things that ring particularly true to me relative to what I perceive to be common attitudes among young people interested in EA:

  • Focus a lot on achieving success by conventional metrics.
  • When things are not working well by typical lights (e.g. when judged against things like in the "on track?" sections), quit and try something else. No matter whether you're on a path, or at an organization, that is typically considered to be "high-impact".
  • "Research vs. operations" is not a great question to ask [I'm aware you're not saying this directly in the post], and people are often better off replacing both "research" and "operations" with more fine-grained categories when thinking about their careers.
  • When making career decisions, put more weight on intuitions and gut feelings of excitement (in particular when based on actual experience, e.g., a representative work trial of the job you're considering) - and less on impact estimates.
  • Put less weight on advice when making concrete job decisions, especially advice from members of the effective altruism community who don't have much context on you and the options you're deciding between.
  • This: "I'd guess that anyone who is succeeding at what they do and developing aptitudes that few can match, while being truly prepared to switch jobs if the right opportunity comes up, has - in some sense - quite high expected longtermist impact (over the long run) via direct work alone. I think this expected impact will often be higher than the expected impact of someone who is in a seemingly top-priority longtermist career now, but isn't necessarily performing excellently, sustainably or flexibly."
     
Replies from: MichaelA
comment by MichaelA · 2021-06-06T07:26:53.479Z · EA(p) · GW(p)

When making career decisions, put more weight on intuitions and gut feelings of excitement (in particular when based on actual experience, e.g., a representative work trial of the job you're considering) - and less on impact estimates.

I think you probably mean in relation to types of work, activity, organisation, mindsets, aptitudes, etc., and not in relation to what cause areas or interventions you're focusing on, right? 

I.e., I think I'd often suggest people do focus mostly on impact estimates when choosing cause areas and maybe also interventions, but focus more on comparative advantage (using intuitions and gut feelings of excitement as some proxies for that) when choosing specific jobs, orgs, roles, paths, etc. Would you agree?

Replies from: Max_Daniel
comment by Max_Daniel · 2021-06-06T12:27:47.838Z · EA(p) · GW(p)

I think you probably mean in relation to types of work, activity, organisation, mindsets, aptitudes, etc., and not in relation to what cause areas or interventions you're focusing on, right? 

Basically yes. But I also think (and I understand Holden to say similar things in the OP) that "what cause area is most important" is perhaps less relevant for career choice, especially early-career, than some people (and 80k advice [ETA: though I think it's more like my vague impression of what people including me perceive 80k advice to say, which might be quite different from what current 80k advice literally says if you engage a lot with their content]) think.

comment by MichaelA · 2021-06-05T10:41:18.732Z · EA(p) · GW(p)

Thanks for this post! 

I think I disagree or at least feel hesitant about some specific things, but overall I think this seems like a really useful framework, it provides a bunch of good specific ideas and tips, and it's easy to work out concretely what you mean by each thing and how to apply these ideas (particularly due to the "Examples:" and "On track?" sub-sections). And I've already sent a link to the "Political and bureaucratic aptitudes" section to someone I spoke to recently who I think would find it useful.

The section "Some closing thoughts on advice" also made me think the following two links may be useful for some readers:

comment by yiyang · 2021-06-15T13:41:33.825Z · EA(p) · GW(p)

This might just be an extension of the "community building" aptitudes [EA · GW], but here's another potential aptitude.

"Education and training" aptitudes

Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.

Examples:

Introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programmes, alternative protein programmes, operations skills retreat, various workshops organised in EAGs/EAGxs, etc

How to try developing this aptitude:

I'll split these into three areas: (a) pedagogical knowledge, (b) content knowledge, and (c) operations.

(a) Pedagogical knowledge

This is the specific knowledge you learn and the skills you develop to teach effectively or help others learn more effectively. Examples: breaking down learning objectives into digestible chunks, how to design effective and engaging learning experiences, creating and presenting content, and (EDIT) how to measure whether your students are actually learning.

This could be applied to classroom/workshop settings, reading and discussion groups, career guides, online courses, etc

You can pick up knowledge and skills either 
- formally: teaching courses, meta-learning courses, teaching assistant 
- or informally: helping others learn

(b) Content knowledge

This is knowledge specific to the domain you want others to learn. If you're teaching the English alphabet, you need to know what it is (symbols you can rearrange to create meanings and associations with physical or abstract things), why it's relevant (so you share a common language with others to learn and communicate with), and how to apply it ("m"+"o"+"m" is mom!).

It's sometimes not necessary that you're an expert in the domain, but it helps a lot if you're above average at it.

(c) Operations
A big (but sometimes forgotten) part of organising classrooms, discussion groups, or workshops is that things need to run smoothly (or within expected parameters) to reduce friction in the learning experience. It also helps to understand the different trade-offs of running an education project (i.e. quality of learning vs. students' capacity vs. educator's capacity vs. financial cost).

You can pick up knowledge and skills either 
- formally: operations courses, project management courses, productivity books
- or informally: learning from "that friend who usually gets things done and is generally reliable"

On track?

It's hard to generalise since there are so many different models (e.g. classroom, online courses, discussion groups) of how to educate/train a person, and each model requires a different way of thinking. Here's my rough take on this: 

Level 1: you get positive feedback from others when you explain or teach a certain topic informally (e.g. with friends over dinner, in a homework group, or helping students as a teaching assistant during office hours).

Level 2: you get positive feedback when facilitating discussions.

Level 3: you get positive feedback when teaching a workshop.

Level 4 (you're likely on track here): you get positive feedback when teaching and running a course, online course, or lecture series with more than 50 participants.

comment by MichaelA · 2021-06-05T10:52:51.991Z · EA(p) · GW(p)

There isn't currently an "obvious" and arbitrarily scalable place for longtermists to donate, analogous to GiveWell's top charities. But if one doesn't have particular donations they're excited to make, I think it makes sense to simply save/invest - ideally following best investing practices for longtermist values (e.g., taking the optimal amount of risk for money intended to benefit others over long time horizons, and using charitable vehicles to reduce taxes on money that's intended for charitable purposes - I hope there will be writeups on this sort of thing available in the future). There are debates about whether this is better than giving today, but I think it is at least competitive.

I was a little surprised that you didn't mention the EA Long-Term Future Fund as one competitive option for such donors? I'm not saying that giving to the LTFF is definitely better than investing to give later - I'm currently pretty open to the latter strategy [EA · GW] - but it seems to me that giving to the LTFF is (like donor lotteries) one competitive option. (See also The Long-Term Future Fund has room for more funding, right now [EA · GW].)

Also, I do think there are some writeups on that sort of thing available, some of which can be found via the investing [? · GW] tag (and presumably there are other writeups available elsewhere too). But this an area I've read a lot on, and I do expect there'd be value in additional, better, or more thorough writeups.

(As usual, this comment expresses my personal opinions only.)

comment by MichaelA · 2021-06-05T10:43:20.947Z · EA(p) · GW(p)

The section "Aptitude-agnostic vision: general longtermism strengthening" reminded me of the post Illegible impact is still impact [EA · GW]. I liked that post, but/and also think think that the specific examples you give in your section might be better examples to point to than the examples given in that post. 

Here are some excerpts from that post:

In the case of impact, legible impact is that which can be measured easily in ways that a model predicts is correlated with outcomes. Examples of legible impact measures for altruistic efforts include counterfactual lives saved, QALYs, DALYs, and money donated; examples of legible impact measures for altruistic individuals include the preceding plus things like academic citations and degrees, jobs at EA organizations, and EA Forum karma.

Some impact is semi-legible, like social status among EAs, claims of research progress, and social media engagement. [...]

Illegible impact is, by comparison, invisible, like helping a friend who, without your help, might have been too depressed to get a better job and donate more money to effective charities or filling a seat in the room at an EA Global talk such that the speaker feels marginally more rewarded for having done the work they are talking about and is marginally incentivized to do more. Illegible impact is either hard or impossible to measure or there's no agreed upon model suggesting the action is correlated with impact. And the examples I gave are not maximally illegible because they had to be legible enough for me to explain them to you; the really invisible stuff is like dark matter—we can see signs of its existence (good stuff happens in the world) but we can't tell you much about what it is (no model of how the good stuff happened).

The alluring trap is thinking that illegible impact is not impact and that legible impact is the only thing that matters. If that doesn't resonate, I recommend checking out the links above on legibility to see when and how focusing on the legible to the exclusion of the illegible can lead to failure.

[...] To me the first step is acknowledging that illegible impact is still impact. For example, to me all of the following activities are positively impactful to EA such that if we didn't have enough of them going on then the EA movement would be less effective and less impactful and if we had more of them going on then EA would be more effective and more impactful, yet all of them produce impact of low legibility, especially for the person performing the action:

  • Reading the EA Forum, LessWrong, the Alignment Forum, EA Facebook, EA Reddit, EA Twitter, EA Tumblr, etc.
  • Voting on, liking, and sharing content on and from those places
  • Helping a depressed/anxious/etc. (EA) friend
  • Going to EA meetups, conferences, etc. to be a member in the audience
  • Talking to others about EA
  • Simply being counted as part of the EA community

The "general longtermism strengthening" section also reminded me of the EA Wiki entry scalably using labour [? · GW] and various posts with that tag.