How to PhD
post by eca
Some general things
Factoring a PhD
What factors should you prioritize?
A simple case
A more complicated case 
Planning your approach
Many thanks to Andrew Snyder-Beattie, Howie Lempel, Alex Norman and Noga Aharony for thoughtful feedback. Mistakes are mine.
Status: Some obviously right stuff. Some spicy takes. In places I'm trying to illustrate a pattern of thinking rather than an explicit recipe. Aimed at a particular audience, YMMV.
These are a few thoughts on how to approach graduate school effectively.
This is not a guide or anything of the sort. Just an attempt to write down a set of considerations I use when thinking about my own grad school, and what seems to be helpful from convos I’ve had with other EA PhD-seekers. I have not tried to make this generally applicable. So some background facts in case you are looking for something else:
- I am a grad student at MIT
- I work on catastrophic risks from biology
- My background is synthetic biology/ bioinformatics/ deep learning
- I have most personal experience with synthetic biology academia
- My favorite theory of change for addressing these risks goes substantially through EAs taking on a lot more object level work— founding organizations, engineering systems, making scientific progress— than I expect is the median view
- I still think policy-ish stuff is important; a substantial part of the reason I’m doing my PhD is to be credible to fancy people types
- I’m not inside-view excited about young longtermist EAs pursuing faculty positions, basically at all. Others whom I think are reasonable do argue for this, so I’ve tried to include a relevant example.
Some general things
Academic incentives are nefarious and horrible and will poison your brain. This happens to the best people. You can become a status monster unless you know this in your bones and remind yourself of it every day. Recognize it now, and inoculate yourself by knowing what you want before the poison seeps in. If you want to do anything that isn’t optimizing for academic prestige, like spending some of your PhD research time on publishing directly impactful papers or developing directly impactful technologies, or doing these things later in your career, you will need to have a strategy for managing the ways academic incentives push you to waste your brilliance.
This probably involves maintaining and strengthening your EA-adjacent network. This is also broadly important, IMO. If you are doing a PhD for EA career development, remember that 3-7 years is a long time and that you will not just be sacrificing direct impact in that time, but also relationships and EA-specific knowledge and context. Despite the intention to develop career capital, you could come out of a PhD stupider and less useful than you went in if you lose track of what is impactful and are 5 years behind everyone on the best mental models.
A PhD is also in many ways a prolongation of the perpetual childhood instantiated in western education systems. I notice that between people of the same age, one of whom just completed a PhD and the other of whom has been doing direct work for that duration, the PhD holder is a little “less grown up” on average (this is a comment about the average and not some claim about strict dominance! I love you, all my PhD-bearing friends and colleagues). Most PhDs do not teach you many of the life-lesson-y, adult-y things you actually need to be effective. E.g.: taking sole ownership and responsibility for solving a real problem rather than optimizing for fake metrics like impact factor while having your PI to fall back on; leading and managing others; communicating with people who are in a very different place than you; robustness in the face of a wide range of challenges instead of narrow specialization on a few; knowing when something is or isn’t worth your time and developing a palpable urgency; learning how effective organizations work; being held accountable for all aspects of your epistemics instead of only the domain-specific ones; etc.
IMO, the above things point to either radically minimizing time spent in a PhD or being exceedingly deliberate about which PhD incentives you conform to vs. actively and persistently push against (or perpendicular to). This means I recommend spending time thinking about what you want out of a degree and crafting your strategy accordingly, where the primary decision points are which schools you apply to, how you reach out to PIs, how you choose to rotate if applicable, how you pick a lab to work in, and how you choose your projects and collaborators. I don’t have time to talk about any of those really, because they are entirely context specific. But I’d encourage you to try to operationalize your high-level goals into tactics around these types of decisions.
In the next section I’ll try to break a PhD down into component “goods”. The hope is that you can think about which of these components mean more to you, in your situation, and which are less important.
Factoring a PhD
PhDs can be good for very different reasons. Know your reasons. A rough factorization:
- Skill: How much do you care about developing a specific technical skillset, or domain-specific knowledge, which is otherwise hard to learn?
- Process: How much do you want to “learn to do research” in a more general sense, in such a way that it is useful for valuable technical EA work? Don’t focus on this if you already feel comfortable with the full research cycle, from novel idea to completed project or publication if applicable, and can execute it autonomously. Do consider focusing on it if you don’t resonate with the above. Being able to make progress on hard research problems in a generalist-y kind of way is a very in-demand skill. There are certainly many domain-dependent components of this, but I believe there is a transferable component, which explains why some people are able to answer questions across different fields.
- Network (of technical peeps): How much do you care about developing a network of other researchers who you can rely on down the line? Relevant for hiring if you plan to found a company, for downstream technical work which depends on tacit knowledge shared throughout your group (as in synthetic biology). Not very relevant if you want to do policy but are doing a technical PhD; in that case it's usually more important to network with other policy people at conferences or fellowships (like ELBI).
- Credential: How much do you care about obtaining a piece of paper that says “Ph.D.” and your name? This also includes things like whether your school is name-brand, name-brand awards and fellowships, and other types of honors which are recognized as prestigious outside a narrow field. Relevant for credentialist industries like old-school pharma and for doing anything policy/ advocacy/ public facing.
- Publication: How much do you care about seeming technically impressive to other technical people? Relevant for careers in academia or non-credentialist industries like Software/ AI and for technical EA work.
- Urgency: How much do you care about having an impact sooner rather than later? Could look like picking a lab whose research is directly good and taking a while, or going as fast as you possibly can.
Think about how much each of these statements applies to you, and which you would entirely forgo for which of the others. Your PhD strategy will depend on which of these you prioritize and in what amounts. I think different contexts often have optimal strategies which are radically divergent, so it's worth thinking about this first.
One more thing: I’d encourage you to avoid being “greedy”. A type of person I occasionally meet is someone who is doing a PhD to preserve optionality *across everything* and therefore wants a PhD that gives them all of the above things. This is tempting, especially because PhDs are pitched as the optionality-preserving career move. But from what I can see, even if you don’t end up making a particular compromise, it is hugely useful to know what you would give up if push came to shove.
What factors should you prioritize?
I obviously can’t answer this in general, because it’s all context-dependent. I’ll try to illustrate with two fake examples in longtermist biosecurity (which I know more about). You should consider skipping this section if you already feel like you know what kind of factors you care about.
A simple case
Kevin is a longtermist who thinks he’s a good fit for biosecurity work. He helped lead his undergrad EA chapter and for the past year since graduating with a bioengineering degree has been working at a young EA org doing a mix of research and ops. After this experience, he believes he’s noticed a gap in the community around people who can both run organizations and have the technical know-how to make good strategic decisions. His first choice would be to land a job at an EA org, or if he finds a good opportunity, start an organization himself. His advisors believe he has the skills to be a good candidate for these, but warn him that these types of leadership positions might require interactions with policy makers and other fancy people who would not take him as seriously with only a bachelor’s degree. They also encourage him to find good backup options. Kevin thinks his mix of organizational/ people skills and bioengineering background would also make him a good fit for climbing the policy career ladder, and, putting these two ideas together, decides to do a PhD.
Kevin should straightforwardly focus on Urgency and Credentials. It really doesn’t matter to him exactly what skills he gets. None of the fancy people will look at his publications, and he should care more about building an EA and/or policy network rather than a network of technical people. He doesn’t care about being good at the research process. His opportunity cost is high enough that he shouldn’t do a degree unless it can be done quickly or is especially credential-heavy, e.g. at a name-brand school.
Kevin should prioritize something like 70% Urgency, 30% Credentials. (these numbers are made up)
A more complicated case
Anita is a longtermist working in biosecurity. Both she and her advisors believe her best shot at having an impact is to become an academic. Her hypothesis is that the community is undersupplied in people who can lead research programs on specific therapeutic countermeasures. She knows that the academic track is extremely competitive, but has had a remarkably successful independent undergraduate research track record in bioinformatics, including a first-author publication in Nature. She plans to use this comparative advantage to try her luck at a young faculty position. If this fails she plans to join an existing lab doing related work and encourage them to work directly on the technology she believes is important.
Both Anita and her previous mentors agree that she is quite comfortable in the research process; having led a project from conception to a Nature publication is ample evidence of this. However, all her previous research was computational, whereas the countermeasure tech is going to require substantial wet lab work.
What factors should Anita care about? Let’s start by ruling some things out. Given Anita’s background she probably doesn’t need to worry much about learning the research Process. She obviously will need to graduate, but cares a lot more about the way technical people would evaluate her work than the types of Credentials salient to e.g. policy folks. With all the other things she cares about doing, she probably can’t also do her degree in 3 years, and should instead compromise on speed even if she feels the Urgency.
Anita needs to think more carefully to decide what her top priorities are. Without Publications, she is SOL on the academia front. However, she also thinks her backup plan is quite good EV, and is concerned about getting caught in some academic niche which isn’t related to the countermeasure tech. She might end up being obligated by academic incentives to continue publishing in some less useful field in order to remain competitive with other would-be faculty. If push came to shove, she would give up an academic career for her second-best option if the alternative was working on a useless technology.
On top of this, Anita’s bioinformatic work was only distantly connected to the countermeasure tech, and she has heard that the type of methods required to do the most cutting-edge projects involve a lot of tacit knowledge. She either needs to learn to do this work herself, or develop strong and ongoing connections with wet lab collaborators. This leads her to conclude that her top priority should be working in a lab which has the specialist Skills and domain expertise she needs to learn. If she ends up being unsuccessful in the wet lab, she plans to double down on bioinformatics and focus on fostering the best Network of collaborators. Only then will she optimize for Publications, taking the bet that her existing publication track record and confidence leading projects from start to high-impact publication can carry her through. It helps in this case that bioinformatics moves a lot faster than wet-lab work, so Anita believes she can push out enough papers to make the academic cut even if she doesn’t develop the “experimental touch”.
Anita should shoot for something like 70% Skill + Network, 30% Publications. (these numbers are made up).
Planning your approach
The hope is that once you know what you care about getting out of your degree, you can make better decisions + plans. If this is indeed possible, it’s obviously also context dependent.
So again, here are a couple very rough sketches to give a sense of how I think about different strategies. They are all made up but are pointing at the kind of thing I could imagine coming up with; some are closer to real strategies/ patterns that seem to work than others. Here I say thing1 + thing2 to mean prioritizing these and being willing to sacrifice all the others:
- Skill + Process: Focus on finding a mentor and group of peers working on the niche thing whose skillset you care about learning, with a record of doing things that are very solid rather than very flashy. Premium on the ability to do a rotation, or equivalent opportunities to interact with multiple labs before locking in. Take as much time as you need in the program. Don’t pick a fancier-sounding university over a better lab focus, mentor, and peers. You can get info on a mentor and lab vibe by reaching out to lab members through your network or cold-emailing current and former lab members, or even members of other labs at the same institute. In conversations with prospective mentors, ask lots of questions about previous mentoring relationships they have had. Good sign: previous junior mentees have initiated their own projects and gotten first authorship. Bad sign: no mentee-driven projects, or younger mentees are never in first-author positions.
- Credential + Urgency, fast version: Choose the program at the intersection of “shortest number of years required”, “least time investment needed to satisfy class and publishing requirements”, and “has a brand name”. Go hard satisfying your publication requirement as fast as possible; put almost no effort into classes if applicable (whatever is required to graduate, with no concern for grades or the impression you give to others). Once you finish your pubs, or while you are working on them if they are wall-clock constrained, spend almost all of your time volunteering for EA projects that seem highest impact.
- Credential + Urgency, slow version: Select a program which might last a long time but has an advisor who is willing to let you entirely do the thing you (almost) would have wanted to do anyway. For example, if you think it would be good for there to be more papers outlining the case against some dual use technology, find an advisor who wants these to exist as well and will make room for you to focus on them. Academia brings many benefits for writing papers like this, most especially credibility. Stay for as many years as it feels like you are still basically doing the best thing directly, then graduate. BE VERY CAREFUL NOT TO GET SUCKED INTO HORRIBLE PUBLISHING INCENTIVES.
- Urgency + Process: Pick a program based on the quality of your research mentor and whether it shares *structural* features with the domain you would like to do work directly in. For example, how paradigmatic vs. confusing and new? How much can you make progress by thinking vs. reading what other people have said in textbooks vs. digging through the most recent publications to find secret nuggets vs. sitting at a bench and pipetting? Typically helpful to seek out EA research projects you can do on the side which are closer to the eventual kind of thing you care about, in order to make an urgent impact and confirm you are learning a research process that works on the real problems. Typically makes sense to shortcut everything besides spending as much time as possible thinking real hard about things as close *in structure* to your eventual goals. Program length does not matter if you keep a keen eye out for compelling opportunities and pre-commit to dropping out if they come by.
- Publication + Urgency: Consider RAing instead of grad school, if you have the opportunity to do so with freedom to operate rather than grunt work. Optimize almost exclusively for compelling publications; for some specific goals these will need to be high-impact publications. Do weak filtering of project ideas to minimize acceleration/ dual use potential but otherwise select only on publishability. Only prioritize Network or Process instrumentally, i.e. if you need to know lots of experts with tacit knowledge or need mentorship to learn good research process (if the latter, maybe you should be focusing on that tho!). If you are constrained by not having ideas for what would make a good domain publication, ask other EAs who have published good papers if they are sitting on any publication-worthy ideas that they don’t plan on getting around to. I think you’ll find that some people have more ideas than time and would be happy to share them with you and help you spot how to present the ideas in a way which is compelling for journals.
- Network + Skill: Apply to the lab which has alums that do the coolest stuff, even if the PI is notoriously bad or absentee. Focus on forming tight friendships and working relationships with your lab mates; collaborate extensively with whoever seems coolest. Side projects with cool people are worth it. Probably worth trying to introduce some labmates to EA if the circumstances are right. You get skill by working with the best people rather than self teaching.
comment by AdamGleave ·
2021-03-30
Thanks for writing this post, it's always useful to hear people's experiences! For others considering a PhD, I just wanted to chime in and say that my experience in a PhD program has been quite different (4th year PhD in ML at UC Berkeley). I don't know how much this is the field, program or just my personality. But I'd encourage everyone to seek a range of perspectives: PhDs are far from uniform.
I hear the point about academic incentives being bad a lot, but I don't really resonate with it. A summary of my view is that incentives are misaligned everywhere, not just academia. Rather than seeking a place with good (in general) incentives, first figure out what you want to do, and then find a place where the incentives happen to be compatible with that (even if for the "wrong" reasons).
I've worked in quant finance, industry AI labs, and academic AI research. There were serious problems with incentives in all three. I found this particularly unforgivable in quantitative finance, where the goal is pretty clear: make money. You can even measure day to day whether you're making money! But getting the details right is hard. At one place I'm aware of, people were paid based on their group's profitability, divided by how risky their strategies were. This seems reasonable: profit good, risk bad. The problem was, it measured the risk of your strategy in isolation -- not how it affected the whole firm's risk levels. So different groups colluded to swap strategies, which made each of them seem less risky in isolation (so they could get paid more), without changing the firm's overall strategy at all!
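The collusion above is just diversification arithmetic. Here's a toy sketch (hypothetical numbers, not from any actual firm) of two groups swapping half their books: each group's standalone risk drops, so each looks better under the per-group metric, while the firm's aggregate position is exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
# Daily P&L of two groups' strategies, imperfectly correlated
# (independent here, for simplicity).
x = rng.normal(1.0, 10.0, 10_000)  # group A's strategy
y = rng.normal(1.0, 10.0, 10_000)  # group B's strategy

# Before the swap: each group holds its own strategy outright.
risk_a_before = x.std()
risk_b_before = y.std()

# After the swap: each group holds half of each strategy.
a_after = 0.5 * x + 0.5 * y
b_after = 0.5 * y + 0.5 * x

# Each group's risk, measured in isolation, falls (diversification)...
assert a_after.std() < risk_a_before
assert b_after.std() < risk_b_before

# ...so both groups' pay metric improves. But the firm's total book
# is identical before and after the swap.
assert np.allclose(x + y, a_after + b_after)
```

The per-group metric rewards the swap even though, at the firm level, nothing real happened -- the risk didn't move anywhere, it just stopped being visible to the measurement.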
Incentivizing research is an unusually hard problem. Agendas can take years to pay off. The best agendas are often really high variance, so someone might fail several times but still be doing great (in expectation) work. Given this backdrop, a PhD actually seems pretty reasonable.
It's pretty hard to get fired doing a PhD, and some (by no means all) advisors will let you work on pretty much whatever you want. So, you have a 3-5 year runway to just work on whatever topics you think are best. At the end of those 3-5 years, you have to convince a panel of experts (who you get to hand-pick!) that you did something that's "worth" a PhD.
As far as things go, this is incredibly flexible, as evidenced by the large number of people who goof off during their PhD. (This is the pitfall of weak incentives.) It also seems like a pretty reasonable incentive. If after 5 years of work you can't convince people that what you did was good, it might be that it's incredibly ahead of its time, but more likely you either need to communicate it better or the work just wasn't that great by the standards of the field.
The "by the standards of the field" is the key issue here. Some high impact work just doesn't fit well into the taste of a particular field. Perhaps it falls between disciplinary boundaries. Or it's more about distilling existing research, so isn't novel enough. That sucks, and academic research is probably the wrong venue to be pursuing this in -- but it doesn't make academic incentives bad per se. Just bad for that kind of research.
I think the bigger issue is the tacit social pressure to publish and make a name for yourself. This matters a fair bit for the job market, so it's a real pressure. But I think analogous or equal pressures exist outside of academia. If you work at an industry lab, there might be pressure to deliver flashy results or products. If you work as an independent researcher, funders will want to see publications or other signs of progress.
I'd love to see better incentives, but I think it's important to acknowledge that mechanism design for research is a hard problem, not just that academia is screwing it up uniquely badly.
↑ comment by eca ·
2021-03-31
This is an excellent comment, thanks Adam.
A couple impressions:
- Totally agree there are bad incentives lots of places
- I think figuring out which existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for grad school. If I were writing a comparison between working in academia and other possible ways to do research, I would definitely have flagged the many ways academic incentives are better than the alternatives! I appreciate you doing that, because it's clearly true and important.
- In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier? Like, knowing you work at a for-profit company makes it really transparently clear that your manager's (or manager's manager's) incentives are different from yours, if you want to do directly impactful research. Whereas I've observed folks, in my academic niche of biological engineering, behave as if they believe a research project to be directly good when I (and others) can't see the impact proposition, and the behavior feels best explained by publishing incentives? In more extreme cases, people will say that project A is less important to prioritize than project B because B is more impactful, but will invest way more in A (which just happens to be very publishable). I'm sure I'm also very guilty of this, but it's easier to recognize in other people :P
- I'm primarily reporting on biology/ bioengineering/ bioinformatics academia here, though I consume a lot of deep learning academia's output. FWIW, my sense is there is actually a difference in the strength and type of incentives between ML and biology, at least. From talking with friends in DL academic labs, it seems like there is still a pressure to publish in conferences but there are also lots of other ways to get prestige currency, like putting out a well-read arxiv paper or being a primary contributor to an open source library like pytorch. In biology, from what I've seen, it just really really really matters that you publish in a high impact factor journal, ideally with "Science" or "Nature" on the cover.
- It also matters a whole lot who your advisor is, as you mention. Having an advisor who is super bought in to the impact proposition of your research is a totally different game. I have the sense that most people are not this lucky by default, and so would want to optimize for the type of buy-in or, alternatively, laissez-faire management which I pattern match to the type of research freedom you're describing.
All of this said, I think my biggest reaction is something like "there are ways of finding really good incentives for doing research"! Instead of working in existing institutions-- academic, for-profit research labs, for-profit company-- come up with a good idea for what to research and how, and just do it. More precisely: ask an altruistic funder for money, find other people to work with, make an organization if it seems good. There are small and large versions of this. On the small scale you can apply for EA grants or another org which grants to individuals, and if you're really on to something you ask for org-scale funding. I'm not claiming that this is always a better idea: you will be missing lots of resources you might otherwise have in e.g. academia.
But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned including academia look extremely misaligned. And IMO its worth making it clear that relative to this, almost any lab/ institute's academic incentives suck. Once this DIY option is on the table I think it is possible to make better choices about whether you like the compromise of working at another institution or whether you will use this option to get specific resources that will make the "forge your own way" option more tractable. E.g.: don't have any good ideas for a research agenda? Great, focus on figuring this out in your PhD. Don't know any good people you might recruit for your project? Great, focus on building a good network in your PhD. Etc etc
I'm curious if you still feel like incentives are misaligned in this world, or whether it feels too impractical to be included in your list, or disagree with me elsewhere?
Thanks again :)
↑ comment by AdamGleave ·
2021-05-30
Sorry for the (very) delayed reply here. I'll start with the most important point first.
But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned including academia look extremely misaligned.
I think overall the incentives set up by EA funders are somewhat better than run-of-the-mill academic incentives, but I think the difference is smaller than you seem to believe, and I think we're a long way from cracking it. I think this is something we can get better at, but it's something that I expect will take significant infrastructure and iteration: e.g. new methods for peer review, experimenting with different granter-grantee relationships, etc.
Concretely, I think EA funders are really good (way better than most of academia or mainstream funders) at picking important problems like AI safety or biosecurity. I also think they're better at reasoning about possible theories of change (if this project succeeds, would it actually help?) and considering a variety of paths to impact (e.g. maybe a blog post can have more impact than a paper in this case, or maybe we'd even prefer to distribute some results privately).
However, I think most EA funders are actually worse at evaluating whether the research agenda is being executed well than the traditional academic structure. I help the LTFF evaluate grants, many of which are for independent research, and while I try to understand people's research agenda and how successful they've been, I think it's fair to say I spend at least an order of magnitude less time on this per applicant than someone's academic advisor.
Even worse, I have basically zero visibility into the process -- I only see the final write-up, and maybe have an interview with the person. If I see a negative result, it's really hard for me to tell if the person executed on the agenda well but the idea just didn't pan out, or if they bungled the process. Whereas I find it quite easy to form an opinion on projects I advise, as I can see the project evolve over time, and how the person responds to setbacks. Of course, we can (and do) ask for references, but if they're executing independently they may not have any, and there's always some CoI on advisors providing a reference.
Of course, when it comes to evaluating larger research orgs, funders can do a deeper dive and the stochasticity of research matters less (as it's averaged over a longer period of time). But this is just punting the problem to those who are running the org. In general I still think evaluating research output is a really hard problem.
I do think one huge benefit EA has is that people are mostly trying to "play fair", whereas in academia there is sadly more adversarial behavior (on the light side, people structuring their papers to dodge reviewer criticism; on the dark side, actual collusion in peer review or academic fraud). However, this isn't scalable, and I wouldn't want to build systems that rely on it.
In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier?
This is a fair point. I do think people kid themselves a bit about how much "academic freedom" they really have, and this can lead to people in effect internalizing the incentives more.
I've observed folks [...] behave as if they believe a research project to be directly good when I (and others) can't see the impact proposition, and the behavior feels best explained by publishing incentives.
Believing something is "directly good" when others disagree seems like a classic case of wishful thinking. There are lots of reasons why someone might be motivated to work on a project (despite it not, in fact, being "directly good"). Publication incentives are certainly a big one, and might well be the best explanation for the cases you saw. But in general I think it could also be that they just find that topic intellectually interesting, have been working on it for a while and are suffering from sunk cost fallacy, etc.
comment by basil.halperin (bhalperin) ·
2021-03-30
I like this writeup a lot, but I would say that anyone who's actually reading this should ignore the advice to not go into academia.
If you're reading this, you're probably selected (!) to be someone who is atypical and has a decent shot at succeeding in academia. (See also: SSC on 'reversing all advice you hear'.) I.e.: if you're someone who's taking the time out of your day to read this, you're probably (probably!) similar to "Anita" here.
↑ comment by eca ·
2021-03-30
Ugh. Shrug. That isn't supposed to be the point of this post. All my comments on this are to alert the reader that I happen to believe this and haven't tried to stop it from seeping into my writing. It felt disingenuous not to.
But since you raised, I feel like making it clear, if it isn't already, that I do not recommend reversing this advice. At least if you are considering cause areas/ academic domains that I might know about (see my preamble). I have no idea how applicable this is outside of longtermist technical-leaning work.
If you think you might be an exception to this, feel free to DM me. Exceptions do exist, I just highly doubt you (the reader) are one. THIS DOES NOT MEAN I AM NOT EXCITED ABOUT YOUR IMPACT!! I think there are much better opportunities than becoming a professor out there :)
As I said, a lot of smart people disagree with me on this, but here is some of my thinking:
- Most people overestimate their chances for the obvious reasons
- I've advised at least 10 smart, excellent EAs interested in pursuing PhDs and none of them are in "Anita's" reference class. A first-author Nature paper in undergrad is extremely rare. The only exceptions here are people who are already in early-track faculty positions at good schools, and even then I worry about the counterfactual value. (These are not the people reading this, I imagine.)
- Having a "good story" for becoming faculty is in large part luck. I've been interacting with grad students and postdocs from top labs at Harvard and MIT since maybe 2015, and for every faculty position someone gets, there are maybe 5 people who are equally or more talented whose research was equally or more compelling in principle; the difference is whether certain parts of their high-risk research panned out in a certain compelling way and whether they were good at "selling it".
- You approximately can't get directly useful things done until you have tenure. I think this should be obvious, but some people seem to believe a fairy tale where they are both winning the rat race and doing lots of direct good.
- Given the above, academia is a 10-15 year crapshoot (PhD, one or multiple postdocs, 5-ish years as junior faculty).
- It's not clear to me what you get even after all of this. I think it's hard to argue that academia is clearly better than working in a private research org if you want to do direct technology development. This leaves some kind of pulpit/spokesperson effect. Is this really worth it? Most people who could actually get a tenured faculty position could also write 3 excellent books in the time it takes to do a PhD and postdoc. Are we sure this alternative, as one example among many possible, isn't a faster way of establishing spokesperson credibility?
- Unless you have worked in top labs with EA-minded people, I don't think it is possible to really understand how bad academic incentives are. You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse. People who are much better than you will also do this. This just gets worse with time, and needs to be accounted for as a reduction in expected impact when considering an opportunity that only pays off 12 years after steeping in the corrupting juices.
- Obviously, academia looks a whole lot worse if you believe lots of things need to happen right now, as opposed to 15 years from now. For my part, I would happily trade work hours 15 years from now for more time now, at a roughly 2:1 premium.
- Another risk you are taking, related to the above, is that the field of research you picked may have no relevance 15 years from now. Obviously you can change as you go, but switching your "story" around carries a big penalty in the academic job market, from what I've heard.
- If we think we need more professors as a movement, it could be the case that it's way more efficient to just reach out to people who already have faculty positions (or are just one step away, in a highly enriched pool). For example, I know of instances where students have influenced their PIs on research directions and goals, in a direction more aligned with longtermist objectives. It might be that targeted outreach and coalition building among academics is just way higher bang for buck. It's also not clear that we need the most aligned people in faculty positions, rather than people who are allies. Have we ruled this out? Seems like any person considering mortgaging 15 years of their impact might want to spend 1 year testing this hypothesis first.
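A back-of-envelope aside on the 2:1 premium mentioned above (my own arithmetic, not from the original comment): valuing an hour of work now at twice an hour of work 15 years out corresponds to a constant annual discount rate on labor of roughly 4.7%.

```python
# Back-of-envelope: if one hour of work now is worth two hours of work
# 15 years from now, what constant annual discount rate does that imply?
years = 15
premium = 2.0  # value(now) / value(15 years out)

# Solve (1 + r)^years = premium for r.
annual_rate = premium ** (1 / years) - 1
print(f"implied annual discount rate: {annual_rate:.1%}")  # ~4.7%
```

Whether that rate is reasonable depends entirely on how urgent you think the underlying problems are, which is the actual crux.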
Putting these random points together, it just feels like a really uphill battle to make academia look good from an impact perspective. I think you need to believe some combination of 1) problems are not urgent 2) academic incentives are actually good (?)/ there is some other side benefit of working toward a faculty position that is really worth having 3) there aren't many other opportunities for people who could be faculty in a technical domain or 4) we are specifically constrained on something professors have, maybe credible spokespeople, AND there are no more efficient ways to get those resources.
OR you might believe that academia is exciting from a personal fit perspective. I think a lot of people are very motivated by the types of status incentives in academia, which is good I guess if you have trouble finding motivation elsewhere. I'd just want to separate this from the impact story.
My spicy take is that advice to go into academia has arisen through some combination of A) EA being a movement grown out of academia in many ways, B) a lack of better career ideas, C) too much distance from the urgency and concreteness of problems on the ground and D) the same mind destroying publishing and status incentives I have mentioned a number of times here, which lead to a certain kind of self-justification.
So where all this cashes out for me is finding it plausible that it is worth preserving some optionality for academia, but being very strategic (as I tried to demonstrate in this post). This includes knowing what you actually are optimizing for, and being willing to give up academic optionality if push comes to shove and there is something better. This is why I wrote the Anita case study this way.
I'm very happy to be shown where I'm wrong.
Replies from: Sebastian_Oehm, AdamGleave, antimonyanthony
↑ comment by Sebastian_Oehm ·
2021-03-31T08:58:05.897Z · EA(p) · GW(p)
I'm not convinced that academia is generally a bad place to do useful technical work. In the simplest case, you have the choice between working in academia, industry or a non-profit research org. All three have specific incentives and constraints (academia - fit to mainstream academic research taste; industry - commercial viability; non-profit research - funder fit, funding stability and hiring). Among these, academia seems uniquely well-suited to work on big problems with a long (10-20 year) time horizon, while having access to extensive expertise and collaborators (from colleagues in related fields), EA and non-EA funding, and EA and non-EA hires.
For my field of interest (longtermist biorisk), it appears that many of the key past innovations that help e.g. with COVID now come from academic research (e.g. next-generation sequencing, nanopore sequencing, PCR and rapid tests, mRNA vaccines and other platform vaccine tech). My personal tentative guess is that our split should be something like 4 : 4 : 1 between academia, industry and non-profit research (academia to drive long-term fundamental advances, industry/entrepreneurship to translate past basic science advances into defensive products, and non-profit research to do work that can't be done elsewhere).
Crux 1 is indeed the time horizon - if you think the problem you want to work on will be solved in 20 years/it will be too late, then dropping 'long-term fundamental advances' in the portfolio would seem reasonable.
Crux 2 is how much academia constrains the type of work you can do (the 'bad academic incentives'). I resonate with Adam's comment here. I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science).
Replies from: eca
↑ comment by eca ·
2021-03-31T20:51:08.527Z · EA(p) · GW(p)
Thanks Seb. I don't think I have energy to fully respond here, possibly I'll make a separate post to give this argument its full due.
One quick point relevant to Crux 2:
"I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science)."
I think there are many-fold differences in impact/dollar between the tech you build if you are trying to actually solve the problem and the type of probably-good-on-net examples you give here.
Other, parallel ways of making this point:
- Things which are publishable in Nature or Science are by definition less neglected, because you are competing against everyone who wants a C/N/S publication
- The design space of possible interventions is a superset of, and many times larger than, the design space of interventions which can also be published in high-impact journals
- We find power-laws in cost effectiveness lots of other places, and AFAIK have no counter-evidence here. Given this, even a small orthogonal component between what is incentivized by academia and what is actually good will lead to a large difference in expected impact.
↑ comment by AdamGleave ·
2021-03-30T20:33:52.128Z · EA(p) · GW(p)
You approximately can't get directly useful things done until you have tenure.
At least in CS, the vast majority of professors at top universities in tenure-track positions do get tenure. The hardest part is getting in. Of course all the junior professors I know work extremely hard, but I wouldn't characterize it as a publication rat race. This may not be true in other fields and outside the top universities.
The primary impediment to getting things done that I see is that professors are also doing administration and teaching, and that remains a problem post-tenure.
Replies from: eca, rhaps0dy
↑ comment by eca ·
2021-03-31T20:30:41.128Z · EA(p) · GW(p)
This is interesting and also aligns with my experience depending on exactly what you mean!
- If you mean that it seems less difficult to get tenure in CS (thinking especially about deep learning) than the vibe I gave (which is, again, speaking about the field I know, bioeng), I buy this strongly. My suspicion is that, relative to bioengineering, there is a lot of competition for top research talent from industrial AI labs. It seems like even the profs who stay in academia mostly also have joint appointments at companies. There isn't an analogous thing in bio? Pharma doesn't seem very exciting and, to my knowledge, doesn't have many PI-driven basic research roles open. Maybe bigtech-does-bio labs like Calico will change this in the future? IMO this doesn't change my core point, because you will still need to change your agenda some, just less than in biology.
- If you mean that once you are on the junior faculty track in CS you don't really need to worry about well-received publications, this is interesting and doesn't line up with my models. Can you think of any examples which might help illustrate this? I'd be looking for, e.g., recently appointed CS faculty at a good school pursuing a research agenda which gets quite poor reception/crickets, but who are still given tenure. Possibly there are some examples in AI safety before it was cool? The folks that come to mind mostly had established careers. Another signal would be less of the notorious "tenure switch", where people suddenly change their research direction. I have not verified this, but there is a story told about a Harvard econ professor who did a bunch of centrist/slightly conservative mathematical econ and switched to left-leaning labor economics after tenure.
Replies from: AdamGleave
↑ comment by AdamGleave ·
2021-05-30T13:36:42.453Z · EA(p) · GW(p)
If you mean that once you are on the Junior Faculty track in CS, you don't really need to worry about well-received publications, this is interesting and doesn't line up with my models. Can you think of any examples which might help illustrate this?
To clarify, I don't think tenure is guaranteed, more that there's a significant margin of error. I can't find much good data on this, but this post surveys statistics gathered from a variety of different universities, and finds anywhere from 65% of candidates getting tenure (Harvard) to 90% (Cal State, UBC). Informally, my impression is that top schools in CS are at the higher end of this: I'd have guessed 80%. Given this, the median person in the role could divert some of their research agenda to less well-received topics and still get tenure. But I don't think they could work on something that no one in the department or elsewhere cared about.
I've not noticed much tenure switch in CS but have never actually studied this, would love to see hard data here. I do think there's a significant difference in research agendas between junior and senior professors, but it's more a question of what was in vogue when they were in grad school and shaped their research agenda, than tenured vs non-tenured per se. I do think pre-tenure professors tend to put their students under more publication pressure though.
↑ comment by Adrià Garriga Alonso (rhaps0dy) ·
2021-03-31T12:54:36.614Z · EA(p) · GW(p)
I don't see how this is a counterargument. Do you mean to say that, once you are on track to tenure, you can already start doing the high-impact research?
It seems to me that, if this research diverges too far from academic incentives, then our hypothetical subject may become one of those rare cases of CS tenure-track faculty who do not get tenure.
↑ comment by antimonyanthony ·
2021-03-30T23:47:25.424Z · EA(p) · GW(p)
You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse.
Could you be a bit more specific about this point? This sounds very field-dependent.
Replies from: eca
↑ comment by eca ·
2021-03-31T20:41:40.488Z · EA(p) · GW(p)
I bet it is! The example categories I think I had in mind at the time of writing were 1) people in ML academia who want to be doing safety instead doing work that almost entirely accelerates capabilities, and 2) people who want to work on reducing biological risk instead publishing on tech which is highly dual-use or broadly accelerates biotechnology without differentially accelerating safety technology.
I know this happens because I've done it. My most successful publication to date (https://www.nature.com/articles/s41592-019-0598-1) is pretty much entirely capabilities accelerating. I'm still not sure if it was the right call to do this project, but if it is, it will have been a narrow edge revolving on me using the cred I got from this to do something really good later on.
comment by AdamGleave ·
2021-03-30T20:26:21.909Z · EA(p) · GW(p)
One important factor of a PhD that I don't see explicitly called out in this post is what I'd describe as "research taste": how to pick what problems to work on. I think this is one of, if not the, most important parts of a PhD. You can only get so much faster at executing routine tasks or editing papers. But the difference between the most important and median-importance research problems can be huge.
Andrej Karpathy has a nice discussion of this:
When it comes to choosing problems you’ll hear academics talk about a mystical sense of “taste”.
It’s a real thing. When you pitch a potential problem to your adviser you’ll either see their face contort, their eyes rolling, and their attention drift, or you’ll sense the excitement in their eyes as they contemplate the uncharted territory ripe for exploration. In that split second a lot happens: an evaluation of the problem’s importance, difficulty, its sexiness, its historical context (and possibly also its fit to their active grants). In other words, your adviser is likely to be a master of the outer loop and will have a highly developed sense of taste for problems. During your PhD you’ll get to acquire this sense yourself.
Clearly we might care about some of these criteria (like grants) less than others, but I think the same idea holds. I'd also recommend Chris Olah's exercises on developing research taste.
Replies from: rhaps0dy, eca
↑ comment by Adrià Garriga Alonso (rhaps0dy) ·
2021-03-31T15:40:53.312Z · EA(p) · GW(p)
You can get research taste by doing research at all; it doesn't have to be a PhD. You may argue that PIs have very good research taste that you can learn from. But their taste is geared towards satisfying academic incentives! It might not be good taste for what you care about. As Chris Olah points out, "Your taste is likely very influenced by your research cluster".
Replies from: eca
↑ comment by eca ·
2021-03-31T20:54:16.552Z · EA(p) · GW(p)
Strong +1 to this. I think I have observed people who have really good academic research taste but really bad EA research taste.
↑ comment by eca ·
2021-03-31T20:53:29.910Z · EA(p) · GW(p)
Taste is huge! I was trying to roll this under my "Process" category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc etc. Alas, not a lossless factorization
These exercises look quite neat, thanks for sharing!
comment by AllAmericanBreakfast ·
2021-03-28T23:26:26.687Z · EA(p) · GW(p)
Just to clarify, it sounds like you are:
- Encouraging PhD students to be more strategic about how they pursue it
- Discouraging longtermist EA PhD-holders from going on to pursue a faculty position in a university, thus implying that they should pursue some other sector (perhaps industry, government, or nonprofits)
I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master's), and how long have you been in it? Were you as strategic in your approach to your current program as you're recommending to others? What are some specific actions you took that you think others neglect? Why do you think that other sectors outside academia offer a superior incentive structure for longtermist EAs?
Replies from: eca
↑ comment by eca ·
2021-03-30T15:18:44.780Z · EA(p) · GW(p)
I am doing (1). (2) is incidental from the perspective of this post, but is indeed something I believe (see my response to bhalperin). I think my attempt to properly flag my background beliefs may have led to the wrong impression here. Or, alternatively, my post doesn't cover very much on pursuing academia when the expected post would have been almost entirely focused on it, thereby seeming to convey a strong message?
In general I don't think about pursuing "sectors" but instead about trying to solve problems. Sometimes this involves trying to get a particular government gig to influence a policy, or needing to write a paper with a particular type of credibility that you might get from an academic affiliation or a research non-profit, or needing to build and deploy a technical system in the world, which maybe requires starting an organization.
I'd encourage folks to work backwards from problems, to possible solutions, to what would need to happen on an object level to realize those solutions, to what you do with your PhD and other career moves. "Academia" isn't the most useful unit of analysis in this project, which is partly why I wasn't primarily trying to comment on it.
Regarding specific observations and personal experiences: I agree this post could be better with more things like this. Unfortunately, I don't feel like including them. Open invite to DM me if you are thinking about a PhD or already in one and want to talk more, including about my strategy.
Replies from: AllAmericanBreakfast
↑ comment by AllAmericanBreakfast ·
2021-03-30T17:23:51.914Z · EA(p) · GW(p)
That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.
One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. While planning my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a way to refine those ideas and gain the experience/network/credentials to stay in the game.
The "work backwards" approach is equally applicable to resource-gathering as finding concrete solutions to specific world problems.
I think it's important for career builders to develop gears-level models of how a PhD or tenured academic career gives them resources + freedom to work on the world problems they care about; and also how it compares to other options.
Often, people really don't seem to do that. They go by association: scientists solve important problems, and most of them seem to have PhDs and academic careers, so I guess I should do that too.
But it may be very difficult to put the resources you get from these positions to use in order to solve important problems, without a gears-level model of how those scientists use those resources to do so.
Replies from: eca
↑ comment by eca ·
2021-03-31T20:17:33.219Z · EA(p) · GW(p)
"Working backwards" type thinking is indeed a skill! I find it plausible a PhD is a good place to do this. I also think there might be other good ways to practice it, like for example seeking out the people who seem to be best at this and trying to work with them.
+1 on this same type of thinking being applicable to gathering resources. I don't see any structural differences between these domains.
comment by Adrià Garriga Alonso (rhaps0dy) ·
2021-03-30T10:10:12.772Z · EA(p) · GW(p)
Thank you for the write-up. I wish I had this advice, and (more crucially) kept reminding myself of it, during my PhD. As you say, academic incentives did poison my brain, and I forgot about my original reasons for entering the programme. I only realised one month ago that it had been happening slowly; my brain is likely still poisoned, but I'm working on it.
I'm curious about your theory of change, if you have time to briefly write about it. You wrote that
addressing these risks goes substantially through EAs taking on a lot more object level work— founding organizations, engineering systems, making scientific progress— than I expect is the median view
and that you don't think gunning for a faculty position is a good thing. What kind of job is the right one to "make scientific progress", then? I thought that the best way to do that is to run a lab, managing a bunch of smart PhD students and postdocs, and steering them towards useful research directions.
My impression is that PIs manage the same or more people than the equivalent seniority position in industry, at least in machine learning; but that they have freedom to set research priorities, instead of having to follow a boss. (On the flipside, they have to pander to grant givers, but that seems to give more freedom in research direction).
In summary, what do you think is the kind of job where you can make the most scientific progress?
Replies from: eca
↑ comment by eca ·
2021-03-30T16:00:47.092Z · EA(p) · GW(p)
Appreciate your comment! I probably won't be able to give my whole theory of change in a comment :P but if I were to say a silly version of it, it might look like:
"Just do the thing"
So, what are the constituent parts of making scientific progress? Off the cuff, maybe something like:
1. You need to know what questions are worth asking / which problems are worth solving
2. You need to know how to decompose these questions into sub-questions, iteratively, until a subset are answerable from the state of current knowledge
3. You need good research project management skills, to figure out in what order to tackle these sub-questions and make progress toward the goal as quickly as possible, which is where all the impact is
4. You need people with smart ideas to guess the answers to sub-questions and generate hypotheses
5. You need people to do or build things, like run experiments, code, or fab physical objects
6. You need operations and logistics to turn money into materials and people, and to coordinate the materials and people
7. You need managers to foster productive environments and maintain healthy relationships
8. You need advisors to hold you accountable to the actual goal
9. You often need feedback loops with the actual goal, in case you've decomposed the problem incorrectly or something else in the system has gone awry
10. You need money
I'm making this up, but do you see what I mean?
Then my advice would be to figure out which subset of these are so constraining that you can't start the business of doing the thing, and to solve those constraints e.g. by cultivating instrumental resources like research ability. Otherwise, set yourself up with the set of 1-10 which maximize your likelihood of succeeding at the thing, and start doing the thing. Figure the rest out as you go.
It's totally conceivable that an academic lab is the best place available to you. But I would want you to come to that conclusion after having thought hard about it, working backward from the actual goal.
Assuming the aspects of 1-10 which are research skills are covered, my object-level sense is that academia goes wrong on 1, 3, 5, 6, 7, 8, and 9.
All told my algorithm might be something like:
- What other existing entities/ groups look good on these inputs to the scientific progress machine? These might be existing companies, labs, random people on the internet, non-profits, whatever. Would also include looking for academic opportunities that look better on the above. Don't think about made up categories like "non-profit" when doing this. Just figure out what it would look like to work at/with this entity to accomplish the goal.
- What levers do I have to tweak things such that my list of existing places looks even better?
- What would it look like for me to make my own enterprise to directly do the thing? What resources am I missing?
- What opportunities do I have to pursue instrumental goods/ resources that don't look like doing the thing?
- With bias toward doing the thing, see which of working with existing collections of people, pushing existing collections of people to be different in some way, starting your own thing, and gathering instrumental resources you are missing looks like it will lead to the best outcomes.
- Do that thing. Periodically reevaluate.
This probably isn't very helpful, but I don't know of any tricks! I could say more stuff about "industry" vs. "academia" but for the most part I think those conversations are missing the point unless you can drill way more into the specifics of a situation.
Good luck :) Remember that lots of other people are trying to figure the same kind of thing out. In my experience, they are the best people to learn from.
comment by Charles He ·
2021-03-29T00:09:56.574Z · EA(p) · GW(p)
This is so well written, so thoughtful and so well structured.
BE VERY CAREFUL NOT TO GET SUCKED INTO HORRIBLE PUBLISHING INCENTIVES.
This theme or motif has come up a few times. It seems important but maybe this particular point is not 100% clear to the new PhD audience you are aiming for.
For clarity, do you mean:
- On an operational or "gears-level", avoid activity due to (maybe distorted) publication incentives? E.g. do not pursue trends, fads or undue authority, or perform busy work that produces publications. Maybe because these produce bad habits, infantilization, distractions.
- Do not pursue publications because this tends to put you down a R1 research track in some undue way, perhaps because it's following the path of least resistance.
Also, note that "publications" can be so different between disciplines.
A top publication in economics during a PhD is rare, but would basically be worth $1M in net present value over their career. It's probably totally optimal to tag such a publication, even in business, because of the signaling value.
Note that my academic school is way below yours in academic prestige/rank/productivity. It would be interesting to know more about your experiences at MIT and what it offers.
Replies from: eca
↑ comment by eca ·
2021-03-30T15:27:16.656Z · EA(p) · GW(p)
Thanks Charles! Of your two options, I most closely mean (1). As evidence that I don't mean (2):
"Optimize almost exclusively for compelling publications; for some specific goals these will need to be high-impact publications."
My attempt to restate my position would be something like: "Academic incentives are very strong, and it's not obvious from the inside when they are influencing your actions. If you're not careful, they will make you do dumb things. To combat this, you should be very deliberate and proactive in defining what you want and how you want it. In some cases this might involve pushing against pub incentives; in other cases it might involve optimizing for following them really, really hard. What you want to avoid is telling yourself the reason for doing something is A, while the real reason is B, where B is usually something related to academic incentives. Publishing good papers is not the problem, deluding yourself is."
Replies from: AdamGleave
↑ comment by AdamGleave ·
2021-03-30T20:37:36.678Z · EA(p) · GW(p)
Publishing good papers is not the problem, deluding yourself is.
Big +1 to this. Doing things you don't see as a priority but which other people are excited about is fine. You can view it as kind of a trade: you work on something the research community cares about, and the research community is more likely to listen on (and work on) things you care about in the future.
But to make a difference you do eventually need to work on things that you find impactful, so you don't want to pollute your own research taste by implicitly absorbing incentives or others' opinions unquestioningly.