Thanks! I agreed with and appreciated your thoughts on how Psych can actually be relevant to human value alignment as well, especially compared to Neuro!
This seems mostly right to me!
As somebody who ~~wasted~~ spent 7 years doing a Cognitive Neuroscience PhD, I think it's a bad idea for most people to do Neuroscience PhDs. PhDs in general are not optimised for truth seeking, working on high impact projects, or maximising your personal wellbeing. In fact, rates of anxiety and depression are higher amongst graduate students than amongst people of similar age with college degrees. You also get paid extremely badly, which is a problem for people with families or other financial commitments. For any specific question you want to ask, it seems worth investigating whether you can do the same work in industry or at a non-profit; you may be able to study the same questions in a more focused way outside of academia.
So I don’t think doing a Neuro PhD is the most effective route to working on AI Safety. That said, there seem to be some useful research directions if you want to pursue a Neuro PhD program anyway. Some examples include: interpretability work that can be translated from natural to artificial neural networks; specifically studying neural learning algorithms; or doing completely computational research, aka a backdoor CS PhD while fitting your models to neural data collected by other people. (CS PhD programs are insanely competitive right now, and Neuroscience professors are desperate for lab members who know how to code, so this is one way into a computational academic program at a top university if you’re ok working on Neuroscience relevant research questions.)
Vael Gates (who did a Computational/Cognitive Neuroscience PhD with Tom Griffiths, one of the leaders of this field), has some further thoughts that they’ve written up in this EA Forum post. I completely agree with their assessment of neuroscience research from the perspective of AI Safety research here:
Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds. One can ask me what fields I think would be readily deployed towards AI safety without any AI background, and my answer is: math, physics (because of its closeness to math), maybe philosophy and theoretical economics (game theory, principal-agent, etc.)? I expect everyone else without exposure to AI will have to reskill if they’re interested in AI safety, with that being easier if one has a technical background. People just sometimes seem to expect pure neuroscience (absent computational subfields) and social science backgrounds to be unusually useful without further AI grounding, and I’m worried that this is trying to be inclusive when it’s not actually the case that these backgrounds alone are useful.
Going slightly off on a tangent: your original question specifically mentions moral uncertainty. I share Geoffrey Miller’s views in his comment on this thread, that Psychology is a more useful discipline than Neuroscience for studying moral uncertainty. My PhD advisor/my old lab did some of the neuroscience research frequently cited by EAs (e.g., the stuff on how different brain regions/processes affect moral decision making). I have to say I’m not super impressed with this research, and most of the authors on these papers have not gone on to pursue more of this kind of research or solve human value alignment. (Exception being Joshua Greene, who I do endorse working with!)
On the flip side, I think psychologists have done very interesting/useful research on human values (see this paper on how normal people think about population ethics, also eloquently written up as a shorter/more readable EA Forum post here). In this vein, I’ve also been very impressed by work produced by psychologists working with empirical philosophers, for example this paper on the Psychology of Existential Risk.
If you want to focus on moral uncertainty, you can collect way more information from a much more diverse set of individuals if you focus on behaviour instead of neural activity. As Geoffrey mentions, it is *much* easier/cheaper to study people’s opinions or behaviour than it is to study their neural activity. For example, it costs ~$5 to pay somebody to take a quick survey on moral decisions, vs. about $500 an hour to run an fMRI scanner for one subject to collect a super messy dataset that’s incredibly difficult to interpret. People do take research more seriously if you slap a photo of a brain on it, but that doesn’t mean the brain data adds anything more than aesthetic value.
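To make the cost gap concrete, here's a back-of-envelope sketch using the rough, illustrative numbers above ($5 per survey respondent, ~$500 per fMRI subject-hour); the $10k budget is a made-up placeholder:

```python
# Back-of-envelope: participants per fixed budget, using the rough
# illustrative numbers from the comment above ($5/survey, $500/fMRI-hour).
budget = 10_000      # hypothetical research budget in USD

survey_cost = 5      # cost per survey respondent
fmri_cost = 500      # cost per subject-hour of scanner time

survey_n = budget // survey_cost   # respondents you can afford
fmri_n = budget // fmri_cost       # scanner subject-hours you can afford

print(survey_n)  # 2000 survey respondents
print(fmri_n)    # 20 fMRI subject-hours
```

A 100x difference in sample size per dollar, before even accounting for how much noisier the fMRI data is.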
It might make sense for you to check out what EA Psychologists are actually doing to see if their research seems more up your alley compared to the neuroscience questions you’re interested in. A good place to start is here: https://www.eapsychology.org/
Disclaimer: I work on the 1on1 team at 80k, but these comments reflect my individual impressions, though I asked a colleague to have a quick look before I posted.
As someone who did a PhD, this all checks out to me. I especially like your framing of PhDs "as more like an entry-level graduate researcher job than ‘n more years of school’". Many people outside of academia don't understand this, and think of graduate school as just an extension of undergrad when it is really a completely different environment. The main reason to get a PhD is if you want to be a professional researcher (either within or outside of academia), so from this perspective, you'll have to be a junior researcher somewhere for a few years anyway.
In the context of short timelines: if you can do direct work on high impact problems during your PhD, the opportunity cost of a 5-7 year program is substantially lower.
However, in my experience, academia makes it very hard to focus on questions of highest impact; instead people are funneled into projects that are publishable by academic journals. It is really hard to escape this, though having a supportive supervisor (e.g., somebody who already deeply cares about x-risks, or an already tenured professor who is happy to have students study whatever they want) gives you a better shot at studying something actually useful. Just something to consider even if you've already decided you're a good personal fit for doing a PhD!
Thanks so much for sharing this, Michelle!
I think I agree with everything you have written. I also personally feel like my husband and I are having impactful careers despite having a toddler + 1 on the way, and I don't think we would be massively more impactful if we were childfree.
This is due to a combination of factors:
1. Childcare time has replaced friend socialization time, basically completely. So we still have time to do a "normal" amount of work, we have just reprioritized our non-working hours.
2. As you know (being my supervisor haha), I work for a core EA org, which has a WONDERFUL parental leave/support policy. My difficult pregnancy has been happily accommodated in every way, and I have paid parental leave when the baby comes soon, which is a huge weight off my shoulders. I know British/European moms expect this, but as an American, it really is wonderful to have this level of support from my employer. So I think working for the right org is pretty critical for being able to have an impactful career while having kids.
3. My husband founded his own EA startup, so he sets his own schedule. This also allows him to be flexible with his hours and be hyperfocused on working on high impact projects, instead of wasting the working hours he has available on stuff that's less important.
4. My husband is also an excellent partner, who has been averaging more than 50% of the childcare (especially when I'm too pregnant to function). This is a critical factor in me being able to get work done, despite having a high energy toddler.
5. We have access to 8am-6pm daycare, which covers the normal working day. Unfortunately childcare is insanely expensive in the US. We pay over $2k per month in a high cost of living area for one child in daycare. We're lucky to be able to afford it, but figuring out how you're going to manage childcare should also be considered if you want to have kids + impact. Getting free childcare from grandparents is definitely the dream here. We do have some grandparent help, which allows us to do things like go to EAGs for a weekend, but basically nobody besides grandparents or people you pay seems to be interested in helping take care of children in modern Western society. (Kinda sad imo.)
6. Making new humans and trying really hard to give them happy lives seems like having a positive impact to me. I work on some longtermist causes, where it's extremely uncertain what our work now will produce later. But literally creating a new life and taking care of it feels like it has a pretty certain positive expected value :) Or at least more than what I would be doing with my spare time otherwise!
Thanks for summarizing/quoting the most important bits of these articles! But also... AHHHH
I found this guide extremely useful and well formatted. Thanks for putting the effort into writing it! The quotes from Ops people were also a fun way to break up such a big info dump :)
Thanks for sharing this info, Claire!
I think your team correctly concluded that in-person events are enormously valuable for people making big career changes, but running in-person events is expensive and super logistically challenging. I think logistics are somewhat undervalued in the EA community, e.g. I read a lot of criticism along the lines of, "Why don't community organizers or EAGs just do some extremely time costly thing," without much appreciation for how hard it is to get things to happen.
From this perspective, lowering the barrier for in-person events by buying a conference venue seems like a reasonable investment. It's fine to scrutinize the details (were there better deals given location/size constraints?), but I would like more critics of this purchase to acknowledge that buying a conference center has a lot of benefits.
These are really interesting figures, thanks so much for sharing!
Is the 2022 data up to date through November? Or does it cut off substantially earlier in the year? Wondering why it's so much lower than 2021.
Sorry I'm being lazy here and not looking at the raw data myself.
Thanks very much for posting this update!
I totally agree there is a lot of value in going corporate first. I recommend this route to many people! But it does seem unfortunate to not have the choice between EA/corporate, or have the choice set up pretty badly.
Somewhat embarrassing (for me) how you made the same arguments here, but with more clarity and detail than I have, months before my post 😅
100% endorse everything you said! Would have linked to this earlier, just didn't see it when you originally posted, sorry!
Thanks for chiming in here! You're exactly the kind of person who is being put in a bad position :( I hope you can figure something out! (And maybe consider applying for 80k coaching?)
Cool survey! Multiplying average rating by reach is an interesting technique, but I wasn't reading carefully and was super confused by the results for a minute.
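In case it helps other skimmers, here's a minimal sketch of how I understand the scoring to work (resource names and numbers are made up for illustration, not taken from the survey):

```python
# Hypothetical illustration of the survey's scoring technique:
# usefulness rating weighted by how many respondents encountered each resource.
resources = {
    # name: (average_rating out of 5, reach = # of respondents who used it)
    "Resource A": (4.5, 10),
    "Resource B": (3.0, 40),
    "Resource C": (4.0, 25),
}

scores = {name: rating * reach for name, (rating, reach) in resources.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('Resource B', 120.0), ('Resource C', 100.0), ('Resource A', 45.0)]
```

Note how a widely-seen but mediocre resource can outrank a beloved niche one, which is what tripped me up on first read.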
It was smart to break down the recommendations by who was doing the recommending and to whom. Do you have any thoughts on how Rob Miles manages to create useful content for both experts and newcomers? I would have thought a few of the other resources on your list were trying to achieve this as well, but it seems like he's doing the best job.
That's a very important insight! And it's too bad, because programs like Redwood's MLAB are excellent opportunities that I would prefer undergrads to apply to over corporate positions at Bain.
I'm just hoping this post makes orgs update on the benefits of earlier application deadline/acceptances, with the understanding that these might still not outweigh the costs for each specific org.
That's a great point! The exact deadlines differ for each sector. But orgs get a slight advantage for being the first to give out offers, and a massive disadvantage for being the last to give out offers, so it's better to skew earlier than later (if operationally possible).
I've updated the post to say "EA orgs should accept Summer 2023 interns by January" in response to this comment.
Interesting! I support this holding the line.
This is really useful! Thanks for sharing :)
This looks like an awesome opportunity! Thanks for sharing :)
Very few of my peers are having kids. My husband and I are the youngest parents at the Princeton University daycare at 31 years old. The next youngest parent is 3 years older than us, and his kid is a year younger than ours. Considering the national median age at first birth is 30 years old, it seems like a potential problem that the national median is the Princeton minimum.
I wonder what the birth rate is specifically among American parents with/doing STEM PhDs. I'm guessing it's extremely low for people under the age of 45. Possibly low enough to raise concerns about how scientists are not procreating anymore.
Most birth rate statistics I've seen group doctorates in with any professional degree other than a masters, so it's hard to tell what's going on outside anecdotal evidence. For example: https://www.cdc.gov/nchs/data/nvsr/nvsr70/nvsr70-05-508.pdf
Princeton is raising annual stipends to about $45,000. Two graduate student parents now have a reasonable combined household income, especially if they can live in subsidized student housing. I wonder if this will make a big difference in Princeton fertility rates.
On the other hand, none of my NYC friends making way over $90,000 have kids, so this might be a deeper cultural problem.
To be clear, I don't think people who don't want to have kids should have them, or that they're being "selfish" or whatever. But societies without children will literally die, so it's concerning that American society has such strong anti-natal sentiment. Especially if it's the part of American society with some of the smartest people who are more motivated by truth seeking than money.
A lot of people (myself very much included) don't know how to talk about loss in a way that provides comfort to the person experiencing the loss. Thank you so much for this extremely well articulated set of suggestions and framework for implementing them!
Makes sense! Thanks again for writing such a comprehensive report!
Definitely!!!! A lot of journalists seem to cover topics they don't really understand (mainstream media coverage of things like nuclear power or cryptocurrency can be particularly painful), so it was awesome to read something written by a person who gets the basic philosophy.
I think this is a really comprehensive report on this space! Nothing against the report itself, I think you did a great job.
As somebody who has spent the last ~10 years studying neuroscience, I'm basically pretty cynical about current brain imaging/BCI methods. I plan to pivot out of neuro into higher impact fields once I graduate. I just wanted to add my 2 cents as somebody who has spent time doing EEG, CT Scan, MRI, fMRI, TMS, and TDCS research (in addition to being pretty familiar with MEG and FNIRS):
+ I don't think getting high quality structural images of the brain is useful from an EA perspective, though it has substantial medical benefits for the people who need brain scans/can afford to get them. This just doesn't strike me as one of the most effective cause areas, in the same way that a cure for Huntington's disease would be a wonderful thing but might not qualify as a top EA cause area.
+ I don't think getting measures of brain activity via EEG or fMRI has yet produced results that I would consider worth funding from an EA perspective. Again, I'm not saying some results aren't useful (I'm especially impressed with how EEG helped us understand sleep). But I don't think any of this research is substantially relevant to preventing civilizational or existential risks.
+ I don't think our current brain stimulation methods (e.g., TMS, TDCS) have any EA relevance. The stimulation provided from these procedures (in healthy subjects) just doesn't seem to have huge cognitive effects compared to more robust methods (education, diet, exercise, sleep, etc.). Brain stimulation might have much bigger impacts for chronically depressed and Parkinson's patients via DBS. But again I don't think this stuff is relevant to civilizational or existential risks, and I think there are probably much more cost effective ways of improving welfare.
There may still be useful neurotechnology research to be done. But I think the highest impact will be in computational/algorithmic stuff instead of things that directly probe the human brain.
I thought this was a surprisingly good article! Many journalists get unreasonably snarky about EA topics (e.g., insinuate that people who work in technology are out of touch awkward nerds who could never improve the world; suggest EA is cult-like; make fun of people for caring about literally anything besides climate change and poverty). This journalist took EA ideas seriously, talked about the personal psychological impact of being an EA, and correctly (imo) portrayed the ideas and mindsets of a bunch of central people in the EA movement.
Voted, it was surprisingly painless. Fingers crossed for Will, although he was buried in the middle of the pack of names due to unfortunate lack of alphabetical prominence. New cause area: renaming our thought leaders Aaron Aaronson.
Spicy takes, but I think these are good points people should consider!
I'm also doing a PhD in Cognitive Neuroscience, and I would strongly agree with your footnote that:
"Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds."
A bunch of people in my program have gone into research at DeepMind. But these were all people who specifically focused on ML and algorithm development in their research. There's a wide swath of cognitive neuroscience, and other neuro sub-disciplines you list, where you can avoid serious ML research. I've spoken to about a dozen EA neuroscientists who didn't focus on ML and have become pretty pessimistic about how their research is useful to AI development/alignment. This is a bummer for EAs who want to use their PhDs to help with AI safety. So please take this into consideration if you're an early stage student considering different career paths!
This is cool; I often think about how much better the UK system is than the US when it comes to educating doctors.
I think my biggest quibble with your post is: "I assume the odds of a successful campaign are 50%."
I would maybe revise that down to 5%? Professional organizations like the American Medical Association have their professions in a stranglehold; they have financial incentives to keep their profession difficult to access (e.g., it allows them to demand higher wages), and they can easily manipulate the public by saying things like "Don't you want a FULLY trained doctor? Not somebody who skipped undergraduate and went straight to medical school?"
A substantially more skeptical campaign success probability obviously lowers the expected ROI of this effort. But I wonder if other people who know more about politics are as skeptical as me.
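To illustrate how much the success-probability assumption drives the bottom line, here's a minimal expected-value sketch; the cost and benefit figures are hypothetical placeholders I made up, not numbers from the post:

```python
# Minimal EV sketch: how the assumed probability of campaign success
# changes the expected return. Cost/benefit figures are hypothetical
# placeholders, not taken from the original post.
campaign_cost = 1_000_000        # hypothetical campaign cost, USD
benefit_if_success = 50_000_000  # hypothetical value of a successful reform

def expected_roi(p_success):
    """Expected net benefit per dollar spent, given a success probability."""
    return (p_success * benefit_if_success - campaign_cost) / campaign_cost

print(expected_roi(0.50))  # the post's 50% assumption
print(expected_roi(0.05))  # my more skeptical 5%
```

With these placeholder figures, dropping from 50% to 5% cuts the expected return per dollar by more than 90%, so the probability estimate does most of the work in the analysis.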
All that being said, I would vote for your campaign if it came up on my state's ballot!
Thanks! I was just curious, didn't expect a super in depth analysis. Although that would be super cool to see too :)
Cool report! Thanks for sharing.
Was there anything in the report that you or the Happier Lives Institute were particularly surprised by?
This is cool! Thanks for compiling. I really love Focusmate, glad to see it included.
I wonder if it would be possible to allow people to vote for different recommendations so you could sort by # of endorsements? Just as a quick way to see which tools have been useful to the most people.
Great talk, and thanks for including the slides and the transcript!
- Which directions in global priorities research seem most promising?
- Has Andreas ever tried communicating deep philosophical research to politicians/CEOs/powerful non-academics? If so, how did they react to ideas like deontic long-termism? Does he think any of them made a big behavior change after hearing about these kinds of ideas?
I'm a little surprised by your perspective. My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety stuff. Who else are you imagining funding this space who isn't already?
Also, a bunch of EA community organizers are pushing AI risks substantially harder as a cause area now than they did 5 years ago (e.g. 80k, many university groups).
If you're worried about short timelines, shouldn't the push be to transition people from meta work on community building to object level work directly on alignment?
Thanks for sharing your thoughts! Let me know if I misunderstood something.
I'm not optimistic about this. I do a version of street outreach every year at Princeton's club fairs, where I pitch EA to young, smart, analytical people, aka a pretty EA-friendly demographic. Our conversion rate of outreach/pitches to big life changes is TINY. I would believe TLYCS's numbers if they conclude that online advertising is more cost effective than in-person outreach.
NYC needs a bigger EA community! Super happy you are working on this :)
Is there any chance you could share more information about the coworking space and community center?
This is a great list of scientists! Thanks for compiling :)
I would also add Tom Griffiths to this list.
Sam Gershman, who you mention, actually just published a really accessible book, What Makes Us Smart, which I recommend to people new to the field of human intelligence. He's probably one of the most productive and brilliant scientists in this field. https://press.princeton.edu/books/paperback/9780691205717/what-makes-us-smart
For people who are interested in the intersection of AI/psychology, I strongly encourage you to focus on high level/computational questions, staying away from super low level biological/chemical neuroscience questions. There are a bunch of EA Neuroscientists I've spoken to who are all pretty disillusioned by the progress we can make via brain computer interfaces or low level cellular research. But people are excited about computational models of human cognition!
Thanks for writing this!
I appreciate your points about how EA grantmakers are 1. part-time, 2. extremely busy, and 3. should spend more time getting grants out the door instead of writing feedback. I hope nobody has interpreted your lack of feedback as a personal affront! It just seems like the correct way to allocate your (and other grantmakers') time.
I think the EA community as a whole is biased too far towards spending resources on transparency at the expense of actually doing ~the thing~. Hopefully this post makes some people update!
Really cool survey, and great write up of the results! I especially liked the multilevel regression and post-stratification method of estimating distributions.
Peter Singer seems to be higher profile than the other EAs on your list. How much of this do you think is from popular media, like The Good Place, versus from just being around for longer?
Peter Singer is also well known because of his controversial disability/abortion views. I wonder if people who indicated they only heard about Peter Singer (as opposed to only hearing about MacAskill, Ord, Alexander, etc.) scored lower on ratings of understanding EA? I've had conversations with people who refused to engage with the EA community because we were "led by a eugenicist", but that's clearly not what EA believes in.
Also kinda sad EA is being absolutely crushed by taffeta.
Great question! We need more research ;)
Sounds really cool! Would love to hear more when you're ready :)
This is such cool research! Thanks to everybody who contributed :)
I've found the majority of EA University Club members drift out of the EA community and into fairly low impact careers. These people presumably agree with all the EA basic premises, and many of them have done in depth EA fellowships, so they aren't just agreeing to ideas in a quick survey due to experimenter demand effects, acquiescence bias, etc.
Yet, exposure to/agreement with EA philosophy doesn't seem sufficient to convince people to actually make high impact career choices. I would say the conversion rate is actually shockingly low. Maybe CEA has more information on this, but I would be surprised if more than 5% of people who do Introductory EA fellowships make a high impact career change.
So I would be super excited to see more research into your first future direction: "Beyond agreement with basic EA principles, what other (e.g., motivational or cognitive) predictors are essential to becoming more engaged and making valuable contributions?"
Effective Thesis is awesome! I will mention their coaching services in the top post :)
Great advice! Thanks for sharing :)
A bunch of this definitely does generalize, especially:
"If you have multiple research ideas, consider writing more than one (i.e. tailored) SOP and submitting the SOP which is most relevant to faculty at each university."
"Look at groups' pages to get a sense of the qualification distribution for successful applicants, this is a better way to calibrate where to apply than looking at rankings IMO. This is also a good way to calibrate how much experience you're expected to have pre-PhD."
And if you can pull this off, you'll make an excellent impression: "For interviews, bringing up concrete ideas on next steps for a professor's paper is probably very helpful."
CS majors and any program that's business relevant (e.g. Operations Research and Financial Engineering) have excellent earning/job prospects if they decide to leave partway through. I think the major hurdle to leaving partway through is psychological?
+1 to AXRP!
I had the same exact reaction! "Only $200 for one attendee? In this economy? What is that, 20 bananas?"
Thanks, Julia! The "Advice for responding to journalists" doc you link is really excellent. Everyone should read this before speaking to the media. https://docs.google.com/document/d/1GlVEKYdJU2LqE6tXPPay_2tBmJTQrsQxAO27ZaeKAQk/edit#heading=h.86t1p0fnb9uz
Some advice I would add: if a journalist asks to interview you, try to understand where they are in their research.
Do they have a narrative that they are already committed to and they're just trying to get a juicy quote from you? If so, it might not make sense to talk to them since they might twist whatever you say to fit the story they have already written.
Alternatively, are they in information gathering mode and are honestly trying to understand a complex issue? If they have not written their story yet and you think you can give them information that will make their writing more accurate, then it makes more sense to do an interview.
That's a good point, prestige is very important. I would argue having a good relationship with your advisor is the most important, since it's a bad idea to be in an abusive relationship for multiple years, but I will edit the main post to take this perspective into account!
Sorry, you're right about Bryan Caplan making a more nuanced argument than what I suggested! But I found his whole argument about how you can have more time if you don't drive your kid around to activities basically inapplicable to early childhood. My partner and I easily spent 40 hours a week on childcare related stuff, and the only places my kid goes to are daycare and the park. Young children just need a lot of attention! All his arguments about how to save time basically only apply to older kids who can read and amuse themselves, which sounds great, but is currently useless advice for me.
I totally agree with your points on: movements that frown upon having children will repel top talent, and you can have kids and still be an über effective altruist.
I disagree with the idea that having kids makes people care more about the future. I deeply respect Julia Wise, and maybe this is true for her and other people, but I have found being a parent hasn't really lengthened my philanthropic time horizons. I would change my mind on this if anybody has studied changes in altruistic behaviors before/after people became parents, but after having a child I've actually found myself more open to hyper short-termist altruism that I never would have considered pre-having children. (E.g., adopting a child makes more sense to me now. )
I also disagree on the Bryan Caplan stuff on how you can be a good parent with less effort than current USA norms dictate. Like, if you have an infant that needs to eat every 2 hours, 24 hours a day, for 3 months, there's no way to slack on that. You can hire somebody to do it for you, you can live near helpful family, or you can be a deadbeat parent. But somebody has to do this intense amount of labor or the child will die. Things definitely get easier once kids get older, and you can just choose to let your kid read all day after school instead of driving them around to a million extracurricular activities, but Bryan's focus on this being the "norm" that you can easily ignore reveals more about the social class of people he feels peer pressure from than what children actually normally do. I still think it's great to have kids, even though it's a lot of work. I just don't like people acting like it's not a lot of work.
I think one of the main reasons EA people are underrating having kids is because they almost never interact with children? At least in graduate school, very few people have children. I'm the only student in my department with a child. I get the sense that many EAs live in similar age segregated environments. I would encourage more people to babysit their young relatives if they have the opportunity, just so they can see how fun it is :)