Introducing Probably Good: A New Career Guidance Organization

post by omernevo, sella · 2020-11-06T14:50:56.726Z · EA · GW · 76 comments


  How You Can Help
  Probably Good Overview
  Core Principles
  Short-term Plans
  Open questions
  Further reading
  About Us

We’re excited to announce the launch of Probably Good, a new organization that provides career guidance intended to help people do as much good as possible.


For a while, we have felt that there was a need for a more generalist careers organization than 80,000 Hours — one which is more agnostic regarding different cause areas and might provide a different entry point into the community for people who aren’t a good fit for 80K’s priority areas. Following 80,000 Hours’ post about what they view as gaps in the careers space [EA · GW], we contacted them about how a new organization could effectively fill some of those gaps.

After a few months of planning, asking questions, writing content, and interviewing experts, we’re almost ready to go live (we aim to start putting our content online in 1-2 months) and would love to hear more from the community at large.

How You Can Help

The most important thing we’d like from you is feedback. Please comment on this post, send us personal messages on the Forum, email us (omer at probablygood dot org, sella at probablygood dot org), or set up a conversation with us via videoconference. We would love to receive as much feedback as we can get.

We’re particularly interested in hearing about things that you, personally, would actually read // use // engage with, but would appreciate absolutely any suggestions or feedback.

Probably Good Overview

The most updated version of the overview is here.

Following is the content of the overview at the time this announcement is posted.


Probably Good is a new organization that provides career guidance intended to help people do as much good as possible. We will start by focusing on online content and a small number of 1:1 consultations. We will later consider other forms of career guidance such as a job board, scaling up the 1:1 consultations, more in-depth research, etc.

Our approach to guidance is focused on how to help each individual maximize their career impact based on their values, personal circumstances, and motivations. This means that we will accommodate a wide range of preferences (for example, different cause areas), as long as they’re consistent with our principles, and try to give guidance in accordance with those preferences.

Therefore, we’ll be looking at a wide range of impactful careers under different views on what to optimize for or under various circumstantial constraints, such as how to maximize impact within specific career paths, within specific geographic regions, through earning to give, or within more specific situations (e.g. making an impact from within a large corporation).

There are other organizations in this space, the most well-known being 80,000 Hours. We think our approach is complementary to 80,000 Hours’ current approach: Their guidance mostly focuses on people aiming to work on their priority problem areas [EA · GW], and we would be able to guide high quality candidates who aren’t. We would direct candidates to 80,000 Hours or other specialized organizations (such as Animal Advocacy Careers) if they’re a better fit for their principles and priority paths.

This characterization of our target audience is very broad; this has two main motivations. First, as part of our experimental approach: we are interested in identifying which cause areas currently have the most unserved demand. By providing preliminary value in multiple areas of expertise, we hope to more efficiently identify where our investment would be most useful, and we may specialize (in a more informed manner) in the future. The second motivation for this is that one possibility for specialization is as a “router” interface - helping individuals make preliminary decisions tailored to their needs and context, and then connecting them to specific domain experts (or specialized organizations).

We believe the three main sources of impact for this project are:

You can see more details about why we believe this project will be impactful here.

Core Principles

Following are the core principles we believe this project should work by:

You can see more details about these core principles here.

Short-term Plans

Following are the areas we were considering focusing on in our early stages. Much like everything else in this document, we are very interested in feedback about them and are willing to change them:

You can see more details about our short term plans here.

Open questions

We are always in the process of trying to improve our understanding of our most critical points of uncertainty. We have a long list of open questions relating to our strategy, but we believe these are a few of the most critical ones:

You can read more about our open questions here.

Further reading

You can find the rest of our preliminary documents (also linked throughout this document) here:

About Us

Probably Good was founded by Omer and Sella Nevo - two brothers committed to enacting large-scale positive change. After several years of providing impact-driven career advice locally, we wanted to try to fill gaps in impact-driven, evidence-based career guidance globally.

Omer was the co-founder and CEO of Neowize, a YC-backed startup, which was acquired by Il Makiage, where Omer currently acts as VP of Research & Development. He is also a co-founder of Effective Altruism Israel. 

Sella is the head of Google’s Flood Forecasting Initiative. He also teaches Applied Ethics and Information Security at Tel Aviv University, is a Venture Partner at the Firstime VC advising on impact-driven investments, and is the founder and head of the board of Effective Altruism Israel.

Thank you for your time and feedback!



Comments sorted by top scores.

comment by MichaelA · 2020-11-07T07:13:28.583Z · EA(p) · GW(p)

Could you say why you chose the name Probably Good, and to what extent that's locked-in at this stage?

I may be alone in this, but to me it seems like a weird name, perhaps especially if a large part of your target audience will be new EAs and non-EAs. 

Firstly, it seems like it doesn't make it at all clear what the focus of the organisation is (i.e., career advice). 80,000 Hours' name also doesn't make its focus clear right away, but the connection can be explained in a single sentence, and from then on the connection seems very clear. Whereas if you say "We want to give career advice that's probably good", I might still think "But couldn't that name work just as well and for just the same reason for donation advice, or AI research, or relationship advice, or advice about what present to buy a friend?" 

This is perhaps exacerbated by the fact that "good" can be about either morality or quality, and that the name doesn't provide any clues that in this case it's about morality. (Whereas CEA has "altruism" in the name - not just "effective" - and GiveWell has "give" in the name - not just "well".)

In contrast, most other EA orgs' names seem to more clearly gesture at roughly what they focus on (e.g., Animal Advocacy Careers, Animal Charity Evaluators, GiveWell, Giving What We Can, Centre for Effective Altruism...).

Secondly, I think I'd feel pretty underwhelmed if someone introduces what they do as "We want to give career advice that's probably good." 

I'd be even more strongly turned off if someone said "We give highly precise career advice that's all definitely good for everyone", as I'd think they're wrong and overconfident. And I'd want your org, 80,000 Hours, GiveWell, etc. to all make very clear how complicated the questions they tackle are and how confident they are in what they say (which should and will often be "not very"). 

But maybe the best way to do that is by saying something like "We want to help people have the highest impact they can have. This is extremely complicated, and we know we don't have all the answers, and on some questions we have pretty much no clue at all. But we work hard to get the best answers we can, and there are some questions where we're pretty confident we can be quite helpful." (This is just what came to mind quickly; I'm not saying it's the ideal pitch.)

Maybe to me, starting by saying "Probably Good" sounds not like virtuous humility and recognition of uncertainty, but rather like a lack of ambition - like shrugging and settling for something decent, rather than pushing hard to get closer and closer to the best answers, even if they can never be reached with certainty. (I'm saying that's what the phrase brings to mind, not that I think that accurately describes your approach.)

I suspect this might be a bigger problem for new or non-EAs. They might think the answers should be relatively easy and certain, as they haven't considered complexities like downside risks and counterfactuals and flow-through effects. If so, they might find something less attractive if it says it's just "Probably Good". And/or they might be used to overconfidence, and thus instinctively interpret people saying "certainly" as "probably", "probably" as "maybe", etc. 

So my personal view is that it seems likely you could come up with a better name. And it also seems like the best time to think carefully about this is now or soon, before you e.g. put up a website.

Replies from: willbradshaw, rime, MaxRa
comment by omernevo · 2020-11-07T10:12:52.343Z · EA(p) · GW(p)

First of all, thank you for the feedback! It's not always easy to solicit quality (and very thoroughly justified) feedback, so I really appreciate it.

Before diving into the specifics, I'll say that on the one hand - the name could definitely change if we keep getting feedback that it's suboptimal. That could be in a week or in a year or two, so the name isn't final in that sense.

On the other hand, we did run this name by quite a few people (including some who aren’t familiar with EA). We tried (to the best of our ability) to receive honest feedback (like not telling people that this is something we're setting up, or letting someone else solicit the feedback). Most of what you wrote came up, but rarely. And people seemed to feel positively about it. It's definitely possible that the feedback we got on it was still skewed positive, but it was much better for this name than for other options we tried.

Now, to dive into the specifics and my thoughts on them:

* The name doesn't make the function clear: I think this is a stylistic preference. I prefer having a name that's more memorable, when the function can be explained in a sentence or two right after it. I know the current norm for EA is to name orgs by stating their function in 2 or 3 words, but I think the vast majority of orgs (for profit and non-profit) choose a name that doesn't just plainly state what the org does. I will mention that, depending on context, what might appear is "Probably Good Career Advice", which is clearer (though still doesn't fully optimize for clarity).

* Good can mean quality and morality: Again, I liked that. We do mean it in both ways (the advice is both attempting to be as high quality as possible and as high as possible in moral impact, but we are working under uncertainty in both parameters).

* Turning people off by giving the message that the product isn't good or that we're not ambitious in making it good: I pretty much fully agree with you on the analysis. I think this name reduces the risk of people expecting a level of certainty that we'll never reach (and is very commonly marketed in non-EA career advice) and increases the risk of people initially being turned off by perceived low quality or low effort.

I also like and agree with your "pitch", and that is more or less how I'm thinking about the issue.

Two relevant points on weighing this trade-off:

1. Currently, I'm more worried about setting too high expectations than the perception of low quality. Both because I think we can potentially cause more harm (people following advice with less thought than needed) and because I think there are other ways to signal high quality and very few ways to (effectively) lower people's perceived certainty in our advice.

2. Most people we ran the name by did catch on that the name was a little tongue-in-cheek in its phrasing. This wasn't everyone, but the people who did see that didn't think there was a signal of lower quality.

I do agree there's a risk there, but I see it as relatively small, especially if I'm assuming that most people will reach us through channels where they have supposedly heard something about us and aren't only aware of the name.

To summarize my thoughts:

I don't think it's a perfect name.

I like that it's a memorable phrase rather than a bland statement of what we do. I like that it's a little tongue-in-cheek and that it does a few things at the same time (the two meanings of good, alluding to the uncertainty). I like that it puts our uncertainty front and center.

I agree there's a risk of signaling low quality / effort, and that all of the things that I like could also be a net harm if I'm wrong (which isn't particularly unlikely).

We'll collect more feedback on the name and we'll change if it doesn't look good.

Replies from: EricHerboso, Tsunayoshi, MichaelA, Manuel_Allgaier, jtm, MichaelA, MichaelA, MichaelA, michaelchen
comment by EricHerboso · 2020-11-10T10:40:26.415Z · EA(p) · GW(p)

In addition to the other points brought up, I wanted to add that "probably good" has ~4 million google search results, and the username/url for "ProbablyGood" has already been taken on Facebook, Twitter, Instagram, etc. This may make the name especially difficult to effectively market.

comment by Tsunayoshi · 2020-11-07T14:33:04.673Z · EA(p) · GW(p)

* Good can mean quality and morality: Again, I liked that. We do mean it in both ways (the advice is both attempting to be as high quality as possible and as high as possible in moral impact, but we are working under uncertainty in both parameters).

For what it's worth, I liked the name specifically because to me it seemed to advertise an intention of increasing a lot of readers' impact individually by a moderate amount, unlike 80,000 Hours' approach, where the goal is to increase fewer readers' impact by a large amount.

I.e. unlike Michael I like the understatement in the name, but I agree with him that it does convey understatement. 

comment by MichaelA · 2020-11-07T11:07:51.634Z · EA(p) · GW(p)

I continue to like how thoughtful you two seem to be! It seems like you've already anticipated most of what I'm pointing to and have reasonable reasons to hold your current position. I especially like that you "tried (to the best of [your] ability) to receive honest feedback (like not telling people that this is something [you're] setting up or letting someone else solicit the feedback)."

I still think this name doesn't seem great to me, but now that's with lower confidence. 

(Also, I'm just reporting my independent impression - i.e., what I'd believe if not updating on other people's beliefs - and don't mean to imply there's any reason to weight my belief more strongly than that of the other people you've gotten feedback from.)

I'll again split my responses into separate threads.

comment by Manuel_Allgaier · 2020-11-11T16:17:43.751Z · EA(p) · GW(p)

FWIW: 75 upvotes (as of now) for Michael's comment seem like strong evidence that at least a significant fraction of forum readers find the name "weird" or "off-putting" at first glance. In most cases, that might be enough for people not to look into it more (e.g. if it's one of hundreds of posts on their Facebook timeline).

Even if the other half of people find the name great, I think I'd rather go for a less controversial name which no-one finds weird (even if fewer people find it great). 

Finding a good name is difficult - all the best, and let us know if we can help! You could e.g. solicit ideas here or in a Facebook group, and run polls in the "EA polls" group to get better quantitative feedback.

Replies from: sella
comment by sella · 2020-11-13T09:58:27.586Z · EA(p) · GW(p)

We’re definitely taking into account the different comments and upvotes on this post. We appreciate people upvoting the views they’d like to support - this is indeed a quick and efficient way for us to aggregate feedback.

We’ve received recommendations against opening public polls about the name of the organization from founders of existing EA organizations, and we trust those recommendations so we’ll probably avoid that route. But we will likely look into ways we can test the hypothesis of whether a “less controversial” name has positive or negative effects on the reaction of someone hearing this name for the first time.

Replies from: MaxRa
comment by MaxRa · 2020-11-15T15:08:57.997Z · EA(p) · GW(p)

Sorry if this is not helpful, but I felt like brainstorming some names.

  • Worthwhile/Worthy Pursuits
  • Paths of Impact
  • Good Callings
  • Careers for Good/Change
  • Good Careers Advice
  • Altruistic Career Support
  • (Impactify, WorkWell seem already taken... and for the latter GiveWell might not appreciate the association)
Replies from: RandomEA
comment by RandomEA · 2020-11-23T04:12:44.902Z · EA(p) · GW(p)

How about just Good Careers?

The two most widely known EA organizations, GiveWell and 80,000 Hours, both have short and simple names.

comment by jtm · 2020-11-16T23:12:00.590Z · EA(p) · GW(p)

I think this name reduces the risk of people expecting a level of certainty that we'll never reach (and is very commonly marketed in non-EA career advice)

Just commenting to say that, in my view,  it's really promising for your project that this concern is so front-and-center already.  

I'm probably preaching to the choir, but I think that epistemic modesty is absolutely key in EA, and working hard to communicate your uncertainty – even when your audience is looking for certainty – is even better. 

Best of luck!

Replies from: jtm
comment by jtm · 2021-03-04T14:41:39.460Z · EA(p) · GW(p)

Revisiting this just to say that, for what it's worth, the Danish beer company Carlsberg has been very successful with its slogan of being "Probably the Best Beer in the World."

comment by MichaelA · 2020-11-07T11:22:17.162Z · EA(p) · GW(p)

I think this name reduces the risk of people expecting a level of certainty that we'll never reach (and is very commonly marketed in non-EA career advice)

[...] Currently, I'm more worried about setting too high expectations than the perception of low quality. Both because I think we can potentially cause more harm (people following advice with less thought than needed) and because I think there are other ways to signal high quality and very few ways to (effectively) lower people's perceived certainty in our advice.

I agree that: 

  • Many non-EA things market themselves with more certainty than is warranted
  • EA things that don't want to be perceived as very confident or as having definitive answers sometimes are anyway (e.g., 80k have often expressed that this happens to them)
  • It's worth making serious efforts to mitigate that risk
  • This name might help with mitigating that

From my current perspective, this might be the strongest argument for Probably Good as the name. 

I don't know enough to say whether there are indeed "very few ways to (effectively) lower people's perceived certainty in our advice". (Though I think one bit of evidence in favour of that is that 80k seems to struggle with this despite putting a lot of effort into it.) Could you expand on why you think that? 

If you're right about that, and the name Probably Good would substantially help with this issue, then that seems like quite a strong argument indeed for this name.

But maybe if you're right about the above claim, that's also evidence that the name Probably Good won't substantially help?

Another framing is that the marginal risk-mitigation from having that name might be relatively small, if you'll in any case infuse a lot of the rest of the project with clear statements of uncertainty and efforts to push against being taken as gospel. I say this (with low confidence) because:

  • I'd imagine that for many people, those statements and efforts will be enough.
  • And for some people, any EA career advice provider, and especially any "lists" or concrete suggestions they provide, will be taken roughly as gospel, regardless of that provider's efforts to prevent that.
  • So I feel unsure whether there'd be many people for whom the name being Probably Good would  substantially affect the extent to which they overweight the advice, or get angry if following the advice doesn't work out, or the like.
    • But maybe there would be - I wouldn't claim to have any real expertise or data on this. And you've obviously thought about it much more than me :)
Replies from: omernevo
comment by omernevo · 2020-11-07T20:31:19.584Z · EA(p) · GW(p)

I think we agree on more than we disagree :-)

I was thinking of two main things when I said there aren’t many ways to reduce people’s expectation of certainty.

The first, as you mentioned, is 80k’s experience that this is something where claiming it (clearly and repeatedly) didn’t have the desired outcome.

The second is through my own experience, both in giving career advice and in other areas where I did consultation-type work. My impression was (and again, this is far from strong evidence) that (1) this is hard to do and (2) it gets harder if you don’t do it immediately at the beginning. So for example, when I do 1:1s - that’s something I go into when setting expectations in the first few minutes. When I didn’t, that was very hard to correct after 30 minutes. This is one of the reasons that I think having this prominent (it doesn’t have to be the name, it could be in the tagline, etc.) could be helpful.

Your later points seem to indicate something which I also agree with: That naming isn’t super important. I think there are specific pitfalls that can be seriously harmful, but besides that - I don’t expect the org name to have a very large effect by itself one way or another.

comment by MichaelA · 2020-11-07T11:46:43.317Z · EA(p) · GW(p)

The name doesn't make the function clear: I think this is a stylistic preference. [...] I think the vast majority of orgs (for profit and non-profit) choose a name that doesn't just plainly state what the org does. 

Yeah, I think this is true, and reduces the importance of my first "argument against" the name. (I think my second argument seems a bigger deal to me than the first one, but I didn't make that clear.)

I do agree there's a risk there, but I see it as relatively small, especially if I'm assuming that most people will reach us through channels where they have supposedly heard something about us and aren't only aware of the name.

That's a good point. I think this reduces both the risks and also perhaps the benefits of any particular name (as it makes precisely what the name is less important in people's overall views or actions regarding the organisation).

comment by MichaelA · 2020-11-07T11:14:14.964Z · EA(p) · GW(p)

I will mention that, depending on context, what might appear is "Probably Good Career Advice", which is clearer (though still doesn't fully optimize for clarity).

Yeah, that helps with the first "issue" I raised. 

Though reading that sentence made me realise another potential issue with the name (or maybe another thing that was subconsciously part of my initial aversion to it): I think it sounds to me quite tongue-in-cheek and non-serious, in a way that might not be best for your aims. (You note the "tongue-in-cheek"-ness later in your comment as a positive, and I think it can be sometimes, but in this particular case I currently think it may be more likely to be negative.)

If someone directed me to "Probably Good Career Advice", it might sound like either some sort of joke/prank/spoof, or something that was real but the name of which is sort-of a joke. And I might assume it was set up by people who are still in college. (It maybe feels like the sort of name the Weasley brothers in Harry Potter would've come up with.)

So if what I'm after in this context is advice on how to maximise my impact on the world, I might think these people probably aren't the sort of people who'll be addressing that serious question in a serious way. I think this would actually be true for me, and I'm only 24 and did stand-up comedy for several years - i.e., I'm not a very "serious person", but I've got my "serious person" hat on when I'm first engaging with a new org regarding how to make my career impactful. I imagine this issue might be more pronounced on average for people who are older or "more serious" than me, which includes a lot of potentially impactful people.

This is different to e.g. 80k having some tongue-in-cheek parts of some articles or podcast episodes, because that's not the very first thing someone will see from 80k, and it's always just a part of a larger thing that's mostly focused on impact. With the name Probably Good, that's essentially the first thing someone will see from the org, and it's not just a part embedded in something else (the name is like its own thing, not a sentence in an article). 

But it's totally possible a higher proportion of your target audience would be attracted to than pushed away by the tongue-in-cheek-ness of the name; I'm just going by my own reaction, which is of course a minuscule sample size.

Replies from: omernevo
comment by omernevo · 2020-11-07T20:32:11.414Z · EA(p) · GW(p)

This is the risk we were most worried about regarding the name. It does set a relatively light tone. We decided to go with it anyway for two reasons:

The first is that the people we talked to said that it sounds interesting and interested them more than the responses we got for more regular, descriptive names.

The second is that our general tone in writing is more serious. Serious enough that we’re working hard to make sure that it isn’t boring for some people who don’t like reading huge walls of dense text. We figure it’s best to err on the other side in this case.

comment by mic (michaelchen) · 2020-12-23T04:56:10.470Z · EA(p) · GW(p)

I'm not a fan of the name "Probably Good" because:

  • if it's describing the advice, it seems like the advice might be pretty low-effort and not worth paying attention to
  • if it's describing the careers, it sounds like the careers recommended have a significant chance of having a negative impact, so again, not worth reading about
comment by Will Bradshaw (willbradshaw) · 2020-11-10T10:05:50.272Z · EA(p) · GW(p)

I want to briefly second (third?, nth?) this. I'm potentially pretty excited about more EA oriented career advice/coaching/mentoring from an EA perspective, but I think I'd feel kind of embarrassed about referring someone to an organisation called "Probably Good".

When I saw the title of this post I thought it was evaluating whether or not another career guidance organisation would be good or not, and concluding yes. I was pretty surprised to discover this was not the case. That confusion might be kind of funny to some people, I guess, but I don't think it bodes terribly well. In general I think jokey org names are a pretty bad idea.

Replies from: omernevo
comment by omernevo · 2020-12-08T08:54:11.497Z · EA(p) · GW(p)

Just writing a quick comment here that I've changed the title of this post to be less confusing.

The previous title, "A New Career Guidance Organization: Probably Good", does sound like this is an evaluation. I didn't want it to seem like this comment didn't make sense to people who haven't seen the previous post title.

comment by Rémi T (rime) · 2020-11-09T11:52:39.688Z · EA(p) · GW(p)

The quality of this conversation is awesome!

I think Probably Good is a great name. What are some other good names you have considered so far? Does anyone have alternative ideas?

My first understanding of the name was something like "this is a website that will help me have a career that will probably have a good impact", where probably meant something like ~70%-ish. I thought this wasn’t very ambitious, but it also had something intriguing, so I felt curious to learn more.

I’d like my career to be (almost) guaranteed to have some good consequences. I think my odds of doing some good with my career if EA didn’t exist at all would be above 95%. (As many people interested in EA, I already wanted to do good when I discovered the movement.)

So I’d be even more interested in a website which can probably help me do even better than I would have done without its advice.

I’m not sure whether “Probably Better” would be a better name than “Probably Good”. I feel like it preserves the modesty and the catchiness, while also making it sound a little more ambitious. It could also be in line with your experimental approach, trying to make the quality of your advice better as you gain experience.

What do you think? :)

Replies from: omernevo
comment by omernevo · 2020-11-10T09:40:43.550Z · EA(p) · GW(p)

My initial intuition (stressing even more that this is based on no evidence but my best guess) is that the name "Probably Better" would be more confusing to people than "Probably Good". I'm expecting a lot of people asking "better than what?"

It also loses the meaning of good as in moral good (which I like, but not everyone here did).

comment by MaxRa · 2020-11-07T09:38:29.990Z · EA(p) · GW(p)

That was also my first thought. My brain autocompleted something like "Probably good, but wouldn't be surprised if bad". I think I don't mind names being more or less informative, though, as long as the name is unique and sounds nice (though the EA standard seems to be more descriptive rather than less).

(And thanks to the founders, I really would love to see new orgs cover what 80,000 Hours doesn't!)

comment by lincolnq · 2020-11-09T19:12:31.719Z · EA(p) · GW(p)

I read your Overview and several of the other materials and feel there is a lack of examples. Your idea seems large and abstract, and even after reading a bunch of your materials, I don't feel that I really understand what your career guidance is -- or especially what it isn't.

The only hook I have to compare this to is 80,000 Hours, and the comparison you seem to be pointing at is "80k but for more kinds of people". Instinctively, this feels too broad: 80k is presumably doing well in part because they chose to focus instead of doing everything. To help with this, it might make sense to answer strategic questions like: if you were to merge with 80k, would it be better or worse for the world? Why did 80k choose their focus one way, and why are you choosing differently? What sorts of impact can you make that 80k will never be able to achieve?

Replies from: MichaelA, omernevo
comment by MichaelA · 2020-11-11T05:49:47.108Z · EA(p) · GW(p)

FWIW, I personally felt like: 

  • The lack of examples is totally reasonable at this early stage
  • It mostly seemed clear to me what this organisation's scope, aims, and differences from 80k would be
  • The important uncertainties I'd have about the organisation's intended scope, aims, and differences from 80k were mostly already in their explicit Open Questions.
  • There were some other uncertainties I had, which I raised in other comments. But that process itself seems like an example of why it's ok for the materials to have been fairly abstract at this stage; it's probably more efficient for the authors to (a) produce what they produced, post it publicly, have people ask about the specific confusing bits, and then provide clarity by answering those questions publicly, rather than (b) trying to guess by themselves what will and won't be clear to other people.
    • Then they can adjust their materials in light of public feedback and questions.
    • (It's often hard to guess what will be clear to other people when you yourself have lots of context on what you were thinking and why.)
comment by omernevo · 2020-11-10T15:21:22.943Z · EA(p) · GW(p)

Thank you for the input!

I think some of the questions you raised are (at least partially) answered in our documents. Specifically, where we detail the impacts we hope to achieve, those are areas where we think we would potentially have a comparative advantage over 80,000 Hours. Areas where we would be similar to 80,000 Hours wouldn’t be ones where we’d expect to have significant counterfactual impact.

Regarding the abstractness and general nature of the documents, that’s completely fair. I expect things will be a lot clearer when we have a website up and some content, rather than documents explaining the principles by which we are creating the content. 

As we’ve written in a few places, we’re taking this one step at a time and trying to get as much feedback as possible at every stage. I hope it won’t be very long before we’re able to start publishing some of our materials, which will be a good example of our actual work and will convey the specifics of our focus.

comment by pmelchor · 2020-11-06T16:19:09.724Z · EA(p) · GW(p)

Sella, thanks for the post. I think this is a very interesting idea (and I am guessing that other non-US/UK EA groups may think so as well). I see it as doing relative optimization in a much larger space rather than absolute optimization within a small group (people who actually have a chance of going into 80,000 Hours's highest-impact paths).

In that sense, Probably Good reminds me of what Elijah explained here [EA(p) · GW(p)] about what the ImpactMatters team is trying to do under their new roof at Charity Navigator:

Certainly in typical EA terms, many of the nonprofits that are analyzed are not the most cost-effective. But we also know that standard EA nonprofits are a fraction of the $300 bil nonprofit sector, and there is a portion of that money that has high intra-cause elasticity but low inter-cause elasticity. Impact analysis could be a way of shifting that money, yielding very cost-effective returns [...]

Replies from: sella
comment by sella · 2020-11-06T17:01:31.009Z · EA(p) · GW(p)

Great point Pablo.

I think the analogy to ImpactMatters is insightful and relevant, and indeed reaching a broader audience/scope (even at the cost of including less impactful career paths) is part of the justification for this work. I think the difference between inter-cause elasticity and intra-cause elasticity may be even larger when discussing careers, because in addition to people's priorities and values, many people will have education, experience, and skills which make it less likely (or even less desirable) for them to move to a completely different cause area.

I do however also want to highlight that I think there are justifications for this view beyond just a numbers game. As we discuss in our overview and in our core principles, we think there are disagreements within EA that warrant some agnosticism and uncertainty. One example is the more empiricist view, which focuses on measurable interventions and is skeptical of speculative work that cannot be easily evaluated or validated, vs. the more hits-based approach, which focuses on interventions that are less certain but are estimated to have orders of magnitude more impact in expectation. These views are (arguably) at the crux of comparisons between top cause areas that are a core part of the EA community (e.g. global poverty & health vs. existential risk mitigation). For many people working in both of these cause areas, we genuinely believe careers within their field are the most promising thing they could do.

Additionally, we believe that broader career advice is not only useful in optimizing the impact of those who would not choose top priority paths, but may actually lead to more people joining top priority paths in the focus areas of existing career orgs in the long run. As we mention in our overview and in our paths to impact, and based on our experience in career guidance so far, we believe that providing people with answers to the questions they already care about, while discussing crucial considerations they might not think about often, is a great way to expose people to impact maximization principles. Our hope is that even if we cared exclusively about the top priority paths already researched by 80K and others, this organization would end up having a net positive effect on the number of people who pursue those paths. Whether this will be the case, of course, remains to be seen - but we intend to measure and evaluate this as one of our core questions moving forward.

Replies from: pmelchor, MichaelA
comment by pmelchor · 2020-11-06T17:22:45.068Z · EA(p) · GW(p)

Yes, thanks for that: I can see the broader strategic implications. I actually think the equivalent to "but actually may lead to more people joining top priority paths in the focus areas of existing career orgs in the long run" may also be true in the effective giving space.

comment by MichaelA · 2020-11-07T07:31:25.670Z · EA(p) · GW(p)

Whether this will be the case, of course, remains to be seen - but we intend on measuring and evaluating this question as one of our core questions moving forward.

Could you describe your current thinking on how you'd measure and evaluate this question?

I imagine more clarity on that question would be quite useful for evaluating and informing your organisation. And if you measured it in a way that meant the results could generalise to other organisations/efforts, or if you had a method of measuring it that others could adapt, I imagine this could be useful in a variety of other ways too. E.g., it could inform 80k's own approach, approaches of university and local groups, topic and attendee selection for EAGs, etc.

Replies from: sella
comment by sella · 2020-11-07T20:32:28.925Z · EA(p) · GW(p)

I agree this is an important question that would be of value to other organizations as well. We’ve already consulted with 80K, CE and AAC about it, but still feel this is an area we have a lot more work to do on. It isn’t explicitly pointed out in our open questions doc, but when we talk about measuring and evaluating our counterfactual benefits and harms, this question has been top of mind for us.

The short version of our current thinking separates short-term measurement from long-term measurement. We expect that longer term this kind of evaluation will be easier - since we’ll at least have career trajectories to evaluate. Counterfactual impact estimation is always challenging without an experimental setup, which is hard to do at scale, but I think 80K and OpenPhil have put out multiple surveys that try to extract estimates of counterfactual impact and do so reasonably well given the challenges, so we’ll probably do something similar. Also, at that point, we could compare our results to theirs, which could be a useful barometer. In the specific context of our effect on people taking existing priority paths, I think it’ll be interesting to compare the chosen career paths of people who discovered 80K through our website relative to those who discovered 80K from other sources.

Our larger area of focus at the moment is how to evaluate the effect of our work in the short term, when we can’t yet see our long-term effect on people’s careers. We plan on measuring proxies, such as changes to their values, beliefs and plans. We expect whatever proxy we use in the short term to be very noisy and based on a small sample size, so we plan on relying heavily on qualitative methods. This is one of the reasons we reached out to a lot of people who are experienced in this space (and we’re incredibly grateful they agreed to help) - we think their intuition is an invaluable proxy for figuring out if we’re heading in the right direction.

This is an area that we believe is important and we still have a lot of uncertainty about, so additional advice from people with significant experience in this domain would be highly appreciated.

Replies from: MichaelA
comment by MichaelA · 2020-11-08T05:05:09.337Z · EA(p) · GW(p)

Thanks, that all sounds reasonable :)

Here's a (potentially stupid) idea for a mini RCT-type evaluation of this that came to mind: You could perhaps choose some subset of applicants for advising calls, and then randomly assign half of those to go through your normal process and half to be simply referred to 80k. And 80k could perhaps do the same in the other direction. 

You could perhaps arrange for these referred people to definitely be spoken to (rather than not being accepted for advising or waiting for many months). And/or you could choose the subset for this random allocation to ensure the people are fairly good fits for either organisation's focus (rather than e.g. someone who'll very clearly focus on longtermism or someone who'll very clearly focus on global health & poverty). 

And then you could see whether the outcomes differ depending on which org the people were randomly assigned to speak to. Including seeing if the people assigned to speak to 80k were substantially more likely to then pursue their priority paths, and if so, whether they stuck with that, whether they liked it, and whether they seem to be doing well at it. 

I raise this as food for thought rather than as a worked-out plan. It's possible that anything remotely like this would be too complicated and time-consuming to be worthwhile. And even if something like this is worth doing, maybe various details would need to be added or changed.
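(If it helps make the idea concrete, the random allocation step itself is simple to implement. Here's a minimal Python sketch; the applicant names and the "PG"/"80k" arm labels are purely illustrative, not anyone's actual process.)

```python
import random

def assign_arms(applicants, seed=2020):
    """Randomly split a pre-screened applicant pool between two advising arms.

    Returns a dict mapping each applicant to an arm ("PG" or "80k" here,
    as illustrative stand-ins for the two orgs' advising processes).
    A fixed seed keeps the allocation reproducible/auditable.
    """
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return {**{a: "PG" for a in pool[:half]},
            **{a: "80k" for a in pool[half:]}}

arms = assign_arms(["applicant%d" % i for i in range(20)])
```

The pre-screening you describe (only randomizing people who are a fairly good fit for either org) would happen before this step, which also helps with the comparability of the two arms.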

Replies from: Davidmanheim
comment by Davidmanheim · 2020-11-15T07:17:52.562Z · EA(p) · GW(p)

I like this, but have a few concerns. First, you need to pick good outcome metrics, and most are high-variance and not very informative / objective. I also think the hoped-for outcomes are different, since 80k wants a few people to pick high-priority career paths, and Probably Good wants slight marginal improvements along potentially non-ideal career paths. And lastly, you can't reliably randomize, since many people who might talk to Probably Good will be looking at 80k as well. Given all of that, I worry that even if you pick something useful to measure, the power / sample size needed, given individual variance, would be very large.

Still, I'd be happy to help Sella / Omer work through this and set it up, since I suspect they will get more applicants than they will be able to handle, and randomizing seems like a reasonable choice - and almost any type of otherwise useful follow-up survey can be used in this way once they are willing to randomize.
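(To put rough numbers on the power worry: the standard normal-approximation sample-size formula for comparing two proportions gives a sense of scale. The sketch below is illustrative only; the 30% vs. 40% effect size is an assumed example, not a measured figure.)

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Participants needed per arm to detect a difference between two
    proportions (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a shift from 30% to 40% in some binary outcome (say, "pursued
# a priority path") takes roughly 350 people per arm; a 30% -> 50% shift
# takes under 100.
print(n_per_arm(0.30, 0.40), n_per_arm(0.30, 0.50))
```

So unless the effect of which org someone talks to is quite large, the required sample sizes do look daunting for orgs at this scale.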


comment by Kirsten (Khorton) · 2020-11-09T13:26:17.816Z · EA(p) · GW(p)

Very excited about this! I'd especially be interested in seeing career paths that could start making a significant difference in people's lives this century, as that's something 80k is moving away from - I'd be interested in advice about neglected areas within global health and climate change, for example.

The way I imagine this going wrong most easily is you getting overwhelmed with requests for career coaching in areas you don't know very much about. I hope that you'll set clear expectations about what you will/won't be able to provide with your career coaching, how many people you'll be able to coach, and how you'll choose those people.

comment by omernevo · 2020-11-09T14:52:53.852Z · EA(p) · GW(p)

Thank you for writing what you'd find most valuable! This lines up well with my thoughts...

Regarding being overwhelmed by requests for advice: 
Yes! That's definitely a failure mode. We've discussed how much direct advice we can give (very little in the near future, potentially more later, though that's quite a bit of work to get to) and how to choose candidates (where we have a lot of thoughts but, as with other things, we expect to decide on criteria and then have to fix them once we see where they fail).

I'm cautiously optimistic that we just don't have enough time to fall into this failure mode and so we'll stop ourselves before this becomes an issue :-)

Replies from: Khorton
comment by Kirsten (Khorton) · 2020-11-11T18:51:18.455Z · EA(p) · GW(p)

I can imagine you stopping yourself from doing too much coaching, but the people who apply for coaching don't know what happened or why you didn't get in touch. Does that make sense?

Something as simple as having an automatic reply to email enquiries saying "unfortunately we can't respond to every request for coaching" could be helpful.

comment by omernevo · 2020-11-12T09:47:34.341Z · EA(p) · GW(p)

Yes, that makes perfect sense. I think we definitely need to have a system that (1) lets people know if they're not going to get coaching even though they asked and (2) doesn't take up a lot of our time.

comment by MichaelA · 2020-11-07T08:26:13.809Z · EA(p) · GW(p)

There's one potential risk that occurs to me and that I think wasn't addressed in the linked docs: A career org that (1) was very broad in its focus, and/or very accepting of different views, but (2) still funnelled people into EA, could potentially erode some of the focus or good distinctive elements of the EA community as a whole in a way that reduces our impact. By being relatively broad, Probably Good might risk causing some degree of that sort of "erosion".

(Note that I'm not saying this is likely, or that it would outweigh the positive impacts of Probably Good - just raising it as something worth thinking about.)

To illustrate by taking it to an extreme, if 9 out of 10 people one met in the EA community (e.g., at EA events, in Forum discussions) were more like the average current non-EA than the average current EA, it would be a lot less obvious why there's an EA community at all, and probably more likely that the community would just dissolve into the broader world, or that more distinctive sub-communities would splinter off.[1] It would also be harder and less motivating to coordinate, find ideas relevant to my interests and plans, get useful feedback from the community, etc. And engaging with the EA community might be less engaging for the sort of potential new members we'd be most excited to have (e.g., people who are thoughtful, open-minded, and passionate about impartial altruism).

This is partly because EAs on average seem to have relatively high levels of some good traits (e.g., desire to have an impact, thoughtfulness, intelligence), and to that extent this is somewhat uncomfortable and smacks of elitism. But it's also partly just because in general communities may coordinate and hang together better and longer if it's more clear what their purpose/focus is. (E.g., I think the current members of a randomly chosen hobby club would enjoy that club less if it got an influx of new members who were much less keen on that hobby than the current members.)

I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we'd get lots of new EAs who: 

  • chose these areas for relatively random reasons, and/or
  • aren't very thoughtful about the approach they're taking within the cause area
    • E.g., they decided to address the problem through a particular type of job before learning more about the real nature of the problem, and then don't re-evaluate that decision or listen to feedback on that, and just want advice on precisely how to approach that job or what org to do it at

Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that's not that huge but includes some areas that are probably much less pressing (in expectation). (To be clear, I think there are benefits to EA already representing a variety of cause areas, and I like that about the community - but I think there could be more extreme or less thoughtful versions of that where the downsides would outweigh the benefits.)

I'd be interested to hear whether you think that risk is plausible for an initiative roughly like yours in general, and for your org in particular, and whether you have thoughts on how you might deal with it.

(It seems plausible that there are various ways you can mitigate this risk, or reasons why your current plan might already mostly avoid this risk.)

[1] I think my thinking/phrasing here might be informed by parts of this SSC post, as I read that recently. That said, I can't recall if that post as a whole supports my points.

comment by omernevo · 2020-11-07T10:22:31.239Z · EA(p) · GW(p)

For the sake of clarity I’ll restate what I think you meant:

We’re not discussing the risk of people taking less impactful career paths than they would have taken counterfactually because we existed (and otherwise they might have only known 80k for example). That is a risk we discuss in the document.

We’re talking specifically about “membership” in the EA community: that people who are less committed / value-aligned / thoughtful in the way that EAs tend to be (or something else) would now join the community and dilute or erode the things we think are special (and really good) about our community.

Assuming this is what you meant, I'll write my general thoughts on it:

1. The extent to which this is a risk is very dependent on the strength of the two appearances of "very" in your sentence "A career org that (1) was very broad in its focus, and/or very accepting of different views". While we’re still working out what the borders of our acceptance are (as I think Sella commented in response to your question on agnosticism), we’re not broadening our values or expectations to areas that are well outside the EA community. I don’t currently see a situation where we give advice or a recommendation that isn’t in line with the community in general. It’s worth noting that the scope and level of generality that the EA community engages with in most other interactions (EA Global, charity evaluation orgs, incubation programs, etc.) is much broader than 80K’s current focus. We see our work as matching that broader scope rather than expanding it, and so we don’t believe we’re changing where EA stands on this spectrum - simply applying it to the career space as well.

2. More importantly, even in cases where we could make a recommendation that (for example) 80k wouldn’t stand behind - our methodology, values, rigor in analysis, etc. should definitely be in line with what currently exists, and is expected, in the community. I can’t promise we won’t reach different conclusions sometimes, but I won’t be “accepting” of people who reach those conclusions in shoddy ways.

3. This is a relatively general point, but it’s important and it mitigates a lot of our risks: In the next few months, we’re not planning to grow, do extensive reaching out, market or try to bring a lot of new people in. That’s explicitly because we want to create content and start working, do our best to evaluate the risks (with the help of the community) - and only start having a large impact once we’re more confident in the strength and direction of that impact.

In a sense (unless we fail pretty badly at evaluating in a few months), we’re risking the very small harm a small, unknown org could do, while potentially gaining benefits that could be quite large if we find that our impact looks good.

Replies from: MichaelA, MichaelA
comment by MichaelA · 2020-11-07T12:18:43.030Z · EA(p) · GW(p)

The extent to which this is a risk is very dependent on the strength of the two appearances "very" in your sentence "A career org that (1) was very broad in its focus, and/or very accepting of different views". [...] we’re not broadening our values or expectations to areas that are well outside the EA community.

I think this does basically remove the following potential worry I pointed to:

Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that's not that huge but includes some areas that are probably much less pressing (in expectation).

But it's not clear to me that it removes this worry I pointed to: 

I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we'd get lots of new EAs who: 

  • chose these areas for relatively random reasons, and/or
  • aren't very thoughtful about the approach they're taking within the cause area

You do also say "I won’t be “accepting” of people who reach those conclusions in shoddy ways." But this seems at least somewhat in tension with what seem to be some key parts of the vision for the organisation. E.g., the Impact doc says:

We think it’s very likely that some people might be a good fit for top priority paths but not immediately. This may be because they aren’t ready yet to accept some aspects of EA (e.g. don’t fully accept cause neutrality but are attached to high impact cause areas such as climate change) [...] We think giving them options to start with easier career changes or easier ways to use their career for good may, over time, give them a chance to consider even higher impact changes.

Some people who aren't ready yet to accept aspects of EA like cause-neutrality might indeed become ready for that later, and that does seem like a benefit of this approach. But some might just continue to not accept those aspects of EA. So if part of your value proposition is specifically that you can appeal to people who are not currently cause-neutral, it seems like that poses a risk of reducing the proportion of EAs as a whole who are cause-neutral (and same maybe goes for some other traits, like willingness to reconsider one's current job rather than cause area).

To be clear, I think climate change is an important area, and supporting people to be more impactful in that area seems valuable. But here you're talking about someone who essentially "happens to" be focused on climate change, and doesn't accept cause neutrality. If everyone in EA was replaced by someone identical to them in most ways, including being focused on the same area, except that they were drawn to that area somewhat randomly rather than through careful thought, I think that'd be a less good community. (I still think such people can of course be impactful, dedicated, good people - I'm just talking about averages and movement strategy, and not meaning to pass personal judgement.)

Do you have thoughts on how to resolve the tension between wanting to bring in people who aren't (yet) cause-neutral (or willing to reconsider their job/field/whatever) and avoiding partially "eroding" good aspects of EA? (It could be reasonable to just say you'll experiment at a small scale and reassess after that, or that you think that risk is justified by the various benefits, or something.)

comment by omernevo · 2020-11-07T20:34:04.078Z · EA(p) · GW(p)

This is something we discussed at length and are still thinking about.

As you write in the end, the usual “I’ll experiment and see” is true, but we have some more specific thoughts as well:

  • I think there’s a meaningful difference between someone who uses “shoddy” methodology and someone who’s thoughtfully trying to figure out the best course of action and has either not got there yet or still hasn’t overcome some bad priors or biases. While I’m sure there are some edge cases, I think most cases aren’t on the edge.
  • I think most of our decisions are easier in practice than in theory. The content we’ll write will be (to the best of our ability) good content that will showcase how we (and the EA community) believe these issues should be considered. 1:1s or workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don’t expect to keep up with demand any time soon, I doubt we’ll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe similar thought processes and discuss similar scopes to the broader EA community (albeit not 80K). As a result, I think it will be comparable to other gateways to the community that exist today.
  • The above point makes the practical considerations of the near future simpler. It doesn’t mean that we don’t have a lot to think, talk through and figure out regarding what we mean by ‘Agnostic EA’. That’s something that we haven’t stopped discussing since the idea for this came up and I don’t think we’ll stop any time soon.
comment by MichaelA · 2020-11-07T12:00:22.218Z · EA(p) · GW(p)

Thanks for that response! I think you make good points.

Assuming this is what you meant[...]

Yes, I think you've captured what I was trying to say. 

I should perhaps clarify that I didn't mean to imply that this risk was very likely from your particular org, or that the existence of this risk means you shouldn't try this. I agree in particular that your point 3 is important and mitigates a lot of your risks, and that there's high information value and low risk from trying this out without yet doing extensive marketing etc. 

I was essentially just wondering whether you'd thought about that risk and how you planned to deal with it :)

comment by Mark Xu · 2020-11-07T02:30:42.805Z · EA(p) · GW(p)

I'm excited about more efficient matching between people who want career advice and people who are not-maximally-qualified to give it, but can still help nonetheless. For example, when planning my career, I often find it helpful to talk to other students making similar decisions, even though they're not "more qualified" than me. I suspect that other students/people feel similarly and one doesn't need to be a career coach to be helpful.

comment by omernevo · 2020-11-07T10:14:13.038Z · EA(p) · GW(p)

That's really interesting! There are probably quite a few different formats for this sort of thing (one-on-ones with people facing the same dilemmas, or with people who have faced them recently, bringing together groups of people in similar situations, etc.)

I think some local groups are doing things like this, but it's definitely something we should think about as an option that can potentially be relatively low effort and (hopefully) high impact.

Replies from: konrad
comment by konrad · 2020-11-11T15:04:31.891Z · EA(p) · GW(p)

As a data point:

We have organized several "collective ABZ planning sessions" in Geneva that hinge on peer feedback, given in a setting I would call a light version of CFAR's Hamming circles.

This has worked rather well so far and with the efficient pre-selection of the participants can probably scale quite well. We tried to do so at the Student Summit and it seemed to have been useful to 100+ participants, even though we didn't get to collect detailed feedback in the short time frame. 

Already providing the Schelling point for people to meet, pre-selecting participants & improving the format  seems potentially quite valuable.

comment by omernevo · 2020-11-12T09:48:37.423Z · EA(p) · GW(p)

That sounds great! Thank you for sharing this.

If that's ok, I might get in touch soon with some questions about this...

Replies from: konrad
comment by konrad · 2020-11-16T18:56:18.230Z · EA(p) · GW(p)

Yes, happily!

comment by Manuel_Allgaier · 2020-11-11T16:10:29.136Z · EA(p) · GW(p)

The case for limiting scope to certain cause areas, fields and/or locations

> What cause areas and career paths do we want to focus on? Do we want to start with specific fields and slowly grow, or do we want to provide shallow introductions more broadly and slowly deepen our content? (from your open questions)

I have supported some 30 people with their career planning, and in my experience, good career advice is both really valuable and quite difficult to give. High impact career paths are complex, difficult to evaluate and change often. If you try to cover all cause areas globally, you might not be able to give good advice, so I would argue for narrowing down the scope now already. 

For instance, 80,000 Hours has narrowed its scope to its "priority cause areas" (longtermist causes) and effectively also to jobs in the UK & the US, partly as some of the best opportunities might be in those countries and partly because they know these countries best. Also, they partner with experts in the various cause areas to ensure accurate content. 

Possible ways to narrow scope: 
- location: focus on general career coaching for people in Israel who are not yet set on a certain cause area
- cause area (such as Animal Advocacy Careers)
- field (e.g. careers in politics & policy such as HIPE)

I'd also consider what fields you know well personally. 

If you receive good feedback on that and have the capacity, you could still expand to more cause areas and locations, but it seems easier to grow this way around rather than start broadly and then narrow down. 

Generally, I do think that good career advice is one of the main bottlenecks of the EA community (and probably also altruistic people in general), and I'm excited to see what might come from this! 

Replies from: sella
comment by sella · 2020-11-13T09:56:33.622Z · EA(p) · GW(p)

Hi Manuel, thanks for this comment. I think I agree with all your considerations listed here. I want to share some thoughts about this, but as you’ve mentioned - this is one of our open questions and so I don’t feel confident about either direction here.

First, we have indeed been giving general career coaching for people in Israel for several years now, so in a sense we are implementing your recommended path and are now moving onto the next phase of that plan. That being said, there still remain reasons to continue to narrow our scope even at this stage.

Second, you mention partnering with experts in the various cause areas to ensure accurate content - I completely agree with this, and wouldn’t dream of providing concrete career advice independently in fields I don’t have experience in. In the content we are writing right now, we require interviewing at least 7 experts in the field to provide high-confidence advice, and at least 3 experts in the field even for articles we mark as low confidence (which we warn people to be careful about). So it’s really important to me to clarify that none of the concrete career-specific advice we provide will be based exclusively on our own opinions or knowledge - even within the fields we do have experience in.

Finally, I think at least some of the issues you’ve (justifiably) raised are mitigated by the way we aim to provide this advice. As opposed to existing materials, which more confidently aim to provide answers to career-related questions, we have a larger emphasis on providing the tools for making that decision depending on your context. As community organizers, one of the things that pushed us to start this effort is the feeling that many people, who don’t happen to be from the (very few) countries that EA orgs focus on, have very little guidance and resources, while more and more is invested in optimizing the careers of those within those countries. We believe that doing highly focused work on Israel would not serve the community as well as providing guidance on what needs to be explored and figured out to apply EA career advice to your own context. As such, we want to provide recommendations on how to check for opportunities within the scope that’s relevant to you (e.g. country or skillset), rather than aiming to provide all the answers as final conclusions on our website. This applies most to our career guide, but also to specific career path profiles - where we want to first provide the main considerations one should look into, so that we provide valuable preliminary guidance for a wide range of people, rather than end-to-end analysis for fewer people.

The mitigations described above can be much better evaluated once we have some materials online, which will allow others to judge their implementation (and not only our aspirations). We plan on soliciting feedback from the community before we begin advocating for them in any meaningful way - hopefully that will help make these responses less abstract and still leave us time to collect feedback, consider it and try to optimize our scope and messaging.

comment by alexrjl · 2020-11-06T21:55:39.323Z · EA(p) · GW(p)

Very excited to see what comes of this!

comment by David Glidden (dglid) · 2020-11-06T23:28:46.222Z · EA(p) · GW(p)

The thing that caught my eye is the one-on-one career advice. I suppose I'm probably not the only one who submitted an application for a one-on-one session with 80,000 Hours a long time ago only to not yet have been contacted. It's understandable that, given their popularity, it might take them a long time to get to me, or that I don't meet their criteria for high fit in one of their topic areas. So your extension of this service would be appreciated, especially for someone like me who is new to the EA space but eager to get more involved.

An analogy (U.S.-based, forgive me) might be going to the doctor and getting seen by a PA (physician's assistant) instead of the actual doctor, which is often just as good if not better because the PA is more relatable, has more time to spend with you, and usually more recently finished schooling so are more likely to know the most up-to-date approaches.

Replies from: sella
comment by sella · 2020-11-07T10:15:24.049Z · EA(p) · GW(p)

Hi dglid, I agree with your comment. I think there is a lot of value in making career guidance more available to the masses, even without 80K personally being involved.

I see local groups as being the primary type of organization responsible for this type of work - making EA information accessible and personalized for new people and communities. We don’t see ourselves taking over that role. That being said, we are interested in being involved in the process. We know there’s a lot of interest in creating content / tools / support in the career guidance space, both because we’ve seen it in EA Globals and group organizers’ groups, and also because we are group organizers ourselves, and it’s this need that has set us on this path (originally in our own local group).

All of this is to say - I think working with and empowering local EA groups to provide these services is a great way to improve careers at scale, and would especially love any feedback, requests and comments from local group organizers or anyone else on what you believe would be most helpful to you in this area.

comment by ben.smith · 2020-11-08T21:56:10.425Z · EA(p) · GW(p)

Sounds like a great attempt to fill a very salient gap! We will be discussing your project at the EA Auckland meetup tomorrow night (Tuesday 6.30pm utc+13). Let me know if you have any interest in zooming into chat.

comment by omernevo · 2020-11-09T13:46:09.723Z · EA(p) · GW(p)

That sounds really cool!

I'll be happy to join! :-)

Replies from: ben.smith
comment by ben.smith · 2020-11-09T21:56:26.586Z · EA(p) · GW(p)

Awesome, we'll love to have you! I'll message you directly with a couple of details.

comment by MichaelA · 2020-11-07T06:51:12.075Z · EA(p) · GW(p)

This seems really interesting. I really like how clear, concise, and well-structured the linked docs are. And I like how thoughtfully you're approaching this and the potential risks involved, and how you're actively seeking feedback both from some key individuals and from the community at large.

I'll split my various questions and bits of feedback into separate comments, to help responses and discussion remain organised/easy-to-follow :)

comment by omernevo · 2020-11-07T10:18:19.958Z · EA(p) · GW(p)

Thank you! Both for the thoughts and for the separation into different comments. It is much easier to keep track of everything and is appreciated :-)

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-11-07T00:07:19.540Z · EA(p) · GW(p)

Really excited that you are doing this! 

Meta-level comment: Many of the shared Google docs were not commentable - would it make sense to make them comment-friendly? I personally would find it easier to leave feedback that way.

Replies from: MichaelA
comment by MichaelA · 2020-11-07T06:41:34.221Z · EA(p) · GW(p)

I had a similar thought. Though a counterpoint is that feedback given here can be voted on and can be extensively discussed by others more easily (Google doc comment threads can get unwieldy). But that's only really relevant for substantive feedback, rather than e.g. typos.

If Omer and Sella want readers to see clean versions of the docs without comments/suggestions, they could make copies that do allow comments/suggestions, and then link to those copies from the top of the clean versions that people are initially directed to.

Replies from: sella
comment by sella · 2020-11-07T10:02:31.313Z · EA(p) · GW(p)

That’s actually a great idea. I’ve now added a link from each clean doc to a commentable version. Feel free to either comment here, email us, or comment on the commentable version of the doc. Thanks!

Replies from: vaidehi_agarwalla
comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-11-07T14:06:24.485Z · EA(p) · GW(p)

Yay thanks! Looking forward to engaging on the topics :)

comment by Jack Malde (jackmalde) · 2020-11-08T22:18:16.419Z · EA(p) · GW(p)

To what extent do you expect to 'accept' people's preferred cause areas versus introduce people to ideas that may help them make the most informed decision on what cause area they should focus on?

For example, if someone comes to you and says "I want to work on global health", will you say "that's great, here's our advice on that cause area", or might you say "that's great, although just to check, have you engaged with the EA literature on cause areas and understand why some people don't prioritise global health, e.g. due to cluelessness [EA · GW] on the expected impacts of interventions in global health"? I chose global health here as an example, but this can obviously apply to all cause areas. To clarify, I'm not certain which of accepting vs educating is the best approach.

Similarly how will you deal with people who don't really have a clue what cause area they are most interested in?

Replies from: sella
comment by sella · 2020-11-09T22:06:27.958Z · EA(p) · GW(p)

Hi Jack, thanks for the great question. 

In general, I don’t think there’s one best approach. Where we want to be on the education / acceptance trade-off depends on the circumstances. It might be easiest to go over examples (including the ones you gave) and give my thoughts on how they’re different.

First, I think the simplest case is the one you ended with. If someone doesn’t know what cause area they’re interested in and wants our help with cause prioritization, I think there aren’t many tradeoffs here - we’d strongly recommend relevant materials to allow them to make intelligent decisions on how to maximize their impact. 

Second, I want to refer to cases where someone is interested in cause areas that don’t seem plausibly compatible with EA, broadly defined. In this case we believe in tending towards the “educate” side of the spectrum (as you call it), though in our writing we still aim not to make it a prerequisite for engaging with our recommendations and advice. That being said, these nuances may be irrelevant in the short-term future (at least months, possibly more), as due to prioritization of content, we probably won’t have any content for cause areas that are not firmly within EA.

In the case where the deliberation is between EA cause areas (as is the case in your example) there are some nuances that will probably be more evident in our content even from day one (though may change over time). Our recommended process for choosing a career will involve engaging with important cause prioritization questions, including who deserves moral concern (e.g. those far from us geographically, non-human animals, and those in the long term future). Within more specific content, e.g. specific career path profiles, we intend to refer to these considerations but not try to force people to engage with them. If I take your global health example, in a career path profile about development economics we would highlight that one of the disadvantages of this path is that it is mainly promising from a near-term perspective and unclear from a long-term perspective, with links to relevant materials. That being said, someone who has decided they’re interested in global health, doesn’t follow our recommended process for choosing a career, and navigates directly to global health-related careers will primarily be reading content related to this cause area (and not material on whether this is the top cause area). Our approach to 1:1 consultation is similar - our top recommendation is for people to engage with relevant materials, but we are willing to assist people with more narrow questions if this is what they’re interested in (though, much like the non-EA case, we expect demand to exceed our capacity in the foreseeable future, and may in practice prioritize those who are pursuing all avenues to increasing their impact).

Hope this provides at least some clarity, and let me know if you have other questions.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2020-11-10T12:59:21.901Z · EA(p) · GW(p)

Thanks that all makes sense and I agree that a one size fits all approach is unlikely to be appropriate.

comment by mawa · 2020-11-08T21:11:30.235Z · EA(p) · GW(p)

We’re particularly interested in hearing about things that you, personally, would actually read // use // engage with

I would personally be excited about a filtering tool similar to the 80000 Hours job-board, that lets you filter resources for background, cause area, role type, etc. (E.g. "If your background is in economics, and you are particularly interested in animal welfare, we would recommend the following resources" )


I would distinguish the different concreteness levels of career advice/career-relevant information, maybe like this:

1. General, can be applied at almost any level and job, e.g. career capital, self-care, cause-prioritization,...

2. Role-specific, e.g. specific advice for entrepreneurship, PhD, ...

3. Concrete, e.g. answering the questions "What are high-impact options for someone with a background in X?" "What are possible caveats in taking up job Y?", ...

The more concrete the information, the more it depends on the specific situation of the person that benefits from it - and is therefore harder to provide for a large number of people.

In my experience, however, surprisingly often there was relatively concrete information available where someone addressed just the issue I was currently thinking about. So I think there is probably a bottleneck in actually finding this concrete information. I think having a filter for resources may help with that.


Possible Downsides:

* It may be too much work to implement this, especially the classification of articles regarding their usefulness for specific situations

* It might give the reader the impression that they have read everything relevant for their situation, and thereby reduce exploring other content that would have been useful.
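The tag-based filtering idea above could be sketched roughly as follows. This is only a minimal illustration of the concept, not an actual Probably Good design; the `Resource` and `filter_resources` names, the tag categories, and the sample articles are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A career-guidance article tagged for filtering (hypothetical schema)."""
    title: str
    backgrounds: set = field(default_factory=set)   # e.g. {"economics"}
    cause_areas: set = field(default_factory=set)   # e.g. {"animal welfare"}
    role_types: set = field(default_factory=set)    # e.g. {"research"}

def filter_resources(resources, background=None, cause_area=None, role_type=None):
    """Return only the resources matching every filter the reader supplies.

    A filter left as None is treated as "no constraint", so readers can
    narrow by any combination of background, cause area, and role type.
    """
    def matches(r):
        return ((background is None or background in r.backgrounds) and
                (cause_area is None or cause_area in r.cause_areas) and
                (role_type is None or role_type in r.role_types))
    return [r for r in resources if matches(r)]

# Hypothetical catalog of tagged articles
catalog = [
    Resource("Economics careers in animal advocacy",
             backgrounds={"economics"}, cause_areas={"animal welfare"},
             role_types={"research"}),
    Resource("Software roles in global health",
             backgrounds={"software"}, cause_areas={"global health"},
             role_types={"engineering"}),
]

# "If your background is in economics and you're interested in animal welfare..."
hits = filter_resources(catalog, background="economics",
                        cause_area="animal welfare")
print([r.title for r in hits])  # → ['Economics careers in animal advocacy']
```

The classification effort mentioned in the first downside lives entirely in the tags on each article; the filtering logic itself is trivial once that tagging exists.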

comment by omernevo · 2020-11-09T10:48:45.212Z · EA(p) · GW(p)

Thank you!
This viewpoint is really helpful. It seems relatively easy to look at a specific article and figure out who it might be useful for, but creating a generic way to organize articles that would work for most people is quite a bit harder.

And I agree that concreteness is definitely something we should be explicitly thinking about when creating content and organizing it.

And I agree regarding both downsides / risks. They're definitely something to think about. The first may mean that this feature comes later if we don't find a relatively simple way of doing it.

The second can probably be mitigated to a large extent with some effort, but requires more thinking in any case. We've discussed this in related contexts (making sure we don't counterfactually cause readers not to engage with other existing quality content), but not in this context.

comment by MichaelA · 2020-11-07T07:37:33.624Z · EA(p) · GW(p)

In your Principles doc, you write about "Agnosticism", and say:

As a result, we expect to provide recommendations within multiple cause areas, such as global health, animal welfare, existential risk, institutional decision making, mental health, climate change, broad societal improvement and more.

Could you say a bit about how broad a set of cause areas you expect to cover/mention, and how you'll make those decisions? E.g., might you also discuss things like psychedelics (arguably a subset of mental health),  anti-aging, and macroeconomic policy? (I don't intend to push those areas - I focus on existential risks myself - I'm just using them as examples.)

I guess I'm asking something very similar to one of your "Open Questions". ("What cause areas and career paths do we want to focus on? Do we want to start with specific fields and slowly grow, or do we want to provide shallow introductions more broadly and slowly deepen our content?") So I'm just after your current guesses, key considerations, or thought process, rather than expecting crisp, final answers :)

Replies from: sella
comment by sella · 2020-11-07T10:13:46.116Z · EA(p) · GW(p)

Hi Michael, as you mention - the issue of accurately defining our scope is still an important open question to us. I’m happy to share our current thinking about this, but we expect this thinking to evolve as we collect feedback and gain some more hands-on experience.

I think it’s worth making a distinction between two versions of this question. The first is the longer-term question of what is the set of all cause areas that should be within scope for this work. That’s a difficult question. At the moment, we’re happy to use the diversity of views meaningfully held in the EA community as a reasonable proxy - i.e. if there’s a non-negligible portion of EAs that believe a certain cause area is promising we think that’s worth investigating. As such, all three of the examples you mention would be potentially in-scope in my view. This is not, in and of itself, a cohesive and well-defined scope, and as I mentioned, it is likely to change. But I hope this gives at least an idea of the type of scope we’re thinking of.

The second version of this question is what we actually intend to work on in the upcoming months, given that we are just getting started and we are still constrained in time and resources. This question will dominate our actual decisions for the foreseeable future. Within the large scope mentioned above, we want to initially focus on areas based on two criteria: First, unmet needs within the EA community, and second, cause areas that are easier to evaluate. Both of these are very weak signals for where we want to focus long-term, but drastically influence how quickly we can experiment, evaluate whether we can provide significant value, and start answering some of our open questions. As a concrete example, we believe Global Health & Development fits this bill quite well, and so at least some of our first career path profiles will be in this space.

I hope this helps clarify some of these questions. I apologize if there are more open questions here than answers - it’s just really important to us to experiment first and make long-term decisions about priorities and scope afterwards rather than the other way around.

Replies from: MichaelA
comment by MichaelA · 2020-11-07T12:06:23.542Z · EA(p) · GW(p)

Thanks, that all sounds reasonable to me. 

I apologize if there are more open questions here than answers - it’s just really important to us to experiment first and make long-term decisions about priorities and scope afterwards rather than the other way around.

Yeah, that totally makes sense. And no need to apologise! I think sharing your current thinking at this stage seems like a really good move, and that necessarily means having lots of remaining uncertainties (indeed, that's part of why it's a good move). So I wouldn't at all want to disincentivise that by demanding that someone has all the details figured out when they first post on the Forum about a project :)

comment by thecommexokid · 2020-11-17T04:09:52.144Z · EA(p) · GW(p)

The advice I would most want that I haven’t gotten from 80000 Hours’ existing literature is a general strategy for pursuing an impactful career subject to the constraint of not leaving the city I live in. Maybe there is no useful generic advice to give on that question and it entirely depends on which city. But if that’s not the case, then maybe your experience giving career advice within Israel could lead to better guidance than 80K, who seem to have an implicit assumption that of course you’d be willing to move to SF or London if you want an impactful career.

comment by MichaelA · 2020-11-07T07:24:29.515Z · EA(p) · GW(p)

In the Impact doc, you write:

those interested in pursuing careers in Global Poverty & Health, broader Animal Welfare work, Climate Change, Mental Health, Scientific Research, and other potentially effective cause areas currently cannot access guidance or even content aimed at them.

This seems to me like an overstatement? 

These people could get guidance from e.g. local EA groups, talking to people from EA orgs focused on these areas (e.g., at online EA conferences), or finding other EAs or non-EAs who work in these areas and talking to them (e.g., here). 

And there's a wealth of content on each of these areas (e.g., the vast academic literature on climate change + several EA reports on the topic), just not necessarily focused on careers. But I think there's even some careers-focused content for these areas - e.g. many of 80k's older problem profiles and career reviews, the careers-focused parts of some 80k podcast episodes (e.g., on climate change), this admittedly very short page from HLI, or various posts on the Forum.

But I'd agree with the following version of that claim:

There is some careers-focused guidance and content available to those interested in pursuing careers in Global Poverty & Health, broader Animal Welfare work, Climate Change, Mental Health, Scientific Research, and other potentially effective cause areas. However, there's relatively little of this guidance and content, and that which exists tends to be relatively shallow, as there's no organisation focused on providing it.

Replies from: vaidehi_agarwalla
comment by omernevo · 2020-11-07T10:19:22.278Z · EA(p) · GW(p)


The intended meaning was that EA materials directed at this need specifically don’t exist. But I think you’re correct and that this wasn’t clear. I also like your version better, so will be updating the doc accordingly. Thank you!

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-11-08T06:10:36.659Z · EA(p) · GW(p)

I agree that the statement needs rewording and agree with your re-write for factual correctness. However, I think the case for this is really strong, and many of the reasons for this aren't captured in the original document/what you've said above: 

  • The lack of careers advice content is a very big and important gap in the existing resources - based on my own research amongst group organisers & interviews with members of the EA community, it's very rare to find 1) good career advice in general, and  2) apples to apples/consistent comparisons of different jobs, causes etc.
    • It's really hard for individuals to do this research, making it less likely for individuals to do this themselves. It seems really valuable to give people at least a starting point to make it easier to do more research & be more knowledgeable (I think detailed practical knowledge is especially important, and is what is missing currently, where existing content is not targeted at people seriously considering these paths)
    • Although it's feasible to get guidance from other sources as you mention, it requires a lot of (collective) community resources, and could be harder for the person to parse through the information. I don't think this is a very efficient use of those resources.
  • Even though some topics have a lot of content, I think "translating" non-EA to EA content is very valuable. It can be difficult for people to know how to parse outside resources, or where to begin, or have the time to do this. This is especially important for getting people to consider areas outside their current expertise.
    • For example, it's hard to know, despite the vast literature on climate change, what could be some of the effective things to do within that cause area, or how to compare climate change with scientific progress if you don't have a background knowledge of one or the other. 
  • In general, having a well-organized and easy-to-navigate set of resources that follow a consistent research approach or style (like 80K's articles) will save a lot of collective time and be a valuable resource for movement builders.
Replies from: MichaelA
comment by MichaelA · 2020-11-08T07:00:58.800Z · EA(p) · GW(p)

Yeah, that all makes sense to me. Thanks for adding those points :)

To be clear, I only meant to highlight that some relevant resources exist, and thus that the particular phrasing I quoted was inaccurate.  I definitely didn't mean to suggest that the existing resources are sufficient and leave no room for additional valuable work to be done here. 

And a lot of what you say aligns well with my sense that summaries and collections [EA · GW] can often provide a lot of value, as can analyses that draw on existing work in order to answer questions that that work didn't explicitly address (e.g., drawing on literature on climate change to discuss concrete career pathways in which EAs might have a lot of counterfactual impact).

Perhaps some of what you say could be captured by tweaking what I said above to instead say something like:

There is some careers-focused guidance and content available to those interested in pursuing careers in Global Poverty & Health, broader Animal Welfare work, Climate Change, Mental Health, Scientific Research, and other potentially effective cause areas. However, there's relatively little of this guidance and content, and that which exists tends to be relatively shallow, scattered across many places, and inconsistent in structure and approach, as there's no organisation focused on providing it. (emphasis added just to highlight what's different here)

comment by MichaelA · 2020-11-07T07:41:44.251Z · EA(p) · GW(p)

In your Short-term Plans doc, you write:

There are two types of content that we have started producing and plan to continue in the short term: First, a new guide\introduction to effective careers that is more agnostic than the ones that currently exist. We think this might be useful as an introduction to new and prospective members of the community as some of them might not yet be fully aligned with many of the values in the community. Though this guide will obviously refer to existing introductions (most notably 80,000 Hours’ Key Ideas), the guide will explicitly attempt to be relevant to a wider range of people. This guide will then refer people to articles or relevant organizations according to their preferences (or the ideas they are willing to explore).

At first glance, this sounds perhaps fairly similar to 80k's old career guide (at least based on my memory of that from 2018). Do you expect your guide/introduction to differ from 80k's career guide in substantial ways (and, if so, how)? Or is there a different reason for producing your own one? (E.g., to speculate, perhaps 80k prefers you not refer people to that guide, as it no longer reflects their most up-to-date views and could thus confuse people?)

Replies from:
comment by omernevo · 2020-11-07T10:17:30.324Z · EA(p) · GW(p)

The guide we're working on is indeed similar in some aspects to 80k's old guide.

We're still working on it (and are at relatively early stages) so none of this is very certain but I expect that:

* The guide will differ in our framework for thinking about it (so things like the thought process and steps you go through to make a decision).

* I expect the guide will differ on some specific areas where we are more agnostic than 80k, but won't differ on most.

* Specifically, 80k have updated their 2017 guide to focus on longtermism more than it originally did. That would be a specific area where we will differ.

* Quite sure we pretty much agree on a lot of the meta-considerations (things like "it's a very important decision and is worth a lot of consideration" or "for most people, making an effort to expand the scope of the search is worthwhile").

Regarding why we're doing this: Even if this was 80k's current guide, I'd think a second general viewpoint on how to approach career decisions would be valuable to a lot of people. Given that 80k consider this their "older" guide, I really think it would be helpful to have another one.

Also (and probably more importantly), a general guide is just a really useful way to put the most important general information that we think is necessary in one place. There's a lot of things we think are important that fit very well into this format.

comment by SoniaAlbrecht · 2021-01-31T01:43:43.645Z · EA(p) · GW(p)

Thank you so much for doing this! I really need your help. I'm a junior electrical engineering major at University of California Davis, and I've always assumed my planned career in computer hardware design would do plenty of good because I read in my economics textbook that technology is the biggest reason for increases in living standards, and there are lots of specific examples involving computer hardware. However, I read an article on this forum that said computer hardware is probably advancing too fast already on average because it's contributing to AI advancing faster than safeguards to keep it manageable. I'd like to make sure I'm taking the right electives and internships to make me a good candidate for jobs that don't increase AI existential risk. If the job market isn't any worse, I'd also like to consider something that's more specifically beneficial, such as something to reduce climate change. I'm planning to get a master's degree and stay in Northern California.