Posts

80,000 Hours user survey closes this Sunday 2020-09-08T17:37:20.525Z · score: 26 (9 votes)
Some promising career ideas beyond 80,000 Hours' priority paths 2020-06-26T10:34:11.912Z · score: 128 (50 votes)
Problem areas beyond 80,000 Hours' current priorities 2020-06-22T12:49:48.166Z · score: 241 (109 votes)
Essential facts and figures -- COVID-19 2020-04-20T18:33:50.565Z · score: 19 (4 votes)
Thoughts on 80,000 Hours’ research that might help with job-search frustrations 2019-04-16T18:51:04.319Z · score: 97 (63 votes)

Comments

Comment by ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-25T19:18:30.540Z · score: 1 (1 votes) · EA · GW

Thanks, this is super helpful -- for context, I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

Comment by ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T13:30:27.484Z · score: 10 (4 votes) · EA · GW

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you approximately?

Comment by ardenlk on 80,000 Hours user survey closes this Sunday · 2020-09-12T14:35:32.540Z · score: 1 (1 votes) · EA · GW

Thanks for filling out the survey and for the kind words!

Comment by ardenlk on Asking for advice · 2020-09-05T20:18:15.943Z · score: 8 (7 votes) · EA · GW

I wonder whether other people also like having deadlines attached when their feedback is requested, or specific dates suggested for meetings? Sometimes I prefer that someone ask for feedback within a week rather than within 6 months (or as soon as is convenient), because it forces me to get it off my to-do list. Though it's the best of both worlds if they also indicate that it's ok if I can't do it in that time.

Comment by ardenlk on Improving disaster shelters to increase the chances of recovery from a global catastrophe · 2020-09-04T17:13:34.274Z · score: 3 (2 votes) · EA · GW

Cheers!

Comment by ardenlk on EA reading list: Scott Alexander · 2020-08-09T07:48:33.635Z · score: 11 (4 votes) · EA · GW

Thanks! This post caused me to read 'beware systemic change', which I hadn't read before and am glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few different distinctions with the titular distinction, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem to be as much Scott's concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into technical safety and policy)
  • Biorisk
  • Building EA
  • Global priorities research
  • Improving institutional decision-making
  • Preventing extreme climate change
  • Preventing nuclear war

These are all longtermist causes. Then there are the other two very popular EA causes:

  • Ending factory farming
  • Global health

Of the issues on this list, only the AI policy part of AI safety and building EA seem to be particularly controversial. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad... but I'd guess that's more often of the 'this isn't the best use of resources' variety of positive badness.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less easily measured (probably with the popularization of longtermism).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I guess I'd say again that the AI policy half of AI safety, building EA, and improving institutional decision-making are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g., nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Maybe because people are just better than Scott seems to think they are at taking into account the possibility of being on the wrong side of things when directly estimating the EV of working on causes, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I feel a little unsure what the special moral message is that Scott refers to in the post that is necessarily different between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying "Wait, are we getting distracted? Do we really think that this thing is the most good we can do with $2,000 when we could with high confidence save someone's life if we gave it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:49.565Z · score: 3 (2 votes) · EA · GW

Great! Linked.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:21.949Z · score: 5 (3 votes) · EA · GW

Just to let you know I've revised the blurb in light of this. Thanks again!

Comment by ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:46:45.520Z · score: 8 (5 votes) · EA · GW

We also had this choice with our other problems and other paths posts, and decided against the listicle style, basically for the reasons you say. I think there is a nascent/weak norm, and I think it makes sense to uphold it. The main argument against is that it's actually kind of helpful to know whether something is a long list or a short list -- especially if I have a small bit of time and won't want to start something long.

Comment by ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:41:23.976Z · score: 3 (2 votes) · EA · GW

Thank you for writing this up!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:14:16.804Z · score: 4 (3 votes) · EA · GW

Hey Michael,

Thanks (as often) for this list! I'm wondering, might you be up for putting it into a slightly more formal standalone post or Google doc that we could potentially link to from the blurb?

Really love how you're collecting resources on so many different important topics!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:12:41.976Z · score: 5 (3 votes) · EA · GW

Thanks for these points! Very encouraging that you can do this work from such a variety of disciplines. I'll revise the blurb in light of this.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:11:23.972Z · score: 2 (2 votes) · EA · GW

Interesting! I think this might fall under global priorities research, which we have as a 'priority path' -- but it's not really talked about in our profile on that, and I agree it seems like it could be a good strategy. I'll take a look at the priority path and consider adding something about it. Thanks!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:08:11.575Z · score: 6 (4 votes) · EA · GW

Thanks so much, Rohin, for this explanation. It sounds somewhat persuasive to me, but I don't feel in a position to have a good judgement on the matter. I'll pass this on to our AI specialists to see what they think!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:07:36.077Z · score: 1 (1 votes) · EA · GW

Thanks Max -- I'll pass this on!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:55:54.813Z · score: 14 (7 votes) · EA · GW

Hi Brian,

In general, we have a heuristic according to which issues that primarily affect people in countries like the US are less likely to be high impact for more people to focus on at the margin than issues that primarily affect others or affect all people equally. While criminal justice does affect people in other countries as well, it seems like most of the most promising interventions are country-specific, and especially US-specific -- including the interventions Open Phil recommends, like those discussed here and here. The main reason for this heuristic is that these issues are likely to be less neglected (even if they're still neglected relative to how much attention they should receive in general), and likely to affect a smaller number of people. Does that make sense?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:48:23.122Z · score: 8 (5 votes) · EA · GW

Hi Tobias, we've added "governance of outer space" on your recommendation. Thanks!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T15:18:32.948Z · score: 3 (3 votes) · EA · GW

Hi Rohin,

Thanks for this comment. I don't know a lot about this area, so I'm not confident here. But I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

I guess I think this first because bugs seem capable of being big deals in this context (maybe I'm wrong there?), and second because it seems like there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Curious to hear if/why you disagree!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:39:00.129Z · score: 4 (3 votes) · EA · GW

Thanks for this feedback (and for the links)!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:35:19.368Z · score: 17 (6 votes) · EA · GW

Hm -- interesting suggestion! The basic case here seems pretty compelling to me. One question I don't know the answer to is how predictable countries' trajectories are -- like, how much would a naive extrapolation have predicted the current balance of power 50 years ago? If very unpredictable, it might not be worth it in terms of EV to bet on the extrapolation.

But I feel more intuitively excited about trying to foster home-grown EA communities in a range of such countries, since many of the people working on it would probably have reasons to be in and focus on those countries anyway because they're from there.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:23:33.239Z · score: 1 (1 votes) · EA · GW

Thanks! I'm seeing that I sometimes only used links that worked on the 80k site. Fixing the issue now.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:20:02.794Z · score: 8 (5 votes) · EA · GW

Hi Will,

To be honest, I'm not that confident in wild animal welfare being on the 'other longtermist' list rather than the 'other global' list -- we had some internal discussion on the matter and opinions differed.

Basically it's on 'other longtermist' because the case for it contributing to spreading positive values seems stronger to me than in the case of the other global problems. In some sense working on any issue spreads positive values, but wild animal welfare is sufficiently 'weird' that its success as a cause area seems more likely to disrupt people's intuitive views than successes of other areas, which might be particularly useful for spreading positive values / moral philosophy progress. In particular, the rejection of "natural = good" seems like it could be a unique and useful contribution. I also find the analogy between wild animals and other forms of consciousness that we might find ourselves influencing (alien life? artificial consciousnesses?) somewhat compelling, such that getting our heads straight on wild animal welfare might help prepare us for that.

Comment by ardenlk on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T08:02:20.110Z · score: 3 (2 votes) · EA · GW

Thank you for pointing out ea.greaterwrong.org! I've had the problem of not being able to wayback forum posts before.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:11:01.844Z · score: 1 (1 votes) · EA · GW

Hey jackmalde, interesting idea -- though I think I'd lean against writing it. I guess the main reason is something like: there are quite a few issues to explore on the above list, so if someone is searching around for something (rather than having something in mind already), they might be able to find an idea there. I guess despite what I said to Michael above, I do want people to see it as some positive signal if something's on the list. Having a list of things not on the list would probably not add a lot, because the reasons would just be pretty weak things like "a brief investigation + asking around didn't make this seem compelling according to our assumptions". Insofar as someone was already thinking of working on something and they saw that, they probably wouldn't take it as much reason to change course. Does that make sense?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:01:02.854Z · score: 3 (2 votes) · EA · GW

Thanks! Helpful pointers.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:00:11.241Z · score: 3 (2 votes) · EA · GW

Hey atlasunshrugged,

I'm afraid I don't know the answers to your specific questions. I agree that there are things worse than great power conflict, and perhaps China becoming the dominant world power could be one of those things. FWIW, although war between the US and China does seem like one of the more worrying scenarios at the moment, I meant the problem description to be broader than that and to include any great power war.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T10:00:34.453Z · score: 4 (3 votes) · EA · GW

Hey Michael,

Glad you've found it helpful, and thanks for these resource lists! I'm adding them to our internal list of resources. Is there anything you've read from them that you think it'd be particularly good to add to the above blurbs?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:45:50.244Z · score: 3 (2 votes) · EA · GW

This is great, thanks!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:44:59.967Z · score: 14 (6 votes) · EA · GW

Thanks Pablo -- I agree we should discuss risks to EA more. It seems like it should be a natural part of 'building effective altruism' to me. I wonder why we don't discuss it more in that area. Maybe people are afraid it will seem self-indulgent?

I think I'd worry about how to frame it in 80k content because our stuff is very outward-facing and people who aren't already part of the community might not respond well to it. But that's less of an issue with forum posts, etc.

I'd also guess most people's estimates for EA going away or becoming much less valuable in the next 10 years are lower than yours. Want to expand a bit on why you think it's as high as you do?

Thanks for bringing this up and also for the list of places this has been discussed!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:27:31.854Z · score: 1 (1 votes) · EA · GW

Fixed, thanks!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T21:05:01.653Z · score: 5 (3 votes) · EA · GW

Hey Michael -- there isn't such a list, though we did consider and decide not to include a number of problems in the process of putting this together. I definitely think that "X and Y are on the list so Z, which wasn't mentioned explicitly, is also likely a good area" would be a bad inference! But there are also probably lots of issues that we didn't even consider, so something not being on the list is probably at best a weak negative signal. [Edit: I shouldn't have said "at best" -- it's a weak negative signal.]

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T19:12:19.924Z · score: 1 (1 votes) · EA · GW

fixed, thanks!

Comment by ardenlk on How hot will it get? · 2020-04-29T14:38:56.342Z · score: 1 (1 votes) · EA · GW

Thanks, this is helpful!

Comment by ardenlk on How hot will it get? · 2020-04-24T17:08:09.357Z · score: 1 (1 votes) · EA · GW

Great post, and great point about the priors! I have a question about how to use/interpret these which I'd love help with from you or someone else who understands this better than I do.

Can I draw implications of your models about emissions scenarios as defined by the IPCC?

First, can I take the first model to indicate something about how likely various emissions pathways (e.g., RCP 6.0) are if we take little 'extra action'? E.g., on the "JH extrapolation" version of business as usual, that we're 95% likely not to reach above the mean RCP 8.5 emissions scenario (6180 Gt), 70% likely not to reach above the mean RCP 6.0 scenario (3885 Gt), etc. (all by 2100)?

Second, can I take your second model to indicate something about how much warming we'd get if we were to reach those emissions scenarios? So if RCP 6.0 is the 70th percentile outcome of business as usual (on the 'JH extrapolation' version), can we then take the 70th percentile of the probability density function for one of the sensitivity assumptions (say, the Webster one) for how hot it will get on that version of business as usual + that sensitivity assumption, to get the amount of warming predicted for RCP 6.0 -- i.e., 3C?

Comment by ardenlk on What are the key ongoing debates in EA? · 2020-03-09T09:32:58.783Z · score: 70 (32 votes) · EA · GW

I'm excited to read any list you come up with at the end of this!

Some I thought of:

  • How likely is it that we're living at the most influential time in history?
  • What is the total x-risk this century?
  • Are we saving/investing enough for the future?
  • How much less of an x-risk is AI if there is no "fast takeoff"? If the paper clip scenario is super unlikely? And how unlikely are those things? [Can sum up the question as: how much should we be updating on the risk from AI due to some people updating away from Bostrom-style scenarios?]
  • How important are S-risks / should we place more emphasis on reducing suffering than on creating happiness?
  • Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?
  • Should EA stay as a "big tent" or split up into different movements?
  • How much should EA be trying to grow?
  • Does EA pay enough attention to climate change?

Comment by ardenlk on Institutions for Future Generations · 2019-11-17T13:19:17.553Z · score: 7 (4 votes) · EA · GW

This is such an exciting project! Really glad you're doing it.

I have two questions/tentative suggestions on the scope/framing of the project:

(1) Are you considering any existing institutions? It seems like it could be useful to identify any existing institutions that seem advantageous for future generations -- so that we can have a better sense of the value of preserving or expanding them, and in case they could be used as templates for new institutions.

(2) The evaluation criteria seem good. But should you add something along the lines of "How likely is this institution to gain support in the future from people with nonlongtermist interests in a way that doesn't undermine its value, such that we wouldn't need to provide it with as much ongoing support?" (Related to political feasibility but more forward looking -- maybe can be just rolled into that criterion.)

Comment by ardenlk on Ask Me Anything! · 2019-09-03T14:45:04.092Z · score: 1 (1 votes) · EA · GW

Re: these being alternatives to philosophy, I see what you mean. But I think it's ok to group together non-academic philosophy and non-philosophy alternatives since it's a career review of philosophy academia. However, I take the point that I can better connect the two 'alternatives' sections in the article and have added a link.

As for individual grants, I'm hesitant to add that suggestion because I worry that it would encourage some people who aren't able to get philosophy roles in academia or in other organizations to go the 'independent' route, and I think that will rarely be the right choice.

Comment by ardenlk on Ask Me Anything! · 2019-08-21T21:46:56.706Z · score: 20 (7 votes) · EA · GW

Hey Wei_Dai, thanks for this feedback! I agree that philosophers can be useful in alignment research by way of working on some of the philosophical questions you list in the linked post. Insofar as you're talking about working on questions like those within academia, I think of that as covered by the suggestion to work on global priorities research. For instance, I know that working on some of those questions would be welcome at the Global Priorities Institute, and I think FHI would probably also welcome philosophers working on AI questions. But I agree that that isn’t clear from the article, and I’ve added a bit to clarify it.

But maybe the suggestion is to work on those questions outside academia. We mention DeepMind and OpenAI as having ethics divisions, but likely only some philosophical questions relevant to AI safety are pursued in those kinds of centers, and it could be worth listing more non-academic settings in which philosophers might be able to pursue alignment-relevant questions. There are, for instance, lots of AI ethics organizations, though most are only focused on short-term issues, and are more concerned with 'implications' than with philosophical questions that arise in the course of design. CHAI, AI Impacts, the Leverhulme Centre, and MIRI also seem to do a bit of philosophy each. The future Schwarzman Centre at Oxford may also be a good place for this once it gets going. I've edited the relevant sections to reflect this.

Do you know of any other projects or organizations that might be useful to mention? I also think your list of philosophy questions relevant to AI is useful--thanks for writing it up!-- and would like to link to it in the article.

As for the comparison with journalism and AI policy, in line with what Will wrote below I was thinking of those as suggestions for people who are trying to get out of philosophy or who will be deciding not to go into it in the first place, i.e., for people who would be good at philosophy but who choose to do something else that takes advantage of their general strengths.

Comment by ardenlk on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-19T13:17:15.403Z · score: 5 (2 votes) · EA · GW

Hey Lexande-

Just to address your last point/question: I don't think that the right thing to take away from 80,000 Hours changing its mind over the years on some of these points is pessimism about the targeted career capital one builds now being useful in 5-10 years -- there are a lot of ways to do good, and these changes reflect 80,000 Hours' changing views on what the absolute optimal way of doing good is. That's obviously a hard thing to figure out, and it obviously changes over time. But most of their advice seems pretty robust to me. Even if it looks like in 5-10 years it would have been absolutely optimal to be doing something somewhat different, having followed 80,000 Hours' advice would still likely put you in a position that is pretty close to the best place to be.

For example, if you are working toward doing governmental AI policy, and in 5-10 years that area is more saturated and so slightly less optimal than they now think it will be, and it's now better to be working in an independent think tank, or on other technology policy, etc., then (1) what you're doing is probably still pretty close to optimal, and (2) you might be able to switch over because the direct work you've been engaging in has also resulted in useful career capital.

It's also important to remember that if in 10 years some 80,000 Hours-recommended career path, such as AI policy, is less neglected than it used to be, that is a good thing, and doesn't undermine people having worked toward it--it's less neglected in this case because more people worked toward it.

Comment by ardenlk on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-19T03:37:27.900Z · score: 10 (4 votes) · EA · GW

This seems great to me! Thanks for writing this out. As for building flexible career capital (re: the comment below): flexibility is of course good all else equal, and more important the earlier people are in their careers. It's just that people can face a trade-off at some point between flexibility and usefulness to something specific. I think 80,000 Hours has changed its views on how to weight the considerations in that trade-off, favoring usefulness to something specific more than they used to. But if someone can both work toward something that they think will be really valuable and build flexible career capital at the same time, that seems all the better.

Comment by ardenlk on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T13:11:26.862Z · score: 14 (5 votes) · EA · GW

"If the way you talk about career capital here is indicative of 80k's current thinking then it sounds like they've changed their position AGAIN, mostly reversing their position from 2018"

I didn't mean for what I said to suggest a departure from 80,000 Hours' current position on career capital. They still think (and I agree) that it's better for most people to have a specific plan for impact in mind when they're building career capital, instead of just building 'generic' career capital (generally transferable skills), and that in the best case that means going straight into a career path. But of course sometimes that won't be possible and people will have to skill up somewhere else.

"how well can people feasibly target narrow career capital 5-10 years out when the skill bottlenecks of the future will surely be different?"

This is a good question, and it's of course not easy to predict what the most impactful things to do in 5-10 years will be-- it seems unlikely, though, that working toward one of 80,000 Hours’ "priority paths" will become not very useful down the line. And in general being sensitive to how skill bottlenecks might change in the future is definitely something that 80,000 Hours is keen on.

To your second point: I mean, yeah--it's hard to keep everything up to date, especially as the body of research grows, but it's obviously bad to have old content up there that is misleading or confusing. Updating and flagging (and maybe removing--I'm not sure) old content is something 80,000 Hours is working on.

"an obvious strategy for dealing with this is to explicitly state (for each page or section of the site) who the intended audience is."

I'm not sure what exactly the 80,000 Hours team would say about explicitly labeling different pages with notes about the intended audience, but my guess is that they wouldn't want to do that for a lot of their content because it's very hard to say exactly who it will be useful for. They do have something about intended audience on their homepage: "Ultimately, we want to help everyone in the world have a big social impact in their career. Right now, we’re focusing on giving advice to talented and ambitious graduates in their twenties and thirties." I know that's vague, but it seems like it has to be vague to keep from screening off people who could benefit from the research.

Maybe they could do a better job of helping people figure out what content is for them and what content isn't, but it doesn't seem to me, at least, like explicit labels at the top of pages would be the right way to go about it.

Comment by ardenlk on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-17T21:09:47.357Z · score: 20 (8 votes) · EA · GW

Hey Michael,

Thanks for commenting. With regard to your first point: I don't think there is a tension -- the idea of a list of the best careers for everyone from top to bottom doesn't make much conceptual sense. But a list of career paths that it does the most good for a specific set of people to read about and consider does make conceptual sense. I think of 80,000 Hours' list as more like the latter.

And as I wrote in a reply to your other comment below, a list like that can be really helpful for people in creating their own personal lists of what the best options are for them.

(this is basically to agree with cole_haus's reply)

Comment by ardenlk on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-17T21:04:25.153Z · score: 23 (10 votes) · EA · GW

The tension is between saying 'our research is useful: it tells (group X) of people what it is best for them to do' and 'our research does not offer a definitive ranking of what it is best for people to do (whether people in group X or otherwise)'.

Though I am saying that 80,000 Hours' research can't offer a single, definite ranking of what is best for everyone to do, that doesn't mean that their research isn't very useful for people figuring out what it is best for them to do.

The way I might put it: 80,000 Hours' research helps people put together their own list of what is best for them to do, by (1) offering lots of information people need to combine with their own knowledge about themselves to build their list -- e.g., what certain jobs are like, how people typically get into a particular job, and so on, (2) offering tools for people to use to figure out the information about themselves that they need -- like for assessing personal fit, etc., and (3) offering guidance on how to prioritize options according to the impact that people in those roles can have under various different circumstances. 80,000 Hours also does things like seek out specific positions and bring them to people's attention.

All this is really useful, I believe, for helping people do the most good they can with their careers, without any of it amounting to creating a big list of what it's best for everyone in group x (e.g., the EA community) to do.