Posts

AMA: The new Open Philanthropy Technology Policy Fellowship 2021-07-26T15:11:50.661Z
Apply to the new Open Philanthropy Technology Policy Fellowship! 2021-07-20T18:41:46.759Z
A personal take on longtermist AI governance 2021-07-16T22:08:03.981Z
EA needs consultancies 2021-06-28T15:18:38.844Z
Superforecasting in a nutshell 2021-02-25T06:11:28.886Z
Notes on 'Atomic Obsession' (2009) 2019-10-26T00:30:21.491Z
Information security careers for GCR reduction 2019-06-20T23:56:58.275Z
How big a deal was the Industrial Revolution? 2017-09-16T07:00:00.000Z
Three wild speculations from amateur quantitative macrohistory 2017-09-12T09:47:51.112Z
Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood 2017-06-28T15:49:11.655Z
Efforts to Improve the Accuracy of Our Judgments and Forecasts (Open Philanthropy) 2016-10-25T10:09:07.145Z
Meetup : GiveWell research event for Bay Area effective altruists! 2015-06-30T01:22:05.169Z
The Cognitive Science of Rationality 2011-09-12T10:35:42.246Z

Comments

Comment by lukeprog on Apply to the new Open Philanthropy Technology Policy Fellowship! · 2021-07-22T21:47:48.745Z · EA · GW

Oops! Should be fixed now.

Comment by lukeprog on A personal take on longtermist AI governance · 2021-07-19T21:36:10.789Z · EA · GW

As far as I know, it's true that there isn't much of this sort of work happening at any given time. Over the years, though, there has been a fair amount of non-public work of this sort, and it has usually failed to convince people who weren't already sympathetic to the work's conclusions (about which intermediate goals are vs. aren't worth aiming for, or about the worldview cruxes underlying those disagreements). There isn't even consensus about intermediate goals such as the "make government generically smarter about AI policy" goals you suggested, though in some (not all) cases the objection to that category is less "it's net harmful" and more "it won't be that important / decisive."

Comment by lukeprog on EA needs consultancies · 2021-07-04T12:26:54.383Z · EA · GW

A couple quick replies:

  • Yes, there are several reasons why Open Phil is reluctant to hire in-house talent in many cases, hence the "e.g." before "because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations."
  • I actually think there is more widespread EA client demand (outside OP) for EA consulting of the types listed in this post than the post itself represents, because several people who gave me feedback on the post said something like "This is great, I think my org has lots of demand for several of these services if they can be provided to a sufficient quality level, but please don't quote me on that, because I haven't thought hard enough about this and don't want people to become over-enthusiastic about it on the basis of my off-the-cuff reaction." Perhaps I should've mentioned this in the original post.

Comment by lukeprog on EA needs consultancies · 2021-06-30T22:38:21.128Z · EA · GW

I don't feel strongly. You all have more context than I do on what seems feasible here. My hunch is in favor of RP maintaining current quality (or raising it only a tiny bit) and scaling quickly for a while — I mostly wanted to give some counterpoints to your suggestion that maybe RP should lower its quality to get more quantity.

Comment by lukeprog on EA needs consultancies · 2021-06-30T15:33:43.275Z · EA · GW

I don't think EAs have a comparative advantage in policy/research in general, but I do think some EAs have a comparative advantage in doing some specific kinds of policy/research for other EAs, since EAs care more than many (not all) clients about certain analytic features, e.g. scope-sensitivity, focus on counterfactual impact, probability calibration, reasoning transparency of a particular sort, a tolerance for certain kinds of weirdness, etc.

Comment by lukeprog on EA needs consultancies · 2021-06-30T15:30:55.220Z · EA · GW

Other Rethink Priorities clients (including at Open Phil) might disagree, but my hunch is that, if anything, higher quality and lower quantity is the way to go, because a client like me has less context on consultants doing some project than on someone I've directly managed (internally) on research projects for 2 years. So e.g. Holden vetted my Open Phil work pretty closely for 2 years and now feels less need to do so, because he has a sense of what my strengths and weaknesses are, where he can just defer to me and where he should make sure to develop his own opinion, etc. That's part of the massive cost of hiring, training, and managing internal talent, but it eventually gets you to a place where you don't need to be so nervous about major crippling flaws (of some kinds) in someone's work. A major purpose of outsourcing analysis work, by contrast, is to get some information you need without first having built up months or years of deep context with the people doing it. So how can I trust the work of someone I have so little context with? I think "go kinda overboard on legibility / reasoning transparency" and "go kinda overboard on quality / thoroughness / vetting" are two major strategies, especially when the client is far more time-constrained than funding-constrained (as Open Phil is).

Comment by lukeprog on EA needs consultancies · 2021-06-30T15:30:16.569Z · EA · GW

Thanks for your thoughtful comment!

Re: reluctance. Can you say more about the concern about donor perceptions? E.g. maybe grantmakers like me should more often nudge grantees with questions like "How could you get more done / move faster by outsourcing some work to consultants/contractors?" I've done that in a few cases but haven't made a consistent effort to signal willingness to fund subcontracts.

What do you mean about approval from a few parties? Is it different than other expenditures?

Re: university rules. Yes, very annoying. BERI is trying to help with that, and there could be more BERIs.

Re: "isolated to Open Phil." Agree that the consultancy model doesn't help much if in practice there's only one client, or just a few — hence my attempt (mostly in the footnotes) to get some sense of how much demand there is for these services outside Open Phil. Of course, with Open Phil being the largest funder in the EA space, many potential clients of EA consultancies are themselves in part funded by Open Phil, but that doesn't seem too problematic so long as Open Phil isn't institutionally opposed to subgranting/subcontracting.

(Even within Open Phil, a bit of robustness could come from multiple teams demanding a particular genre of services, e.g. at least 3 pretty independent teams at Open Phil have contracted Rethink Priorities for analysis work. But still much safer for contractors if there are several truly independent clients.)

Re: prices. Seems like an education issue. If you find you need additional validation for the fact that contractors have good reasons for costing ~1.3x to 2x as much as an employee per hour worked, feel free to point people to this comment. :)

Re: subsidizing. Yes, this would be interesting to think more about. There's even a model like Founders Pledge and Longview where donors fund the service entirely and then the consultant provides the services for free to clients (in this case, donor services to founders and high-net-worth individuals).

I'm struggling to parse "Many contractors that organizations themselves come from those organizations." Could you rephrase?

Definitely agree that understanding the internal needs of clients is difficult. Speaking as someone trying to communicate my needs/desires to various grantees and consultants, it also feels difficult on this end of things. This difficulty is often a major reason to do something in-house even if it would in theory be simpler and more efficient to outsource. E.g. it's a major part of why Open Phil has built a "worldview investigations" team: it's sort-of weird to have a think tank within a grantmaker instead of just funding external think tanks, but it was too hard to communicate to external parties exactly what we needed to make our funding decisions, so the only way forward was to hire that talent internally so we could build up more shared context etc. with the people doing that work. That was very expensive in staff time, but ultimately the only way to get what we needed. In other cases, though, it should be possible (and has been possible) for clients to communicate what they need to consultants. One person I spoke to recently suggested that programs like RSP could be a good complement to consultancy work, because they allow more people to hang out and gain context on how potential future clients (in that case FHI, but also sort-of "veteran hardcore longtermists in general") think about things and what they need.

Comment by lukeprog on EA needs consultancies · 2021-06-30T15:29:56.046Z · EA · GW

The problem I'm trying to solve (at the top of the post) is that (non-consultancy) EA organizations like Open Phil, for a variety of reasons, can't hire the talent we need to accomplish everything we'd like to accomplish. So when we do manage to hire someone into a specific role, I think their work in that role can be highly valuable, and if they're performing well in that role after the first ~year then my hunch is they should stay in that role for at least a few years. That said, we've had staff leave and become a grantee/similar instead, and I could imagine some staff leaving to become an EA consultant at some point if they think they can accomplish more good that way and/or if they think that's a better fit for them personally.

Comment by lukeprog on EA needs consultancies · 2021-06-30T15:29:39.023Z · EA · GW

I don't think that would play to Open Phil's comparative advantages especially well. I think Open Phil should focus on figuring out how to move large amounts of money toward high-ROI work.

Comment by lukeprog on EA needs consultancies · 2021-06-30T12:28:44.849Z · EA · GW

Interesting, thanks, I didn't know about this. That group's first newsletter says:

  • The EACN network consist of 200+ members by now
  • All major consulting firms represented
  • BCG & McKinsey launched their own internal EA slack channels - featuring 70+ consultants each

Comment by lukeprog on EA needs consultancies · 2021-06-29T02:03:36.249Z · EA · GW

Thanks, I didn't know this!

Comment by lukeprog on EA needs consultancies · 2021-06-28T21:43:53.193Z · EA · GW

I agree, the EA Infrastructure Fund seems like a great source of funding for launching potential new EA consultancies!

Comment by lukeprog on EA needs consultancies · 2021-06-28T21:41:43.922Z · EA · GW

Yeah, I originally had the same thought, and I considered e.g. web development, event management, legal services, and HR services as not benefiting enough from EA context etc. to be worth the opportunity cost of EA talent, but then several people at multiple organizations said "Actually we've struggled to get what we want from non-EA consultants doing those things. I really wish I could contract EA consultants to do that work instead." So I added them to the list of possibilities for services that EA consultancies could provide.

I'm still not sure which conditions make it worth the opportunity cost of EA talent to provide these kinds of services, but I wanted to list them as possibilities given the feedback I got on earlier drafts of this post.

See also footnote 18.

Comment by lukeprog on On the limits of idealized values · 2021-06-25T17:03:04.027Z · EA · GW

Just FYI, some additional related literature is cited here.

Comment by lukeprog on High Impact Careers in Formal Verification: Artificial Intelligence · 2021-06-08T23:20:15.304Z · EA · GW

Is it easy to dig up a source for the RL agent that learned to crash into the deck?

Comment by lukeprog on EA is a Career Endpoint · 2021-05-15T02:52:47.575Z · EA · GW

I broadly endorse this advice.

Comment by lukeprog on A central directory for open research questions · 2021-05-14T20:53:15.420Z · EA · GW

Addition: Open Problems in Cooperative AI.

Comment by lukeprog on Why AI is Harder Than We Think - Melanie Mitchell · 2021-04-28T14:04:39.876Z · EA · GW

I wish "relative skeptics" about deep learning capability timelines such as Melanie Mitchell and Gary Marcus would move beyond qualitative arguments and try to build models and make quantified predictions about how quickly they expect things to proceed, a la Cotra (2020) or Davidson (2021) or even Kurzweil. As things stand today, I can't even tell whether Mitchell or Marcus have more or less optimistic timelines than the people who have made quantified predictions, including e.g. authors from top ML conferences.

Comment by lukeprog on International cooperation as a tool to reduce two existential risks. · 2021-04-19T23:25:45.583Z · EA · GW

I think EAs focused on x-risks are typically pretty gung-ho about improving international cooperation and coordination, but it's hard to know what would actually be effective for reducing x-risk, rather than just e.g. writing more papers about how cooperation is desirable. There are a few ideas I'm exploring in the AI governance area, but I'm not sure how valuable and tractable they'll look upon further inspection. If you're curious, some concrete ideas in the AI space are laid out here and here.

Comment by lukeprog on EA Debate Championship & Lecture Series · 2021-04-07T16:45:31.151Z · EA · GW

This seems great to me, please do more.

Comment by lukeprog on Strong Longtermism, Irrefutability, and Moral Progress · 2021-04-01T00:57:30.602Z · EA · GW

I know I'm late to the discussion, but…

I agree with AGB's comment, but I would also like to add that strong longtermism seems like a moral perspective with much less "natural" appeal, and thus much less ultimate growth potential, than neartermist EA causes such as global poverty reduction or even animal welfare.

For example, I'm a Program Officer in the longtermist part of Open Philanthropy, but >80% of my grantmaking dollars go to people who are not longtermists (who are nevertheless doing work I think is helpful for certain longtermist goals). Why? Because there are almost no longtermists anywhere in the world, and even fewer who happen to have the skills and interests that make them a fit for my particular grantmaking remit. Meanwhile, Open Philanthropy makes far more grants in neartermist causes (though this might change in the future), in part because there are tons of people who are excited about doing cost-effective things to help humans and animals who are alive and visibly suffering today, and not so many people who are excited about trying to help hypothetical people living millions of years in the future.

Of course to some degree this is because longtermism is fairly new, though I would date it at least as far back as Bostrom's "Astronomical Waste" paper from 2003.

I would also like to note that many people I speak to who identify (like me) as "primarily longtermist" have sympathy (like me) for something like "worldview diversification," given the deep uncertainties involved in the quest to help others as much as possible. So e.g. while I spend most of my own time on longtermism-motivated efforts, I also help out with other EA causes in various ways (e.g. this giant project on animal sentience), and I link to or talk positively about GiveWell top charities a lot, and I mostly avoid eating non-AWA meat, and so on… rather than treating these non-longtermist priorities as a rounding error. Of course some longtermists take a different approach than I do, but I'm hardly alone in my approach.

Comment by lukeprog on Forecasting Newsletter: January 2021 · 2021-02-02T02:29:27.685Z · EA · GW

Cool search engine for probabilities! Any chance you could add Hypermind?

Comment by lukeprog on Why are party politics not an EA priority? · 2021-01-03T20:31:22.887Z · EA · GW

A few links that may be of interest:

Comment by lukeprog on The EA Meta Fund is now the EA Infrastructure Fund · 2020-08-20T16:17:39.948Z · EA · GW

I like the new name; solid choice.

Comment by lukeprog on Informational Lobbying: Theory and Effectiveness · 2020-08-03T12:10:25.052Z · EA · GW

Thanks for this!

FWIW, I'd love to see a follow-up review on lobbying Executive Branch agencies. They're less powerful than Congress, but often more influenceable as well, and can sometimes be the most relevant target of lobbying if you're aiming for a very specific goal (that is too "in the weeds" to be addressed directly in legislation). I found Godwin et al. (2012) helpful here, but I haven't read much else. Interestingly, Godwin et al. find that some of the conclusions from Baumgartner et al. (2009) about Congressional lobbying don't hold for agency lobbying.

Comment by lukeprog on Forecasting Newsletter: May 2020. · 2020-05-31T18:33:35.044Z · EA · GW

Thanks!

Some additional recent stuff I found interesting:

  • This summary of US and UK policies for communicating probability in intelligence reports.
  • Apparently Niall Ferguson’s consulting firm makes & checks some quantified forecasts every year: “So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months, either “true”, “false” or “not proven”. In recent years, we have also forced ourselves to attach probabilities to our predictions — not easy when so much lies in the realm of uncertainty rather than calculable risk. We have, in short, tried to be superforecasters.”
  • Review of some failed long-term space forecasts by Carl Shulman.
  • Some early promising results from DARPA SCORE.
  • Assessing Kurzweil predictions about 2019: the results
  • Bias, Information, Noise: The BIN Model of Forecasting is a pretty interesting result if it holds up. Another explanation by Mauboussin here. Supposedly this is what Kahneman's next book will be about; HBR preview here.
  • GJP2 is now recruiting forecasters.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-05-06T19:34:30.403Z · EA · GW

The headline looks broken in my browser. It looks like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The last explicit probabilistic prediction I made was probably a series of forecasts on my most recent internal Open Phil grant writeup, since it's part of our internal writeup template to prompt the grant investigator for explicit probabilistic forecasts about the grant. But it could've easily been elsewhere; I do somewhat-often make probabilistic forecasts just in conversation, or in GDoc/Slack comments, though for those I usually spend less time pinning down a totally precise formulation of the forecasting statement, since it's more about quickly indicating to others roughly what my views are rather than about establishing my calibration across a large number of precisely stated forecasts.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-05-01T17:52:49.915Z · EA · GW

Note that the headline ("Good Judgement Project: gjopen.com") is still confusing, since it seems to be saying GJP = GJO. The thing that ties the items under that headline is that they are all projects of GJI. Also, "Of the questions which have been added recently" is misleading since it seems to be about the previous paragraph (the superforecasters-only questions), but in fact all the links go to GJO.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-04-30T22:46:37.148Z · EA · GW

Nice to see a newsletter on this topic!

Clarification: The GJO coronavirus questions are not funded by Open Phil. The thing funded by Open Phil is this dashboard (linked from our blog post) put together by Good Judgment Inc. (GJI), which runs both GJO (where anyone can sign up and make forecasts) and their Superforecaster Analytics service (where only superforecasters can make forecasts). The dashboard Open Phil funded uses the Superforecaster Analytics service, not GJO. Also, I don't think Tetlock is involved in GJO (or GJI in general) much at all these days, but GJI is indeed the commercial spinoff from the Good Judgment Project (GJP) that Tetlock & Mellers led and which won the IARPA ACE forecasting competition and resulted in the research covered in Tetlock's book Superforecasting.

Comment by lukeprog on Insomnia with an EA lens: Bigger than malaria? · 2020-04-21T16:10:30.794Z · EA · GW

I wrote up some thoughts on CBT-I and the evidence base behind it here.

Comment by lukeprog on Information security careers for GCR reduction · 2019-12-31T22:44:04.006Z · EA · GW

Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts' objections to typical GCR concerns of EAs?

Comment by lukeprog on Rethink Priorities 2019 Impact and Strategy · 2019-12-03T02:44:45.181Z · EA · GW

FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn't vet any of it in depth. (And prior to 2019 I think I wasn't aware of Rethink.)

Comment by lukeprog on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T22:41:18.029Z · EA · GW

FWIW, it's not clear to me that AI alignment folks with different agendas have put less effort into (or have made less progress on) understanding the motivations for other agendas than is typical in other somewhat-analogous fields. Like, MIRI leadership and Paul have put >25 (and maybe >100, over the years?) hours into arguing about the merits of their differing agendas (in person, on the web, in GDocs comments), and my impression is that central participants in those conversations (e.g. Paul, Eliezer, Nate) can pass the others' ideological Turing tests reasonably well on a fair number of sub-questions and down 1-3 levels of "depth" (depending on the sub-question), and that might be more effort and better ITT performance than is typical for "research agenda motivation disagreements" in small niche fields that are comparable on some other dimensions.

Comment by lukeprog on Opinion: Estimating Invertebrate Sentience · 2019-11-07T17:06:38.592Z · EA · GW

Interesting stuff, thanks!

Comment by lukeprog on Notes on 'Atomic Obsession' (2009) · 2019-11-04T04:42:11.151Z · EA · GW

Thanks!

Comment by lukeprog on Information security careers for GCR reduction · 2019-10-15T22:48:13.619Z · EA · GW

Awesome, thanks for letting us know!

Comment by lukeprog on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-14T00:59:41.447Z · EA · GW

Thanks! I knew there was one major study I was missing from the 70s, and that I had emailed people about before, but I couldn't track it down when I was writing this post, and I'm pretty sure this is the one I was thinking of. Of course, this study suffers from several of the problems I list in the post.

Comment by lukeprog on What to know before talking with journalists about EA · 2019-09-05T03:53:19.766Z · EA · GW

The linked "full guide" seem to require sign-in?

Comment by lukeprog on AI Forecasting Question Database (Forecasting infrastructure, part 3) · 2019-09-04T02:58:31.374Z · EA · GW

Nice work!

Comment by lukeprog on Are we living at the most influential time in history? · 2019-09-04T02:48:39.940Z · EA · GW

Great post!

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

Minor point, but I think this is unclear. On AI see e.g. here. On synbio I'm less familiar, but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-15T20:53:44.307Z · EA · GW

On the difference between the role we've tried to hire for at Open Phil specifically and a typical Security Analyst or Security Officer role, a few things come to mind, though we also think we don't yet have a great sense of the range of security roles throughout the field. One possible difference is that many security roles focus on security systems for a single organization, whereas we've primarily looked for someone who could help both Open Phil and some of our grantees, each of which has potentially quite different needs. Another possible difference is that our GCR focus in AI and biosecurity leads us to some non-standard threat models, and it has been difficult thus far for us to find experienced security experts who readily adapt standard security thinking to a somewhat different set of threat models.

Re: industry roles that would be particularly good or bad preparation. My guess is that for the GCR-mitigating roles we discuss above (i.e. not just potential future roles at Open Phil), the roles that offer better preparation will tend to (a) expose one to many different types of challenges, and different aspects of those challenges, rather than being very narrowly scoped, (b) involve threat modeling of, and defense from, very capable and well-resourced attackers, and (c) require some development of novel solutions (not necessarily new crypto research; could also just be new configurations of interacting hardware/software systems and user behavior policies and training), among other things.

Comment by lukeprog on Invertebrate Welfare Cause Profile · 2019-07-15T01:37:28.591Z · EA · GW

On factors of potential relevance to moral weight, including some that could intuitively upweight many invertebrates, see also my Preliminary thoughts on moral weight.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:22:26.084Z · EA · GW

Presumably, though I know very little about that and don't know how much value would be added there by someone focused on worst case scenarios (over their replacement).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:20:56.755Z · EA · GW

This all sounds right to me, though I think some people have different views, and I'm hardly an expert. Speaking for myself at least, the things you point to are roughly why I wanted the "maybe" in front of "relevant roles in government." Though one added benefit of doing security in government is that, at least if you get a strong security clearance, you might learn classified helpful things about e.g. repelling state-originating APTs.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-03T03:31:01.714Z · EA · GW

Yeah, something closer to the former.

Comment by lukeprog on How likely is a nuclear exchange between the US and Russia? · 2019-07-02T01:14:56.852Z · EA · GW

Very minor: "GJP" should be "GJI." Good Judgment Project ended with the end of the IARPA ACE tournaments. The company that pays superforecasters from that project to continue making forecasts for clients is Good Judgment Inc.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:44.025Z · EA · GW

I think we meant a bit of (b) and (c) but especially (a).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:28.453Z · EA · GW

IIRC the main concern in the earlier conversations was about how many high-impact roles of this type there might really be in the next couple decades. Probably the number is smaller than (e.g.) the number of similarly high-impact "AI policy" roles, but (as our post says) we think the number of high-impact roles of this type will be substantial. And given how few GCR-focused people there are in general, and how few of them are likely a personal fit for this kind of career path anyway, it might well be that even if many of the people who are a good fit for this path pursue it, that would still not be enough to meet expected need in the next couple decades.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:10.918Z · EA · GW

The key roles we have in mind are a bit closer to what is sometimes called "security officer," i.e. someone who can think through (novel, GCR-focused) threat models, plausibly involving targeted state-based attacks, develop partly-custom system and software solutions that are a match to those threat models, think through and gather user feedback about tradeoffs between convenience and security of those solutions, develop and perhaps deliver appropriate training for those users, etc. Some of this might include things like "protect some unusual configuration of AWS services," but I imagine that might also be something that the security officer is able to outsource. We’ve tried working with a few security consultants, and it hasn’t met our needs so far.

Projects like "develop novel cryptographic methods" might also be useful in some cases — see my bullet points on research (rather than implementation) applications of security expertise in the context of AI — but they aren't the modal use-case we're thinking of.

But also, we haven't studied this potential career path to the level of depth that (e.g.) 80,000 Hours typically does when developing a career profile, so we have more uncertainty about many of the details here even than is typically represented in an 80,000 Hours career profile.

Comment by lukeprog on Who are the people that most publicly predicted we'd have AGI by now? Have they published any kind of retrospective, and updated their views? · 2019-06-29T18:35:17.032Z · EA · GW

You could dive into the specific examples in the spreadsheet linked here (the MIRI AI predictions dataset).