Posts

EA intro videos for kids 2021-05-24T17:45:42.115Z
Day One Project Technology Policy Accelerator 2021-02-08T15:14:41.073Z
New article from Oren Etzioni 2020-02-25T15:38:38.073Z
Brief summary of key disagreements in AI Risk 2019-12-26T19:40:28.354Z

Comments

Comment by iarwain on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T20:34:28.137Z · EA · GW

In your 80,000 Hours interview you talked about worldview diversification. You emphasized the distinction between total utilitarianism and person-affecting views within the EA community. What about diversification beyond utilitarianism entirely? How would you incorporate other normative ethical views into cause prioritization considerations? (I'm aware that in general this is basically just the question of moral uncertainty, but I'm curious how you and Open Phil view this issue in practice.)

Comment by iarwain on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T20:22:07.692Z · EA · GW

True. My main concern here is the lamppost issue (looking under the lamppost because that's where the light is). If unknown unknowns affect the probability distribution, then personally I'd prefer to incorporate that, or at least acknowledge it explicitly. This isn't a critique - I think you do acknowledge it - just a comment.

Comment by iarwain on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T19:23:41.440Z · EA · GW

Shouldn't a combination of those two heuristics lead to spreading out the probability, but with somewhat more probability mass on the longer term rather than the shorter term?

Comment by iarwain on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T19:04:10.444Z · EA · GW
  • What skills/types of people do you think AI forecasting needs?

I know you asked Ajeya, but I'm going to add my own unsolicited opinion: we need more people with professional risk analysis backgrounds, and if we're going to do expert judgment elicitations as part of forecasting, then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)

Comment by iarwain on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T18:41:12.789Z · EA · GW

For thinking about AI timelines, how do you go about choosing the best reference classes to use (see e.g., here and here)?

Comment by iarwain on I am Nate Soares, AMA! · 2015-06-11T14:00:12.975Z · EA · GW

I know that in the past LessWrong, HPMOR, and similar community-oriented publications have been a significant source of recruitment for causes MIRI is interested in - rationality, EA, awareness of the AI problem - as well as of actual research associates (including yourself, I think). What, if anything, are you planning to do to further support community engagement of this sort? Specifically, as a LW member I'm interested to know whether you have any plans to help LW in some way.