Comments

Comment by gsastry on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:01:11.911Z · EA · GW

What key metrics do research analysts pay attention to in the course of their work? More broadly, how do employees know that they're doing a good job?

Comment by gsastry on Changes in funding in the AI safety field · 2017-02-07T02:54:20.194Z · EA · GW

Luke Muehlhauser posted a list of strategic questions here: http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/ (originally posted in 2014).

Comment by gsastry on Ask MIRI Anything (AMA) · 2016-12-06T06:07:58.780Z · EA · GW

By (3), do you mean the publications that are listed under "forecasting" on MIRI's publications page?

Comment by gsastry on Why I'm donating to MIRI this year · 2016-12-04T21:56:50.486Z · EA · GW

I agree that this makes sense in the "ideal" world, where potential donors have better mental models of this sort of research pathway, and I have found this sort of thinking useful as a potential donor myself.

From an organizational perspective, I think MIRI should put more effort into producing visible explanations of their work (depending, of course, on their strategy for getting funding). As worries about AI risk become more widely known, there will be a larger pool of potential donations to research in the area. MIRI risks being outcompeted by others who are better at explaining how their work decreases risk from advanced AI (I think this concern applies to both talent and money, but here I'm specifically talking about money).

High-touch, extremely large donors will probably get better explanations, progress reports, etc. from organizations, but the pool of potential $ from donors who just read what's available online may be very large, and heavily influenced by clear explanations of the work. This pool of donors is also more subject to network effects, cultural norms, and memes. Given that MIRI is running public fundraisers to close funding gaps, it seems that they do rely on these sorts of donors for essential funding. Ideally, they'd just have a bunch of unrestricted funding to keep them secure forever (including allaying the risk of potential geopolitical crises and macroeconomic downturns).

Comment by gsastry on Ask MIRI Anything (AMA) · 2016-10-12T23:47:02.859Z · EA · GW

Do you share Open Phil's view that there is a > 10% chance of transformative AI (defined as in Open Phil's post) in the next 20 years? What signposts would alert you that transformative AI is near?

Relatedly, suppose that transformative AI will happen within about 20 years (not necessarily via a self-improving AGI). Can you explain how MIRI's research would be relevant in such a near-term scenario (e.g. if it happens by scaling up deep learning methods)?

Comment by gsastry on Ask MIRI Anything (AMA) · 2016-10-12T23:37:09.363Z · EA · GW

The authors of the "Concrete Problems in AI Safety" paper distinguish between misuse risks and accident risks. Do you think in these terms, and how does your roadmap address misuse risk?