What can the principal-agent literature tell us about AI risk? 2020-02-10T10:10:19.645Z · score: 23 (13 votes)


Comment by alexis-carlier on What posts do you want someone to write? · 2020-04-03T18:15:10.292Z · score: 1 (3 votes) · EA · GW

When to use quantitative vs qualitative research

MacAskill mentions some considerations here, but the dividing line still feels fuzzy. Sample size is one consideration, but I suspect there are many others, such as the goal of the research (e.g. arguing for the possibility vs. the plausibility of some phenomenon).

This is relevant to many EA questions, especially those relating to longtermism or disruptive technologies. For instance, this post uses qualitative methods (in-depth case studies) to argue that "an AI which is generally more intelligent than us could take over the world, even if it isn't superintelligent." I'm unsure whether three case studies actually constitute much evidence; in a comment, the author suggests that a higher-n ("quantitative") study would be helpful.

Without a framework for thinking about this, I'm often unsure what I should be learning from qualitative studies, and I don't know when it makes sense to conduct them. (This seems related to the debate between cliometricians and counterfactual narrative historians; some discussion here, page 18.)

Comment by alexis-carlier on What posts do you want someone to write? · 2020-04-03T17:35:18.268Z · score: 4 (3 votes) · EA · GW

I doubt that there is any one answer regarding the marginal value of such projects, because the value depends on what is being governed. For instance, I think a successful implementation of regulatory markets for AI safety would be very valuable, but regulatory markets for corporate law wouldn't be; yet the same basic framework would be implemented in both cases.

For this reason, I'd be more interested in analysis of governance innovation for a particular cause area.