Comments

Comment by elizabethbarnes on COVID-19 brief for friends and family · 2020-03-03T18:15:35.874Z · score: 6 (3 votes) · EA · GW

Although I believe all the deaths were at a nursing home, where you'd expect a much higher death rate.

Comment by elizabethbarnes on COVID-19 brief for friends and family · 2020-03-03T18:03:30.326Z · score: 6 (4 votes) · EA · GW

A big source of uncertainty is how long the fatigue persists - it wasn't entirely clear from the SARS paper whether the rate reported was the fraction of people who still had fatigue at 4 years or the fraction who'd had it at some point. The numbers are very different if it's a few months of fatigue vs the rest of your life. I'm also not sure I've split up persistent chronic fatigue vs temporary post-viral fatigue properly.

Comment by elizabethbarnes on COVID-19 brief for friends and family · 2020-03-02T21:52:20.770Z · score: 21 (11 votes) · EA · GW

A friend pointed me to a study showing a high rate of chronic fatigue in SARS survivors (40%). I did a quick analysis of the risk of chronic fatigue from getting COVID-19 (my best guess for young healthy people is ~2 weeks lost in expectation, but it could be less than a day or more like 100 days under what seem like reasonable assumptions): https://docs.google.com/spreadsheets/d/1z2HTn72fM6saFH42VKs6lEdvooLJ6qaXwCrQ5YZ33Fk/edit?usp=sharing
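Roughly, the shape of the calculation is: expected days lost = P(catch COVID-19) × P(chronic fatigue | infection) × mean duration of the fatigue. The sketch below uses illustrative placeholder numbers chosen only to span the "less than a day" to "~100 days" range mentioned above - they are not the values actually used in the spreadsheet.

```python
# Sketch of the expected-value arithmetic behind the linked spreadsheet.
# All parameter values are illustrative placeholders, NOT the spreadsheet's figures.

def expected_days_lost(p_infection, p_chronic_fatigue, mean_duration_days):
    """Expected days lost to chronic/post-viral fatigue for one person."""
    return p_infection * p_chronic_fatigue * mean_duration_days

scenarios = {
    # (P(catch COVID-19), P(chronic fatigue | infection), mean duration in days)
    "optimistic":  (0.1, 0.01, 180),
    "best guess":  (0.3, 0.05, 1000),
    "pessimistic": (0.6, 0.20, 1100),
}

for name, params in scenarios.items():
    print(f"{name}: ~{expected_days_lost(*params):.1f} days lost in expectation")
```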

Comment by elizabethbarnes on EA Survey 2018 Series: Donation Data · 2019-03-06T15:33:37.107Z · score: 6 (4 votes) · EA · GW

Thanks for doing this! Some nitpicking on this graph of donations vs income: https://i.ibb.co/wLd1vSg/donations-income-scatter.png

1) The trendline looks a bit weird. Did you force it to go through (0,0)? (See the sketch after this list.)

2) Your axis labels initially go up by factors of 100, but the last one goes up by only a factor of 10.
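To illustrate the trendline point (with made-up data, not the survey dataset): a fit forced through the origin can sit well away from the ordinary least-squares line whenever the data have a nonzero intercept.

```python
# Toy illustration (not the survey's actual plotting code) of why forcing a
# trendline through (0, 0) can make it look "weird".
import numpy as np

rng = np.random.default_rng(0)
income = rng.uniform(20_000, 200_000, size=200)
donations = 500 + 0.03 * income + rng.normal(0, 800, size=200)  # fake data

# Ordinary least squares with an intercept: y = a*x + b
slope_free, intercept_free = np.polyfit(income, donations, 1)

# Least squares forced through the origin: y = a*x
slope_origin = (income @ donations) / (income @ income)

print(f"free fit:        y = {slope_free:.4f}x + {intercept_free:.1f}")
print(f"through origin:  y = {slope_origin:.4f}x")
```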

Comment by elizabethbarnes on Three Biases That Made Me Believe in AI Risk · 2019-02-18T12:21:05.243Z · score: 22 (15 votes) · EA · GW

Thanks for the post! I am generally pretty worried that I and many people I know are all deluding ourselves about AI safety - it has a lot of red flags from the outside (although these are lessening as more experts come on board, more progress is made in AI capabilities, and more concrete work is done on safety). I think it's more likely than not that we've got things completely wrong, but that it's still worth working on. If that's not the case, I'd like to know!

I like your points about language. I think there's a closely related problem: it's very hard to talk or think about anything that's between human-level at some task and omnipotent. Once you try to imagine something that can do things humans can't, it becomes impossible to argue that the system couldn't do some particular thing - there's always the retort that just because you, a human, think it's impossible doesn't mean a more intelligent system couldn't achieve it.

On the other hand, I think there are some good examples of couching safety concerns in non-anthropomorphic language. I like Dr Krakovna's list of specification gaming examples: https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/

I also think Iterated Distillation and Amplification is a good example of a discussion of AI safety and potential mitigation strategies that's couched in ideas of training distributions and gradient descent rather than desires and omnipotence.
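For anyone who hasn't seen it, the structure of the IDA loop can be sketched in a few lines. This is only a toy illustration added for concreteness, not code from Christiano's write-ups: the "model" here is a lookup table and "distillation" is memorisation, whereas the real scheme distills into a learned model via gradient descent.

```python
# Toy sketch of the Iterated Distillation and Amplification (IDA) loop.
# Everything here is a stand-in: real IDA uses a learned model and trains it
# (distillation) to imitate the amplified human+model system.

def model(task, cache):
    """The current distilled 'model': answers tasks it has been trained on, else None."""
    return cache.get(task)

def amplify(task, cache):
    """A 'human' answers a task by decomposing it and consulting the model on the pieces."""
    answer = model(task, cache)
    if answer is not None:
        return answer
    if len(task) == 1:                      # small enough for the human to answer directly
        return task[0]
    mid = len(task) // 2
    return amplify(task[:mid], cache) + amplify(task[mid:], cache)  # human combines subanswers

def distill(tasks, cache):
    """'Train' the model to reproduce the amplified system's answers (here: memorise them)."""
    for task in tasks:
        cache[task] = amplify(task, cache)

# One round of IDA on a toy task (summing tuples of numbers):
cache = {}
distill([(1, 2), (3, 4), (1, 2, 3, 4)], cache)
print(cache[(1, 2, 3, 4)])                  # 10 - the distilled model now answers this directly
```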

Re the sense-of-meaning point, I don't think that's been my personal experience - I switched into CS from biology partly because of concern about x-risk, and I know various other people who switched fields from physics, music, maths and medicine. As far as I can tell, the arguments for AI safety still mostly hold up now that I know more about the relevant fields, and I don't think I've noticed egregious errors in major papers. I've definitely noticed some people who advocate for the importance of AI safety making mistakes and being confused about CS/ML fundamentals, but I don't think I've seen this from serious AI safety researchers.

Re anchoring, this seems like a very strong claim. I think a sensible baseline here would be expert surveys, which usually put several percent probability on human-level machine intelligence (HLMI) being catastrophically bad (e.g. https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/#Chance_that_the_intelligence_explosion_argument_is_about_right).

I'd be curious whether you have an explanation for why your numbers are so far from the expert estimates. I don't think these expert surveys are a reliable source of truth, just a good ballpark for what order of magnitude we should be considering.