Posts

Does 80,000 Hours focus too much on AI risk? 2019-11-02T20:14:16.598Z · score: 57 (50 votes)

Comments

Comment by earlyvelcro on Does 80,000 Hours focus too much on AI risk? · 2019-11-04T06:57:27.144Z · score: 12 (6 votes) · EA · GW

Thank you for the thoughtful response, Howie. :)

That said, I do see how the page might give the impression that AI dominates 80k’s recommendations since most of the other paths/problems talked about are ‘meta’ or ‘capacity building’ paths.

Indeed. When Todd replied earlier that only 2 of the 9 paths were directly related to AI safety, I have to say it felt slightly disingenuous to me, even though I'm sure he did not mean it that way. Many of the other paths could be interpreted as "indirectly help AI safety." (Other than that, I appreciated Todd's comment.)

The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren't among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.

I'm looking forward to this list of other potentially promising areas. Should be quite interesting.

Comment by earlyvelcro on What are your top papers of the 2010s? · 2019-11-02T20:14:16.604Z · score: 10 (5 votes) · EA · GW

The following list is highly biased towards EA authors. That's not to say that non-EA authors haven't done a lot of important work. It just means that I haven't read it. I'm only including articles that haven't been mentioned by others so far.

In philosophy:

Several of Nick Bostrom's papers are insightful. I'm not going to discuss them one-by-one because it would take too long. :-) Even though I don't agree with all of his arguments, they have nonetheless influenced my thinking.

"On the Overwhelming Importance of Shaping the Long-Term Future" (2013) by Nicholas Beckstead

This PhD thesis presents philosophical arguments in favor of working to improve the far future. Beckstead discusses different ways of doing so (existential risk reduction, trajectory changes, ripple effects of short-run altruism) and responds to objections. The thesis includes a section on population ethics that I still use as a reference.

"The Importance of Wild-Animal Suffering" (2015) by Brian Tomasik

(Note: The first version of this article was written in July 2009, but only formally published as a paper in 2015.)

This was the first article I read that made me seriously consider reducing wild-animal suffering as a cause worthy of significant attention, and it was highly influential on me personally. It was not the first article to argue that humans should reduce wild-animal suffering, but whereas previous work was mainly philosophical in nature, Tomasik goes into much more depth and covers many more details and crucial considerations. He is also largely responsible for the existence of organizations like Wild Animal Initiative and Animal Ethics, which are actively trying to address wild-animal suffering.

"Dissolving the Fermi Paradox" (2018) by Anders Sandberg, Eric Drexler, Toby Ord

(Note: I'm not sure whether this article was ever actually formally published.)

This article shows that previous work on the Drake equation / Fermi paradox rested on a basic statistical mistake: researchers would plug a single point estimate into each variable of the Drake equation and multiply them together, discarding the (often enormous) uncertainty in each factor. The authors show that when that uncertainty is propagated properly, treating each factor as a probability distribution and examining the resulting distribution of outcomes, the lack of evidence of aliens is no longer such a surprise: even if the expected number of civilizations is large, a substantial share of the probability mass falls on a nearly empty galaxy.
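The contrast between the two calculations can be sketched with a small Monte Carlo simulation. All the numbers and uncertainty ranges below are my own illustrative placeholders, not values from the paper; the point is only the method: multiplying point estimates versus sampling each factor from a wide distribution.

```python
import math
import random

# The point-estimate approach the paper criticizes: pick one "best
# guess" per Drake-equation factor and multiply. These values are
# illustrative placeholders, not the paper's.
point_estimates = [10.0, 0.5, 0.9, 0.1, 0.1, 0.1, 1000.0]
N_point = math.prod(point_estimates)

# The approach the paper advocates: represent the uncertainty in each
# factor as a distribution (log-uniform here), sample repeatedly, and
# look at the whole distribution of N rather than a single product.
def sample_log_uniform(lo, hi):
    """Draw one value from a log-uniform distribution on [lo, hi]."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

# Hypothetical uncertainty ranges spanning many orders of magnitude --
# the key feature driving the paper's result.
ranges = [(1.0, 100.0), (0.1, 1.0), (0.001, 1.0), (1e-10, 1.0),
          (0.001, 1.0), (0.01, 1.0), (100.0, 1e10)]

random.seed(0)
samples = []
for _ in range(100_000):
    n = 1.0
    for lo, hi in ranges:
        n *= sample_log_uniform(lo, hi)
    samples.append(n)

# Fraction of samples with fewer than one other detectable
# civilization: substantial, even though the mean of the sampled
# distribution (like the point-estimate product) suggests a crowded
# galaxy.
frac_empty = sum(1 for n in samples if n < 1) / len(samples)
```

With ranges this wide, a large fraction of samples yields an essentially empty galaxy, so "no aliens observed" carries little evidential surprise, while the naive product of best guesses would predict many civilizations.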

In economics:

"Are Ideas Getting Harder to Find?" (2017) by Bloom et al.

"In many growth models, economic growth arises from people creating ideas, and the long-run growth rate is the product of two terms: the effective number of researchers and their research productivity. We present a wide range of evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore's Law. The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s. Across a broad range of case studies at various levels of (dis)aggregation, we find that ideas — and in particular the exponential growth they imply — are getting harder and harder to find. Exponential growth results from the large increases in research effort that offset its declining productivity."

In CS:

"Deep Learning: A Critical Appraisal" (2018) by Gary Marcus

"Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of Imagenet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence."