AMA: Jason Crawford, The Roots of Progress

post by jasoncrawford · 2020-12-03T16:49:28.075Z · EA · GW · 80 comments

I write The Roots of Progress, a blog about the history of technology and the philosophy of progress. Some of my top posts:

I am also the creator of Progress Studies for Young Scholars, an online learning program for high schoolers; and a part-time adviser and technical consultant to Our World in Data, an Oxford-based non-profit for research and data on global development.

My work is funded by grants from Emergent Ventures, Open Philanthropy, the Long-Term Future Fund [EA · GW], and Jaan Tallinn (via the Survival and Flourishing Fund).

Previously, I spent 18 years as a software engineer, engineering manager, and startup founder.

Ask me anything!

UPDATE: I'm pausing for now, but I'll come back and try to get to everyone. Thanks for all the questions!

80 comments

Comments sorted by top scores.

comment by Benjamin_Todd · 2020-12-04T12:37:48.229Z · EA(p) · GW(p)

Hi Jason,

I think your blog and work is great, and I'm keen to see what comes out of Progress Studies.

I wanted to ask a question, and also to comment on your response to another question. I think the following claim has been incorrect since about 2017:

My perception of EA is that a lot of it is focused on saving lives and relieving suffering.

More figures here.

The following is more accurate:

I don't see as much focus on general economic growth and scientific and technological progress.

(Though even then, Open Philanthropy has allocated $100m+ to scientific research, which would make it a significant fraction of the portfolio. They've also funded several areas of US policy research aimed at growth.)

However, the reason for less emphasis on economic growth is that the community members who are not focused on global health are mostly focused on longtermism, and have argued it's not the top priority from that perspective. I'm going to try to give a (rather direct) summary of why, and would be interested in your response.

Those focused on longtermism have argued that influencing the trajectory of civilization is far higher value than speeding up progress (e.g. one example of that argument here [? · GW].)

Indeed, if you're concerned about existential risk from technology, it becomes unclear if faster progress in the short-term is even positive at all – though my guess is that it is [EA · GW].

In addition, longtermists have also argued that long-term trajectory-shaping efforts – which include reducing existential risk but are not limited to that – tend to be far more neglected than efforts to speed-up economic growth.

This is partly because there are stronger theoretical reasons to expect them to be market failures, but it also follows from empirical observation: e.g. the fields of AI safety and reducing catastrophic biorisks both receive well under $100m of funding per year, and issues around existential risk receive little attention in policy. In contrast, the world spends more than $1 trillion per year on R&D, and boosting economic growth is perhaps the main priority of governments worldwide.

I'd argue that the expected value of marginal work on an issue is proportional to its importance and neglectedness, and so these factors would suggest work on trajectory changes could be several orders of magnitude more effective.
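
This proportionality claim can be made concrete with a toy back-of-the-envelope calculation, using the spending figures quoted above. This is my own sketch, not Benjamin Todd's model: it assumes logarithmic returns, so that the value of a marginal dollar is roughly importance divided by current spending, and the importance weights are purely hypothetical placeholders.

```python
# Toy illustration of the importance x neglectedness argument.
# Assumption: logarithmic returns, so marginal value ~ importance / spending.
# Spending figures are the rough ones quoted in the comment; importance
# weights are hypothetical placeholders (here set equal for both causes).

causes = {
    # name: (importance weight, current annual spending in $)
    "existential risk reduction": (1.0, 100e6),  # "well under $100m" per year
    "boosting economic growth":   (1.0, 1e12),   # "$1 trillion plus" on R&D
}

for name, (importance, spending) in causes.items():
    marginal_value = importance / spending
    print(f"{name}: {marginal_value:.2e} per marginal dollar")

# Even with equal importance weights, the spending ratio alone
# (1e12 / 1e8 = 10,000x) yields roughly four orders of magnitude
# difference in marginal value, matching the "several orders of
# magnitude" claim, conditional on these assumptions.
```

Of course, the conclusion is only as good as the assumed importance weights and the log-returns assumption; the point is just that neglectedness differences of this size dominate unless importance differs by a comparably large factor.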

I agree Progress Studies itself is far more neglected than general work to boost economic growth. I expect that work on Progress Studies is very high-impact by ordinary standards, and I'd be happy if some more EAs worked on it, but I'd still expect marginal resources put towards research in topics like existential risk or longtermist global priorities research to be far more effective per dollar / per person.

I've never seen a proponent of boosting economic growth or Progress Studies clearly give their response to these points (though I have several of my own ideas). We tried discussing it with Tyler Cowen, but my impression of that interview was that he basically conceded that existential risk is the greater priority, defending economic growth mainly because it's something the average person is better able / more likely to contribute to.

So my question would be: why should a longtermist EA work on boosting economic growth?

comment by richard_ngo · 2020-12-10T14:25:32.336Z · EA(p) · GW(p)

Not a direct response to your question, but I do think progress studies is very complementary to longtermism. In particular, it seems to me that longtermists are often much more interested in big ethical ideas rather than big empirical ideas. Yet, if anything, the latter are more important.

So I expect that most of the high-level research in progress studies (e.g. about the industrial revolution, or about principles for institutional reform) will be useful in informing longtermists' empirical ideas about the future.

This will be less true for research into specific interventions.

comment by So-Low Growth · 2020-12-04T13:44:20.184Z · EA(p) · GW(p)

What a great question, Benjamin! "Why should a longtermist EA work on boosting economic growth?" is something I have been thinking about myself (my username gives it away...).

One quick comment on this "I agree Progress Studies itself is far more neglected than general work to boost economic growth"

This spurs a question for me. How is Progress Studies different from people working on Economic Growth? 

comment by Benjamin_Todd · 2020-12-04T19:19:02.247Z · EA(p) · GW(p)

One quick addition is that I see Progress Studies as innovation into how to do innovation, so it's a double market failure :)

comment by jasoncrawford · 2020-12-04T23:52:03.518Z · EA(p) · GW(p)

I will answer this, but there's a lot to read here, so I will come back to it later—thanks!

comment by jasoncrawford · 2020-12-23T22:05:46.393Z · EA(p) · GW(p)

I haven't forgotten this, but my response has turned into an entire essay. I think I'll do it as a separate post, and link it here. Thanks!

comment by MichaelPlant · 2020-12-04T12:09:23.456Z · EA(p) · GW(p)

Are you aware of the research on the questionable, and perhaps non-existent, relationship between economic growth and measures of subjective well-being (e.g. life satisfaction and happiness) over the long run, aka the Easterlin Paradox? I assume you are if you work with Our World in Data. If so, does this worry you about 'progress' as I think(?) you're understanding it? If not, why not?

I suppose I'm pretty sceptical that (further) technological progress will do that much to improve our quality of life. There is this related, not-so-well-known worry that rising rates of mental illness are because of, not despite, modern living: we now live in ways quite far from our environment of evolutionary adaptation. I recognise my scepticism here is counterintuitive, but I think it's the most plausible reading of the well-being data. I could say a bit more about this and plan to write up my thoughts some time.

I run the Happier Lives Institute and have been itching to talk to advocates of progress studies about this concern for some time. 

comment by jasoncrawford · 2020-12-04T21:10:51.636Z · EA(p) · GW(p)

In brief, I think: (1) subjective measures of well-being don't tell us the full story about whether progress is real, and (2) the measures we have are actually inconsistent, with some showing positive benefits of progress, others flat, and a few slightly negative (though nothing at epidemic levels).

To elaborate, on the second point first:

The Easterlin Paradox, to my understanding, dissolved over time with more and better data. Steven Pinker addresses this pretty well in Enlightenment Now, which I reviewed here: https://rootsofprogress.org/enlightenment-now

Our World in Data has a section on this, showing that happiness is correlated with income both within and between countries, and over time: https://ourworldindata.org/happiness-and-life-satisfaction#the-link-between-happiness-and-income

Regarding rates of mental illness, the data don't show a consistent increasing trend, and certainly nothing like the “epidemic” we sometimes hear about.

But to return to the first point, I think we have to be careful in using metrics like self-reported life satisfaction to evaluate progress.

Emotional responses tend to be short-term and relative. They report a derivative, not an integral. That does not, however, mean that the derivative is all that matters! Rather, it means that our emotions don't tell us about everything that matters.

In the last few hundred years, we have eradicated smallpox, given women the ability to control their reproduction and choose their careers, liberated most of humanity from back-breaking physical labor and 80+ hour work weeks, opened the world to travel and cultural exchange, and made the combined knowledge, art, and philosophy of the world available to almost everyone. (And that's just a small sample of the highlights.)

I think these things are self-evidently good. If a subjective measure of well-being doesn't report that people are happier when they aren't sentenced to hard labor on a farm, when they aren't trapped within a few miles of their village, when they and their families don't starve from famine caused by drought, and when their children don't die before the age of five from infectious disease… then all that proves is that people have forgotten what those things are like and don't know how good they have it.

comment by MichaelPlant · 2020-12-08T11:27:37.715Z · EA(p) · GW(p)

Hello. Thanks for engaging!

First, there are a few different versions of the Easterlin paradox. The most relevant one, for this discussion, is whether economic growth over the long-term (i.e. 10+ years for economists - longer than the business cycle) increases subjective well-being. This version of the paradox holds in quite a few developed nations (see linked paper). That leaves it open what we might find for developing nations.

Second, the only paper I know of that looks globally at SWB over time is Neve et al. (2018). Those authors use affect data from the Gallup World Poll and find:

The level of (log) per capita GDP is not significantly related to the day-to-day emotional experience of individuals within countries over time. However, emotional well-being is significantly related to macroeconomic movements over the business cycle.

This indicates we should not expect further global growth to increase happiness. At least, there's a case to answer.

Third, the OWID point about flat rates of MH is interesting. I'd not seen that and I'll see if I can find out more. 

Fourth, you make this hypothetical point along the lines of "if SWB data told us this, we should disbelieve it", and then you sort of assume it does show us that. But it doesn't. If you look at the causes and correlates of SWB, they tell a pretty intuitive story, for the most part: higher SWB (measured as happiness or life satisfaction) is associated with greater health and wealth, being in a relationship, lower crime, lower suicide rates, less air pollution, etc. The only result that's puzzling is the Easterlin paradox. But if you think SWB measures get the 'wrong' result with Easterlin, that implies the measures aren't valid, e.g. life satisfaction measures don't actually measure life satisfaction. But then you need to explain how they get the 'right' answers basically everywhere else.

What's more, the Easterlin Paradox isn't that surprising when you try to explain it, e.g. the effect of income on SWB is mostly relative.

comment by jasoncrawford · 2020-12-04T21:46:22.194Z · EA(p) · GW(p)

I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn't make us happier on its own: it also requires good choices and a healthy psychology.

Technology isn't inherently good or bad, it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I'm not sure we're getting better at making choices as fast as our choices are expanding.

The society-level version of this is that technology can be used for evil at a society level too, for instance, when it enables authoritarian governments or destructive wars. And just as at the individual level, I'm not sure our “moral technology” is advancing at the same rate as our physical technology.

So, I do see problems here. I just don't think that technology is the problem! Technology is good and we need more of it. But we also need to improve our psychological, social, and moral “technology”.

More in this dialogue: https://pairagraph.com/dialogue/354c72095d2f42dab92bf42726d785ff 

comment by jwithing · 2020-12-03T20:57:10.312Z · EA(p) · GW(p)

Along what dimensions, if any, have we not progressed or even regressed in the past 100 years?

comment by jasoncrawford · 2020-12-04T21:19:19.856Z · EA(p) · GW(p)

Off the top of my head:

  • Maximum life expectancy. We've pushed up life expectancy at birth enormously, and life expectancy at all ages has increased somewhat. But 80–90 years is still “old” and we haven't cured aging itself.
  • Art? I haven't looked into it much, but I don't really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I'd also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war territory, so I'm just going to throw it out there as a wild-ass hypothesis for someone to follow up on at some point.
  • Education? High school graduation rates are up, and world literacy rates are up, but I'm not really sure about overall educational achievement?
  • Health care price/affordability: medicine itself has advanced tremendously, but the pricing on basic services is all out of whack and the way we pay for them is a tangled mess.
  • Housing affordability, maybe? I'm not sure.

If you said 50 years instead of 100, there's a longer and more obvious list. There really hasn't been any major breakthrough in manufacturing, agriculture, energy, or transportation in that time, and some things (like passenger flight speeds and airport convenience) have clearly regressed.

comment by Erich_Grunewald · 2020-12-30T20:32:15.490Z · EA(p) · GW(p)

Art? I haven't looked into it much, but I don't really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I'd also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war territory, so I'm just going to throw it out there as a wild-ass hypothesis for someone to follow up on at some point.

 

I'm a little bit late to the party here, but there are examples of improvements in sculpture technology/technique/style leading to new (& very beautiful) works of art, see e.g. Barry X Ball's works made with a combination of 3d-scanning, CAD software, CNC mills & traditional techniques. Not to mention he has a wide variety of stone available to him thanks to the global trade system.

As for music, I guess that totally depends on what you're comparing. The proper comparison for today's popular music isn't Beethoven or Bach but folk music & perhaps music for drawing rooms & salons, which, although they had their own beauties, were nowhere near as complex & intricate as the traditional European art music that is most listened to today. Of the past, only the best survives, but in the present the good & the bad coexist. That said, I think maybe there's a kernel of truth in what you suggest. But we shouldn't trust our intuitive judgment on this.

comment by evelynciara · 2020-12-08T15:02:13.662Z · EA(p) · GW(p)

Housing affordability: There are new construction technologies on the horizon, such as modular construction and mass timber; mass timber is being incorporated into new versions of the International Building Code, so it's gradually being normalized. However, my colleagues in the YIMBY movement tell me that zoning laws limit competition among construction companies, which discourages them from investing in these innovations. (Also, construction unions seem to hate modular construction.)

What makes you think there haven't been major breakthroughs in energy technology? As I understand it, there has been significant progress in making renewable energy cheap.

comment by jasoncrawford · 2020-12-16T03:33:52.349Z · EA(p) · GW(p)

I'll have to read more about progress in “renewables” to decide how big a breakthrough that is, but at best it would have to be counted, like genetics, as a potential future revolution, not one that's already here. We still get most of our energy from fossil fuels.

comment by HaukeHillebrandt · 2020-12-03T17:25:52.341Z · EA(p) · GW(p)

Are ideas getting harder to find?

comment by jasoncrawford · 2020-12-03T19:41:29.816Z · EA(p) · GW(p)

I think ideas get progressively harder to find within any given field as it matures. However, when we create new fields or find new breakthrough technologies, it opens up whole new orchards of low-hanging fruit.

When the Web was created, there were lots of new ideas that were easy to find: “put X on the web” for many values of X. After penicillin was invented, there was a similar golden age of antibiotics: “check out X mold or Y soil sample and check it for effectiveness against Z disease”. At times like this you see very rapid progress in certain applications.

Similarly, imagine if we got atomically precise manufacturing (APM). There would be a whole set of easy-to-find ideas: “manufacture X using APM.” Or if we got an easy way to understand and manipulate genes, there would be a set of easy-to-find ideas of the form “edit X gene to cure Y disease or enhance Z trait.”

I think the Great Stagnation is not a failure to extract all the value from existing fields, but rather a failure to open up new fields, to have new breakthroughs decades ago.

Further reading: https://rootsofprogress.org/teasing-apart-the-s-curves

Also: https://rootsofprogress.org/where-is-my-flying-car 

comment by Jack · 2020-12-05T15:00:15.132Z · EA(p) · GW(p)

Thank you for these interesting answers. Do you think the creation of new fields is also subject to diminishing returns? e.g. are new fields harder to find as well? Or do you think that only technologies are subject to diminishing returns? 

On this note, do you think progress is likely to be open to us indefinitely, or would you expect that eventually we will reach a level of technological maturity where all meaningful low-hanging fruit (be they individual technologies or S curves) have been picked and there is little further technological progress? If so, why? If not, why not?

comment by jasoncrawford · 2020-12-06T00:56:35.975Z · EA(p) · GW(p)

“Are new fields getting harder to find?” I think this is the trillion-dollar question! I don't have an answer yet though.

Is progress open indefinitely? I think there is probably at least a theoretic end to progress, but it's so unimaginably far away that for our purposes today we should consider progress as potentially infinite. There are still an enormous number of things to learn and invent.

comment by So-Low Growth · 2020-12-06T16:11:30.702Z · EA(p) · GW(p)

Quick thought here Jack and Jason (caveat - haven't thought about this much at all!). 

Yes, the creation of new fields is important. However, even if there are diminishing returns to new fields (sidenote - I've been thinking about ways to try and measure this empirically), what's more important is the applicability of the new field to existing fields. 

For example, even if we create only one new field, that field could be incredibly powerful. If it is something like APM (atomically precise manufacturing) or an AGI of some sort, it will have major ramifications on progress across all fields.

However, if we create a lot of insignificant new fields, then even if we create hundreds of them, progress won't be substantially improved across other domains.

I guess what I'm trying to say is the emphasis is not just on new fields per se. 

comment by So-Low Growth · 2020-12-03T23:28:58.576Z · EA(p) · GW(p)

Thanks for doing this Jason. I agree with your response here. Seems natural to think that there are diminishing marginal returns to ideas within a sector.

You mention APM, which would spur progress in other sectors.  Are there ways to identify which sectors open up progress in other domains, i.e. identifying the ideas that could remove the constraining factors of progress (small and big)?

comment by jasoncrawford · 2020-12-04T01:06:59.294Z · EA(p) · GW(p)

I think basically you have to look at where an innovation sits in the tech tree.

Energy technologies tend to be fundamental enablers of other sectors. J. Storrs Hall makes a good case for the need to increase per-capita energy usage, which he calls the Henry Adams Curve: https://rootsofprogress.org/where-is-my-flying-car

But also, a fundamentally new way to do manufacturing, transportation, communication, or information processing would enable a lot of downstream progress.

comment by guzey · 2020-12-04T06:25:08.010Z · EA(p) · GW(p)

I agree with Jason about the S-curves and the importance of distinguishing between within-area progress and between-area progress, and he's making some really good points about ways to think about these issues in the linked posts. I also have a giant essay about this paper coming out soon, and I'm very skeptical of its findings. Let me know if you'd be interested in reading the draft.

comment by jasoncrawford · 2020-12-04T20:32:21.344Z · EA(p) · GW(p)

Oh, I should also point to the SSC response to “ideas getting harder to find”, which I thought was very good: https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/

In particular, I don't think you can measure “research productivity” as percent improvement divided by absolute research input. I understand the rationale for measuring it this way, but I think for reasons Scott points out, it's just not the right metric to use.

Another way to look at this is: one generative model for exponential growth is a thing that is growing in proportion to its size. One way this can happen is that the growing thing invests a constant portion of its resources into growth. But in that model, you expect to see the resources used for growth to be exponentially increasing. IMO this is what we see with R&D.

Another place you can see this is in the growth of a startup. Startups can often grow revenue exponentially, but they also hire exponentially. If you used a similar measure of “employee productivity” parallel to “research productivity”, then you'd say it is going down, because an increasing number of employees is needed to maintain a constant % increase in revenue.

Further, what these examples ought to make clear is that exponentially increasing inputs to create exponential growth is actually totally sustainable. So, I don't see it as a cause for alarm at all, but rather (as Scott says) the natural order of things.
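
The generative model described above can be made concrete with a minimal simulation (my own sketch; the specific numbers are arbitrary): a constant fraction of output is reinvested into growth, so percent growth stays constant, growth inputs rise exponentially alongside output, and the "research productivity" metric (percent growth divided by absolute input) falls every period even though nothing is going wrong.

```python
# Minimal sketch of the generative model: output grows in proportion to
# the resources invested in growth, and a constant *fraction* of output
# is reinvested. All numbers here are arbitrary illustrations.

output = 100.0
reinvest_fraction = 0.2   # constant share of output devoted to growth
efficiency = 0.5          # growth produced per unit of growth input

history = []
for year in range(10):
    growth_input = reinvest_fraction * output   # rises exponentially too
    growth = efficiency * growth_input
    pct_growth = growth / output                # constant: 0.2 * 0.5 = 10%
    # the contested metric: percent improvement / absolute research input
    productivity = pct_growth / growth_input    # falls every year
    history.append((output, growth_input, pct_growth, productivity))
    output += growth

# Percent growth stays at 10%/year throughout, yet measured
# "productivity" declines, precisely because inputs grow
# exponentially alongside output.
```

The startup analogy maps onto the same code: read `output` as revenue and `growth_input` as headcount, and "employee productivity" declines by construction even as the company grows healthily.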

comment by So-Low Growth · 2020-12-04T14:03:39.871Z · EA(p) · GW(p)

Alexey, I'm also skeptical of the findings but haven't had time to dig deeper yet, so it's just hunches at the moment. I have already asked you for the draft :). Honestly, can't wait to read it since you announced it last week! 

comment by So-Low Growth · 2020-12-03T23:39:01.477Z · EA(p) · GW(p)

What do you think EA could learn from the 'Progress Studies' movement ?

comment by jasoncrawford · 2020-12-04T01:04:41.487Z · EA(p) · GW(p)

My perception of EA is that a lot of it is focused on saving lives and relieving suffering. I don't see as much focus on general economic growth and scientific and technological progress.

There are two things to consider here. First, there is value in positives above and beyond merely living without suffering. Entertainment, travel, personal fitness and beauty, luxury—all of these are worth pursuing. Second, over the long run, more lives have been saved and more suffering relieved by efforts to pursue general growth and progress than by direct charitable efforts. So we should consider the balance between the two.

To EA's credit, I think the community does understand this much better than other proponents of altruism and charity! And some EA organizations put resources into long-term scientific progress, which is great.

One thing I'm puzzled by is why there doesn't seem to be a strong focus within EA on institutional reform (or not as strong as I would expect). A root-cause analysis on most human suffering, if it went deep enough, would blame government and cultures that don't foster science, invention, industry, and business. It seems that the most high-leverage long-term plan to reduce human suffering would be to spread global rationality and capitalism.

comment by Ozzie Gooen (oagr) · 2020-12-04T16:43:48.432Z · EA(p) · GW(p)

As discussed in other comments, it seems that progress studies focuses mostly on economic and scientific progress, and these seem to come with risks as well as rewards. At the same time, particular aspects of progress seem safer: the progress of epistemics or morality, for example. Toby Ord wrote about the Long Reflection as a method of making a lot of very specific progress before focusing on other kinds. These things are more difficult to study but might be more valuable.

So my question is: have you spent much time considering epistemic and moral progress (and other abstract but safe aspects) as things to study? Do you have any thoughts on their viability?

(I've written a bit more here [LW · GW], but it's still relatively short). 

comment by jasoncrawford · 2020-12-04T23:45:10.748Z · EA(p) · GW(p)

Re my own focus:

The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!

One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree. We've probably come up with many good ideas already, but they haven't gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.

Before I got into what would come to be called “progress studies”, I spent a quarter-century discussing and debating philosophic ideas with many different people, who had many different viewpoints. One thing that became clear to me was that, not only do people not agree on how to solve our problems, they don't even agree on what the problems are. A left-wing environmentalist focuses on climate change, while a right-wing deficit hawk focuses on the national debt. Each thinks that even the problem the other one is so worried about is overblown, while their own problem is neglected. So of course they call for different policies.

I realized that a lot of the issues I care about, and the problems underlying them, were founded on my keen appreciation for the story of human progress: how bad living standards used to be and how much they've improved.

And, further, I thought that studying the history of progress—not just material, but epistemic and moral too, actually—would be the best way to empirically ground any claims about how to make the world better.

I started by studying material progress because (1) it happened to be what I was most interested in and (2) it's the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress. Science obviously supports technology. Freedom of thought and expression is needed for science. Economic freedom is needed for material progress. Technological progress provides the surplus that is needed to fund science, and invents the instruments that science needs too. Economic progress provides the means for a free society to defend itself militarily, and ultimately justifies and validates that society. So I don't think they can be separated.

Long-term, I'd like to study moral and epistemic progress. I'd love to do a history of science, for instance. On moral progress, I'd love to read (or write!) about how we ended practices like slavery, dueling, and trial by ordeal; how we developed concepts like rule of law and individual rights; how we moved from tribalism to universalism and recognized the humanity of all races and sexes. Some of this is covered very well in Pinker's recent books (Better Angels and Enlightenment Now) but more could be done.

Re the Long Reflection:

I haven't read Ord's take on this, but the concept as you describe it strikes me as not quite right. For one, to pause on material progress would come at a terrible cost: all of the lives we could be saving and extending, all the people we could be lifting out of poverty, all of the things we can't even anticipate that would come from more wealth, technology and infrastructure.

For another, it seems to imply a very high degree of being able to anticipate and predict the future, which I think we just don't have. I think David Deutsch captures this better than I can; from The Beginning of Infinity (pp 202–204):

… a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.…

As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?

When you look back at the history of progress, one theme is that it's generally impossible to anticipate where progress will come from or what an advance will lead to. Who could have anticipated that studying electromagnetic radiation would give us ways to communicate long-distance, or to do non-invasive imaging inside the human body?

So to say, “let's not do these risky things, let's only do these safe things”, presumes that (a) we know what risks we are subject to and (b) we know what activities will lead towards or away from them, and towards or away from solutions. But I just don't think we can predict those things, not at the level that a Long Reflection would imply.

If we had paused for Reflection in 2010, instead of founding Moderna and BioNTech to pursue mRNA vaccine technology, where would we be today vs. covid?

In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.

comment by Ozzie Gooen (oagr) · 2020-12-09T21:36:25.520Z · EA(p) · GW(p)

Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.

One challenge with epistemic, moral, and (I’ll throw in) political ideas is that we’ve literally been debating them for 2,500 years and we still don’t agree.

From my perspective, while we don’t agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The Ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who agree with them, like myself).

I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it, and it seems more clearly net positive. Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but in terms of number of people, resources, and “get it done” abilities, regular technical progress has been strongly favored. This means we may have less leverage, but the neglectedness could also mean there are some really nice returns to highly competent efforts.

The second thing I’d flag is that it’s possible that advances in the Internet and AI could make progress in these areas much more tractable in the next 10 to 100 years.

I started by studying material progress because (1) it happened to be what I was most interested in and (2) it’s the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress.

I think I mostly agree with you here, though I myself am less interested in technical progress. I agree that they can’t be separated. This is all the more reason I would encourage you to emphasize it in future work of yours :-). I imagine any good study of epistemic and moral progress would include studies of technology for the reasons you mention. I’m not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.

I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right.

My take was written quickly, and I think your impression is very different from his take. In The Precipice, Toby Ord recommends that the Long Reflection happen as one of three phases, the first being “Reaching Existential Security”. This would involve setting things up so that humanity has a very low chance of existential risk per year. It’s hard for me to imagine what this would look like. There’s not much written about it in the book. I imagine it would look very different from what we have now and would probably take a fair amount more technological maturity. Having setups to ensure protections against existentially serious biohazards would be a precondition. There is obviously some trade-off between our technological abilities to make quick progress during the reflection, and the risks and speed of getting there, but that’s probably outside the scope of this conversation.

In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.

I agree that they are massively useful, but they are also massively risky. I’m sure that a lot of the advancements we have are locally net negative; otherwise it seems odd that we could have so many big changes and still a world as challenging and messy as ours.

Some science/technology/infrastructure/surplus wealth is obviously useful for getting us to Existential Security, and some is probably harmful. It's not really clear to me that the average modern advancement is net positive at this point (this is incredibly complicated to figure out!), but it seems clear that at least some are (though we might not be able to tell which ones).

comment by monadica · 2020-12-03T21:59:19.147Z · EA(p) · GW(p)

Is there an empirical method of measuring progress? How can we account for piecewise progress? For example, VR attracted massive interest in the 80s, went into a winter in the 90s, and was revived in 2012 by Palmer Luckey; similarly, AI went into a 10-year winter due to Minsky's critique of Rosenblatt. It seems that progress is not linear but stochastic, and maybe a complex thing to model; it is not a monolith we arrive at, but something constantly happening in complex ways.

The perceptron was intended to be a hardware machine, first implemented in software. This is similar to "The Hardware Lottery"[1] by Sara Hooker, which argues that ideas in software research succeed not because they are correct but because the available hardware happened to suit them.


Secondly, what would be necessary for a hypothetical golden age to emerge: building a new city, restructuring organisations (universities, government), rebirth (a renaissance), cataclysm (covid, climate change), or simply moving slowly towards reform?

 

[1] https://hardwarelottery.github.io/

comment by guzey · 2020-12-04T06:29:22.253Z · EA(p) · GW(p)

I'm also super interested in this and would love to hear Jason's thoughts.

As Dietrich Vollrath often points out, technological progress does not necessarily lead to an increase in GDP and sometimes actually lowers it: https://growthecon.wordpress.com/2014/08/25/246/

It seems that lots of contemporary innovations are like this, and GDP becomes an ever less reliable way of tracking scientific progress.

If nobody bothered to create a better measure of scientific progress, I would like to create it or to help someone create it or to at least figure out what prevents us from creating it.

comment by jasoncrawford · 2020-12-16T03:40:55.027Z · EA(p) · GW(p)

I don't really have great thoughts on metrics, as I indicated to @monadica. Happy to chat about it sometime! It's a hard problem.

comment by jasoncrawford · 2020-12-16T03:40:27.195Z · EA(p) · GW(p)

Re measuring progress: it's hard. No single metric captures it. The one people use if they have to use something is GDP, but that has all kinds of problems. In practice, you have to look at multiple metrics, some of which are narrow but easy to measure, and some of which are broad aggregates or indices.
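[Editor's note: a minimal sketch of what a "broad aggregate or index" could look like. Everything here is purely illustrative: the metric choices, goalposts, and figures are hypothetical, not measures anyone in this thread endorses. Each raw metric is normalized to [0, 1] against chosen goalposts, then combined with a geometric mean, in the spirit of composite indices like the HDI.]

```python
# Illustrative composite progress index. All numbers are made up.

def normalize(value, lo, hi):
    """Scale a raw metric onto [0, 1] given chosen goalposts."""
    return (value - lo) / (hi - lo)

def composite_index(metrics):
    """Geometric mean of normalized metrics, so a shortfall on one
    dimension cannot be fully compensated by another."""
    product = 1.0
    for value, lo, hi in metrics:
        product *= normalize(value, lo, hi)
    return product ** (1.0 / len(metrics))

# (metric value, lower goalpost, upper goalpost); hypothetical figures
metrics = [
    (70, 20, 85),   # life expectancy, years
    (9.5, 4, 12),   # log GDP per capita
    (0.9, 0, 1),    # literacy rate
]

print(round(composite_index(metrics), 3))
```

The goalposts are where all the judgment hides, which is one reason no single index settles the question.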

Re “piecewise” progress, it's true that progress is not linear! I agree it is stochastic.

Re a golden age, I'm not sure, but see my reply to @BrianTan below re “interventions”.

comment by BrianTan · 2020-12-04T11:17:02.792Z · EA(p) · GW(p)

How would you know if The Roots of Progress and Progress Studies for Young Scholars are successful? What concrete actions would you like to see from people who take the online course or read your blog? 

Or another way to phrase the question - how would you measure or track the impact of your work on The Roots of Progress and Progress Studies for Young Scholars?

comment by jasoncrawford · 2020-12-06T01:37:22.338Z · EA(p) · GW(p)

Alan Kay suggested that progress in education should be measured in “Sistine Chapel ceilings per lifetime.” Ultimately my goal is something similar, but maybe substitute “Nobel-worthy scientific discoveries”, “Watt-level inventions” or “trillion-dollar businesses” for the artistic goal. I'll know if I'm successful if in twenty years, or fifty, people who did those things are telling me they were given inspiration and courage from my work.

The problem with Sistine Chapel ceilings is that it's a lagging metric. We all need leading metrics to steer ourselves by. So on a much shorter timescale, I look at my audience size—over 12k on Twitter now and ~2,700 on my email list. I also look at the quality of the audience and the feedback I'm getting. With Progress Studies for Young Scholars, we gave the students an end-of-program feedback survey (two-thirds rated it 9 or 10 out of 10). When I write a book, of course, I'll look at how well it sells. Etc.

Re actions I want people to take: right now I'm just happy if they listen and learn and find what I have to say interesting. And, especially for young people, I hope they will consider devoting their careers to ambitious goals that drive forward human progress.

comment by Darius_Meissner · 2020-12-04T10:46:27.933Z · EA(p) · GW(p)

What are your thoughts on the desirability and feasibility of differential technological development (DTD) as a governance strategy for emerging technologies? 

For instance, Toby Ord briefly touches on DTD in The Precipice, writing that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones."

comment by jasoncrawford · 2020-12-04T21:58:04.483Z · EA(p) · GW(p)

I don't know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.

In particular, I would say:

  • Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
  • However, technology is probably risk-creating by default. That is, when our goal is anything other than safety—more power, more speed, more efficiency, more abundance, etc.—then it might create risk as a side effect.
  • Historically, we have been reactive rather than proactive about technology risk. People die, then we do the root-cause analysis and fix it.
  • Even when we do anticipate problems, we usually don't anticipate the right ones. When X-rays were first introduced, people had a moral panic about men seeing through women's clothing on the street, but no one worried about radiation burns or cancer.
  • Even when we correctly anticipate problems, we don't necessarily heed the warnings. At the dawn of the antibiotic age, Alexander Fleming foresaw the problem of resistance, but that didn't prevent doctors from way overprescribing antibiotics for many years.
  • We need to get better at all of the above in order to continue to improve safety as we simultaneously pursue other technological goals: more proactive, more accurate at predicting risk, and more disciplined about heeding the risk. (This is obviously so for x-risk, where the reactive approach doesn't work!)
  • I see positive signs of this in how the AI and genetics communities are approaching safety in their fields. I can't say whether it's enough, too much, or just right.

Anyway, DTD seems like a much better concept than the conventional “let's slow down progress across the board, for safety's sake.” This is a fundamental error, for reasons David Deutsch describes in The Beginning of Infinity. 

But that's also where I might (I'm not sure) disagree with DTD, depending on how it's formulated. The reason to accelerate safety-creating technology is not because “it may be too difficult to prevent the development of a risky technology.” It's because most risky technologies are also extremely valuable, and we don't want to prevent them. We want them, we just want to have them safely.

comment by BrianTan · 2020-12-04T11:20:30.296Z · EA(p) · GW(p)

At what point will The Roots of Progress start advocating for certain "interventions" to keep human progress going? Are there interventions you're currently advocating for already?

comment by jasoncrawford · 2020-12-06T01:14:59.114Z · EA(p) · GW(p)

Maybe when I have some interventions I'm more sure of! (And/or if some powerful person or agency was directly asking me for input.)

Epistemically, before I can recommend interventions I need to really understand causation, and before I can explain or hypothesize causation, I need to get clear on the specific timeline of events. And in terms of personal motivation, I'm much more interested in the detailed history of progress than in arguing policy with people.

But, yes, eventually the whole point of progress studies is to figure out how to make more (and better) progress, so it should end up in some sort of intervention at some level.

If I had to recommend something now, I would at least point to a few areas of leverage:

  • Promote the idea of progress. Teach its history, in schools and universities. Promote it in art, especially more optimistic sci-fi. Journalists should become industrially literate, and it should be reflected in their stories. Celebrate major achievements. Etc.
  • Roll back over-burdensome regulation. As just one example, there's a big spotlight shining on the FDA right now and its role in delaying the covid vaccines. For another, see Eli Dourado on environmental review.
  • Decentralize funding for science & research. I fear that the dominance of the federal government (in the US at least) in research funding, and the reliance on committee-based peer review, has led to too much consensus and groupthink and not enough room for contrarians and for ideas that challenge dominant paradigms. See Donald Braben's Scientific Freedom (recently re-printed by Stripe Press).

See also my review of Where Is My Flying Car?, which I am very sympathetic with: https://rootsofprogress.org/where-is-my-flying-car 

comment by BrianTan · 2020-12-04T11:16:47.585Z · EA(p) · GW(p)

How useful is your background in being a software engineer, engineering manager, and startup founder to your work currently? Which of those roles helped you more to prepare you for what you're currently doing?

comment by jasoncrawford · 2020-12-06T01:57:50.325Z · EA(p) · GW(p)

I think being an engineer helps me dig into the technical details of the history I'm researching, and to write explanations that go deeper into that detail. Many histories of technology are very light on technical detail and don't really explain how the inventions worked. One thing that makes me unique is actually explaining how stuff works. This is probably the most important thing.

I think being a founder is helpful in understanding some business fundamentals like marketing or finance. And I am constantly drawing parallels and making comparisons between today's tech startup world and how business and invention were done in the past, or how science and research are done today.

I also think my experiences as a founder have helped me in launching The Roots of Progress. I have a sense of what kind of opportunities I'm personally interested in and have aptitude for, how to launch things and iterate on them, when something is taking off, what opportunities to pursue, how to build a social media presence, etc.

comment by BrianTan · 2020-12-04T11:16:05.560Z · EA(p) · GW(p)

Do you wish you had started The Roots of Progress earlier, and then switched full-time to it earlier as well? If so, how many years earlier?

comment by jasoncrawford · 2020-12-06T01:59:36.004Z · EA(p) · GW(p)

The Roots of Progress was really about following an opportunity at a specific moment in time, for me and for the world. Both starting the project as a hobby, when I was personally fascinated by the topic, and going full-time on it right when the “progress studies” movement was taking off. So I don't see how it could have happened any differently.

comment by Aaron Gertler (aarongertler) · 2020-12-04T07:05:28.862Z · EA(p) · GW(p)

What has been most surprising to you about running an online course for high school students?

Related: If someone were creating a course about effective altruism aimed at high school students, what advice would you have for them? So far, attempts to teach EA concepts to this audience haven't been very successful [EA · GW], but people are still interested in trying new methods [EA · GW].

comment by jasoncrawford · 2020-12-04T23:47:59.922Z · EA(p) · GW(p)

Hmm, I thought that running discussion sessions with the students might be hard, but it was quite natural! I was lucky to get a great group of students in the first cohort.

There were some gaps in their knowledge I didn't anticipate. They weren't very familiar with simple machines and mechanical advantage, with basic molecular biochemistry such as proteins and DNA, or with basic financial/accounting concepts such as fixed vs. variable cost.

Not sure what to say about an EA course, sorry!

comment by Aaron Gertler (aarongertler) · 2020-12-07T09:54:53.518Z · EA(p) · GW(p)

Thank you for the reply! Just wanted to let you know I'd seen it :-)

comment by So-Low Growth · 2020-12-04T14:08:08.156Z · EA(p) · GW(p)

Aaron, I'm really ignorant about this issue but didn't Peter Singer have a course on EA a while back that if I recall correctly was fairly accessible and could be marketed towards high school students?

comment by Louis_Dixon (bdixon) · 2020-12-07T12:44:42.974Z · EA(p) · GW(p)

R&D is a public good, so we'd expect it to be systematically underfunded by the private sector and provided at least in part by governments. Some economists, such as Mariana Mazzucato, argue that government plays a key role both in funding R&D and in applying it for public benefit. Lant Pritchett argues that development comes through interlocking transformations, including the build-up of state capability.

But in your comments below, and from having read through your blog, it seems like you're not such a fan of government or even alliances between the public and private sectors. 

A root-cause analysis on most human suffering, if it went deep enough, would blame government and cultures that don't foster science, invention, industry, and business. It seems that the most high-leverage long-term plan to reduce human suffering would be to spread global rationality and capitalism. 

Do you think governments have a role to play in improving human progress? And if not, why not?

comment by jasoncrawford · 2020-12-16T04:45:39.750Z · EA(p) · GW(p)

Let me say up front that there is a divergence here between my ideological biases/priors and what I think I can prove or demonstrate objectively. I usually try to stick to the latter because I think that's more useful to everyone, but since you asked I need to get into the former.

Does government have a role to play? Well, taking that literally, then absolutely, yes. If nothing else, I think it's clear that government creates certain conditions of political stability, and provides legal infrastructure such as corporate and contract law, property law including IP, and the court system. All of those are necessary for progress.

(And when I mentioned “root-cause analysis on most human suffering” above, I was mostly thinking about dysfunctional governments in the poorest countries that are totally corrupt and/or can't even maintain law and order.)

I also think government, especially the military, has at least a sort of incidental role to play as a customer of technology. The longitude problem was funded in part by the British navy. The technique of canning was invented when Napoleon offered a prize for a way to preserve food for his military on long foreign campaigns. The US military was one of the first customers of integrated circuits. Etc.

And of course the military has reasons to do at least some R&D in-house, too.

But I think what you're really asking about is whether civilian government should fund progress, or promote it through “policy”, or otherwise be actively involved in directing it.

All I can say for sure here is: I don't know. So here's where we get into my priors, which are pretty much laissez-faire. That makes me generally unfavorable towards government subsidy or intervention. But again, this is what I don't think I have a real answer on yet. In fact, a big part of the motivation for starting The Roots of Progress was to challenge myself on these issues and to try to build up a stronger evidentiary base to draw conclusions.

For now let me just suggest:

  • I think that all government subsidies are morally problematic, since taxpayers are non-consenting
  • I don't (yet?) see what government subsidies can accomplish that can't (in theory) be accomplished non-coercively
  • I worry that even when government attempts to advance progress, it may end up slowing it down—for example, the dominance of NIH/NSF in science funding combined with their committee-based peer-review system is often suggested as a factor slowing down scientific progress
  • In general I think that progress is better made in decentralized fashion, and government solutions tend to be centralized
  • I also think that progress is helped by accountability mechanisms, and government tends to lack these mechanisms or have weaker ones

That said, here are a few things that give me pause.

  • Government-backed R&D, even for not-directly-military purposes, has had some significant wins, such as DARPA kicking off the Internet.
  • Some major projects have only gotten done with government support, such as the US transcontinental railroad in the 1860s. This happened in the most laissez-faire country in the world, at a time when it was way more laissez-faire than it is now, so… if that needed government support, maybe there was a reason. (I don't know yet.)
  • Economic strength is closely related to national security, which entangles the government and the economy in ways I haven't fully worked out yet. E.g., I'm not sure the best way for government to ensure that we have strategic commodities such as oil, steel, and food in wartime.

Anyway, this is all stuff I continue to think deeply about and hope to have more to say about later. And at some point I would like to deeply engage with Mazzucato's work and other similar work so that I can have a more informed opinion.

comment by Mathieu Putz · 2020-12-04T11:36:52.841Z · EA(p) · GW(p)

Thanks for your work and thanks for doing this!

In your interview with Patrick Collison, he says the following: 

"I think of EA as sort of like a metal detector, and they've invented a new kind of metal detector that's really good at detecting some metals that other detectors are not very good at detecting. But I actually think we need some diversity in the different metallic substances which our detectors are attuned to, and for me EA would not be the only one"

Discussion on the EA forum here [EA · GW], link to the interview here.

First, do you broadly agree with that framework?

Second, given that you likely think that progress studies is one of the most important things to work on, do you think it should worry us that the EA detector did not on its own pick up on progress studies as an opportunity to do good, before it became a more mainstream view? Why didn't EAs launch this field years ago? Why isn't it one of the main EA cause areas? Does this hint at a way our detector may be broken? (Note that personally I am agnostic for now as to whether this should be a main EA cause area.)

Third, how can we tune the EA metal detector to be more effective at finding new niches where there's room to do good effectively? I think Patrick is probably right that the EA detector isn't good enough to pick up on everything you would want it to. But unlike other detectors, we do have the explicit goal of finding all the most important things to do at the margin. So how can we get closer to that goal?

comment by jasoncrawford · 2020-12-16T05:30:00.712Z · EA(p) · GW(p)

I am broadly sympathetic to Patrick's way of looking at this, yes.

If progress studies feels like a miss on EA's part to you… I think folks within EA, especially those who have been well within it for a long time, are better placed to analyze why/how that happened. Maybe rather than give an answer, let me suggest some hypotheses that might be fruitful to explore:

  • A focus on saving lives and relieving suffering, with these seen as more moral or important than comfort, entertainment, enjoyment, or luxury; or economic growth; or the advance of knowledge?
  • A data-driven focus that naturally leads to more short-term, measurable impact? (Vs., say, a more historical and philosophical focus?)
  • A concern about existential risk from technology and progress?
  • Some other tendency to see technology, capitalism, and economic growth as less important, less moral, or otherwise lower-status?
  • An assumption that these things are already popular and well-served by market mechanisms and therefore not-neglected?

As for “tuning the metal detector”, I think a root-cause analysis on progress studies or any other area you feel you “missed” would be the best way to approach it!

Well, one final thought: the question of “how to do the most good” is deep and challenging enough that you can't answer it with anything less than an entire philosophy. I suspect that EA is significantly influenced by a certain philosophical orientation, and that orientation is fundamentally altruistic. Progress isn't really altruistic, at least not to my mind. Altruism is about giving, whereas progress is about creating. They're not unrelated, but they're different orientations.

But I could be wrong here, and @Benjamin_Todd, above, has given me a whole bunch of stuff to read to challenge my understanding of EA, so I should go digest that before speculating any more.

comment by Darius_Meissner · 2020-12-04T10:36:16.021Z · EA(p) · GW(p)

What are your long-term goals for The Roots of Progress? Are you pleased with how far you have come so far (e.g. quantity and quality of content produced, page-view or subscriber numbers)?

comment by jasoncrawford · 2020-12-07T06:25:53.520Z · EA(p) · GW(p)

See my reply to @BrianTan on a similar question, thanks!

comment by Aaron Gertler (aarongertler) · 2020-12-04T07:00:50.393Z · EA(p) · GW(p)

Aside from the online course in Progress Studies, what are some of the best resources you could share with a high school or college student if you want them to be interested in progress?

Traditional high school/college curriculums often introduce ideas that seem likely to make people less excited about progress (e.g. degrowth as a moral imperative, population growth as net-negative, discussions of technology risk without corresponding discussions of technology's benefits). I'm interested in resources that could provide a counterpoint to this.

comment by jasoncrawford · 2020-12-07T06:28:37.964Z · EA(p) · GW(p)

There isn't a lot out there. In addition to my own work, I would suggest Steven Pinker's Enlightenment Now and perhaps David Deutsch's The Beginning of Infinity. Those are some of the best sources on the philosophy of progress. Also Ayn Rand's Atlas Shrugged, which is the only novel I know of that portrays science, engineering and business as a noble quest for the betterment of humanity.

comment by Aaron Gertler (aarongertler) · 2020-12-07T09:58:47.922Z · EA(p) · GW(p)

Thank you for the reply!

comment by astupple · 2020-12-04T01:08:23.711Z · EA(p) · GW(p)

How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, ie low hanging fruit, are out there?

Seems as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands.

comment by jasoncrawford · 2020-12-04T01:13:58.204Z · EA(p) · GW(p)

I think there are a couple things with the bicycle. One is that it depended on materials and manufacturing techniques much more than is obvious (and more than I even brought out in that post): bearings, hollow metal tubes, gears and chains, rubber, etc.

The other is that it's really just the overall story of progress: in a sense there was lots of low-hanging fruit for thousands of years before the Industrial Revolution.

But if you want to understand progress now, 300 years in, when the markets are much more efficient, so to speak, the analysis is different. Now there are lots of fruit-pickers everywhere looking for fruit. So there's less obvious stuff lying around. Which is why we need to open up new technical fields, to discover whole new orchards of fruit (some of which will be low-hanging).

comment by astupple · 2020-12-04T02:41:14.498Z · EA(p) · GW(p)

Yes, but what I’m getting at is: how do we know there’s a limited number of low-hanging fruit? Or, as we make progress, don’t previously high-hanging fruit come into reach? AND, more progress opens more markets/fields.

It seems to me low-hanging fruit is a bad analogy, because there’s no way to know the number of undiscovered fruit out there. Perhaps it’s infinite. Or it INCREASES the more we figure out.

My two cents: stagnation isn’t due to the supply of good ideas waiting to be discovered; it’s the stifling of free and open exploration by our norms, which promote the institutionalization of discovery.

comment by jasoncrawford · 2020-12-07T06:31:32.216Z · EA(p) · GW(p)

Maybe there's just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.

comment by Ben_West · 2020-12-07T20:24:26.571Z · EA(p) · GW(p)

What were your goals for the Progress Studies for Young Scholars program? In particular: is there work that you are hoping (perhaps a small subset of) participants can do immediately, or were you hoping instead to lay some sort of foundation which might payoff years/decades down the line?

comment by jasoncrawford · 2020-12-07T20:58:24.910Z · EA(p) · GW(p)

Well, the participants are high school students, so for most of them the work they are doing immediately is going to university. Like all education, it is more of a long-term investment.

comment by Jakob_J · 2020-12-07T22:02:05.700Z · EA(p) · GW(p)

I would also highlight the contribution towards creating an educational platform that extends beyond the immediate participants in the course. I believe most of the talks are available on Youtube: https://www.youtube.com/channel/UCR4WNZP7Uxfe4F1XNugu5_g

A great resource!

comment by sindirella · 2020-12-06T00:58:44.145Z · EA(p) · GW(p)

Hi Jason, your blog is really interesting. I wonder if you have any medium/long term theory of change of how your work or the progress studies community (if there is such a community yet, or in the future) will have real world impact, e.g. how you or others in your community plan to engage with researchers/academics (e.g. to collaborate or build the field), policy makers, investors, scientist, technologists, entrepreneurs etc. And what some concrete changes you hope to see/affect.

(Do you just focus on research, or also aim for real-world impact? And in either case, how do you measure the success of your project?)

comment by jasoncrawford · 2020-12-16T04:53:22.220Z · EA(p) · GW(p)

I have a theory of change but not a super-detailed one. I think ideas matter and that they move the world. I think you get new ideas out there any way you can.

Right now I'm working on a book about progress. I hope this book will be read widely, but above all I'd like it to be read by the scientists, engineers and entrepreneurs who are creating, or will create, the next major breakthroughs that move humanity forward. I want to motivate them, to give them inspiration and courage. Someday, maybe in twenty years, I'd love to meet the scientist who solved human aging, or the engineer who invented atomically precise manufacturing, or the founder of a company providing nuclear power to the world, and hear that they were inspired in part by my work.

I'd also like my message to reach people in education, journalism, and the arts, and for them to help spread the philosophy of progress too, which will magnify that kind of impact.

And I'd like it to reach people involved in policy. See my answer to @BrianTan about “interventions” for more detail on what I'm thinking there.

I'd like to see the progress community doing more work on many fronts: on the history of specific areas, on frontier technologies and their possibilities, and on specific policy programs and reforms that would advance progress.

comment by Darius_Meissner · 2020-12-04T10:33:22.043Z · EA(p) · GW(p)

How do you prioritise between the various projects you are working on? What other projects, if any, do you consider working on to advance progress studies in future?

comment by jasoncrawford · 2020-12-16T05:32:08.455Z · EA(p) · GW(p)

It's hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or better, all three.

Right now I'm working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at rootsofprogress.org.

comment by gavintaylor · 2020-12-04T13:49:57.033Z · EA(p) · GW(p)

It seems like most progress to date has come from research in the natural/formal/applied sciences leading to technological advances (or correct me if I'm wrong?). Do you expect that trend to continue, or could you see a case for research in the social sciences/humanities (that lead to social advances) making a more prominent contribution to future progress?

comment by jasoncrawford · 2020-12-16T05:46:57.336Z · EA(p) · GW(p)

I think advances in science leading to technology is only the proximal cause of progress. I think the deeper causes are, in fact, philosophical (including epistemic, moral, and political causes). The Scientific Revolution, the shift from monarchy to republics, the development of free markets and enterprise, the growth of capitalism—all of these are social/political causes that underlie scientific, technological, industrial, and economic progress.

More generally, I think that progress in technology, science, and government are tightly intertwined in history and can't really be separated.

I think advances in the humanities are absolutely needed—more so in a certain sense than advances in the physical sciences, because our material technology today is far more advanced than our moral technology. I think moral and political causes are to blame for our incompetent response to covid; for high prices in housing, education, and medicine; and for lack of economic progress in poorer countries. I think better social “technology” is needed to avoid war, to reform policing, to end conspiracy theories, and to get everyone to vaccinate their children. And ultimately I think cultural and philosophical issues are at the root of the scientific/technological slowdown of the last ~50 years.

So, yeah, I think social advances were actually important in the past and will be in the future.

comment by gavintaylor · 2020-12-30T17:59:09.863Z · EA(p) · GW(p)

Thanks for the perspective, this is interesting and a useful update for me.

comment by gavintaylor · 2020-12-04T13:36:34.171Z · EA(p) · GW(p)

Many areas of science currently appear to have reproducibility problems with published research (some call it a crisis). Do you think that poor reproducibility of recent (approx. the last 30 years) scientific work has been a significant contributor to the current stagnation?

On the margin, do you think that funding is better spent on improving reproducibility (or more generally, the areas covered by Metascience) or on pursuing promising scientific research directly?

comment by jasoncrawford · 2020-12-16T05:52:46.064Z · EA(p) · GW(p)

I don't have strong opinions on the reproducibility issues. My guess is that if it has contributed to stagnation it's been more of a symptom than a cause.

As for where to spend funding, I also don't have a strong answer. My feeling is that reproducibility isn't really stopping anything, it's a tax/friction/overhead at worst? So I would tend to favor a promising science project over a reproducibility project. On the other hand, metascience feels important, and more neglected than science itself.

comment by astupple · 2020-12-04T01:07:34.808Z · EA(p) · GW(p)

How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles (i.e., low-hanging fruit) are out there?

It seems that as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.

