Posts

Help me find the crux between EA/XR and Progress Studies 2021-06-02T18:47:40.576Z
AMA: Jason Crawford, The Roots of Progress 2020-12-03T16:49:28.075Z
Study Group for Progress – 50% off for EAs 2020-09-03T05:04:15.080Z

Comments

Comment by jasoncrawford on Progress studies vs. longtermist EA: some differences · 2021-06-04T04:16:31.480Z · EA · GW

Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.

I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite.”) But once we've grown even 10^12 beyond where we are now, let alone 10^200, who knows what we'll find? Maybe we'll discover FTL travel (OK, unlikely). Maybe we'll at least be expanding out to other galaxies. Maybe we'll have seriously decoupled economic growth from physical matter: maybe value to humans lies in the combinations and arrangements of things, rather than in things themselves (bits, not atoms), and so we have many more orders of magnitude to play with.

If you're not willing to apply a moral discount factor against the far future, shouldn't we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we're willing to postpone the end of it by literally the length of human civilization so far, or longer?

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T01:19:44.884Z · EA · GW

First, PS is at this point anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.

I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T01:10:44.947Z · EA · GW

Thanks JP!

Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2021-06-02T20:21:15.633Z · EA · GW

Followup: I did write that essay ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.

I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T20:10:43.245Z · EA · GW

Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?

I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.

Re Bostrom:

a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal's Mugging I was talking about.
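
(That 10,000 comes from scaling Bostrom's figure linearly, which is my assumption here: 10,000,000 years × 0.001% ÷ 1% = 10,000 years.)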

(Also, for what it's worth, I think I'm more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper, which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T20:03:07.256Z · EA · GW

OK, so maybe there are a few potential attitudes towards progress studies:

  1. It's definitely good and we should put resources to it
  2. Eh, it's fine but not really important and I'm not interested in it
  3. It is actively harming the world by increasing x-risk, and we should stop it

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)… I think there are valuable things to do here, and I'm glad people are doing them, but I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).

Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.

(But I don't think that's all of it.)

Comment by jasoncrawford on Progress studies vs. longtermist EA: some differences · 2021-06-02T19:49:18.028Z · EA · GW

That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.

The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or at least it can be if pursued intelligently. I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T19:33:18.828Z · EA · GW

As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!

Comment by jasoncrawford on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T19:31:37.901Z · EA · GW

I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.

And so then I just want to know, OK, what's the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:

  • We should invest resources in AI safety? OK, I'm good with that. (I'm a little unclear on what we can actually do there that will help at this early stage, but that's because I haven't studied it in depth, and at this point I'm at least willing to believe that there are valuable programs there. So, thumbs up.)
  • We should raise our level of biosafety at labs around the world? Yes, absolutely. I'm in. Let's do it.
  • We should accelerate moral/social progress? Sure, we absolutely need that—how would we actually do it? See question 3 above.

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.

But maybe that's not actually the proposal from any serious EA/XR folks? I am still unclear on this.

Comment by jasoncrawford on Progress studies vs. longtermist EA: some differences · 2021-06-02T19:07:35.779Z · EA · GW

Good points.

I haven't read Ord's book (although I read the SSC review, so I have the high-level summary). Let's assume Ord is right and we have a 1/6 chance of extinction this century.

My “1e-6” was not an extinction risk; it was a delta between two choices that are actually open to us. There are no zero-risk paths, only one set of risks vs. a different set.

So:

  • What path, or set of choices, would reduce that 1/6 risk?
  • What would be the cost of that path, vs. the path that progress studies is charting?
  • How certain are we about those two estimates? (Or even the sign of those estimates?)

My view on these questions is very far from settled, but I'm generally aligned with all of the points of the form “X seems very dangerous!” Where I get lost is when the conclusion becomes, “therefore, let's not accelerate progress.” (Or is that even the conclusion? I'm still not clear. Ord's “long reflection” certainly seems like that.)

I am all for specific safety measures. Better biosecurity in labs—great. AI safety? I'm a little unclear how we can create safety mechanisms for a thing that we haven't exactly invented yet, but hey, if anyone has good ideas for how to do it, let's go for it. Maybe there is some theoretical framework around “value alignment” that we can create up front—wonderful.

I'm also in favor of generally educating scientists and engineers about the grave moral responsibility they have to watch out for these things and to take appropriate responsibility. (I tend to think that existential risk lies most in the actions, good or bad, of those who are actually on the frontier.)

But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.

I wrote up some more detailed questions on the crux here and would appreciate your input: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

Comment by jasoncrawford on Progress studies vs. longtermist EA: some differences · 2021-06-02T18:49:31.449Z · EA · GW

As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.

From the PS side, I wrote up some thoughts about the difference and some things I don't quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

Comment by jasoncrawford on Progress studies vs. longtermist EA: some differences · 2021-06-02T18:16:22.252Z · EA · GW

As someone who is more on the PS side than the EA side, this does not quite resonate with me.

I am still thinking this issue through and don't have a settled view. But here are a few scattered reactions to this framing.

On time horizon and discount rate:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand years”? I don't understand that framing. It's not about a constant number of years of acceleration, it's about the growth rate.
  • I am more interested in actual lives than potential / not-yet-existing ones. I don't place zero value or meaning on the potential for many happy lives in the future, but I also don't like the idea that people today should suffer for the sake of theoretical people who don't actually exist (yet). This is an unresolved philosophical paradox in my mind.
    • Note, if we could cure aging, and I and everyone else had indefinite lifespans, I might change my discount rate? Not sure, but I think I would, significantly.
    • This actually points to perhaps the biggest difference between my personal philosophy (I won't speak for all of progress studies) and Effective Altruism: I am not an altruist! (My view is more of an enlightened egoism, including a sort of selfish value placed on cooperation, relationships, and even on posterity in some sense.)

On risk:

  • I'm always wary of multiplying very small numbers by very large numbers and then trying to reason about the product. So, “this thing has a 1e-6 chance of affecting 1e15 future people and therefore should be valued at 1e9” is very suspect to me. I'm not sure if that's a fair characterization of EA/XR arguments, but some of them land on me this way. (The sketch just after this list makes the worry concrete.)
  • Related, even if there are huge catastrophic and even existential risks ahead of us, I'm not convinced that we reduce them by slowing down. It may be that the best way to reduce them is to speed up—to get more knowledge, more technology, more infrastructure, and more general wealth.
    • David Deutsch has said this better than I can; see quotes I posted here and here.
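
To make the multiplication worry concrete, here is a minimal sketch (my illustrative numbers, not anyone's actual estimates): the expected-value product inherits all of the uncertainty in its smallest factor, so modest error bars on the tiny probability swing the conclusion by orders of magnitude.

```python
# Illustrative only: expected value = (tiny probability) x (huge population).
# The product is dominated by uncertainty in the tiny factor.
n_future_people = 1e15  # the very large number at stake

# If the probability is only known to within ~2 orders of magnitude,
# the "expected lives affected" swings by the same ~2 orders of magnitude.
for p in (1e-8, 1e-6, 1e-4):
    print(f"p = {p:.0e}  ->  expected lives affected = {p * n_future_people:.0e}")
```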

On DTD and moral/social progress:

  • I very much agree with the general observation that material progress has raced ahead of moral/social progress, and that this is a bad and disturbing and dangerous thing. I agree that we need to accelerate moral/social progress, and that in a sense this is more urgent than accelerating material progress.
    • I also am sympathetic in principle to the idea of differential technology development.
    • BUT—I honestly don't know very clearly what either of these would consist of, in practice. I have not engaged deeply with the EA/XR literature, but I'm at least somewhat familiar with the community and its thinking now, and I still don't really know what a practical program of action would mean or what next steps would be.
  • More broadly, I think it makes sense to get smarter about how we approach safety, and I think it's a good thing that in recent decades we are seeing researchers think about safety issues before disasters happen (e.g., in genetic engineering and AI), rather than after as has been the case for most fields in the past.
    • “Let's find safe ways to continue making progress” is maybe a message and a goal that both communities can get behind.

Sorry for the unstructured dump of thoughts, hope that is interesting at least.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-23T22:05:46.393Z · EA · GW

I haven't forgotten this, but my response has turned into an entire essay. I think I'll do it as a separate post, and link it here. Thanks!

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T05:52:46.064Z · EA · GW

I don't have strong opinions on the reproducibility issues. My guess is that if it has contributed to stagnation it's been more of a symptom than a cause.

As for where to spend funding, I also don't have a strong answer. My feeling is that reproducibility isn't really stopping anything; it's a tax/friction/overhead at worst? So I would tend to favor a promising science project over a reproducibility project. On the other hand, metascience feels important, and more neglected than science itself.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T05:46:57.336Z · EA · GW

I think advances in science leading to technology is only the proximal cause of progress. I think the deeper causes are, in fact, philosophical (including epistemic, moral, and political causes). The Scientific Revolution, the shift from monarchy to republics, the development of free markets and enterprise, the growth of capitalism—all of these are social/political causes that underlie scientific, technological, industrial, and economic progress.

More generally, I think that progress in technology, science, and government are tightly intertwined in history and can't really be separated.

I think advances in the humanities are absolutely needed—more so in a certain sense than advances in the physical sciences, because our material technology today is far more advanced than our moral technology. I think moral and political causes are to blame for our incompetent response to covid; for high prices in housing, education, and medicine; and for lack of economic progress in poorer countries. I think better social “technology” is needed to avoid war, to reform policing, to end conspiracy theories, and to get everyone to vaccinate their children. And ultimately I think cultural and philosophical issues are at the root of the scientific/technological slowdown of the last ~50 years.

So, yeah, I think social advances were actually important in the past and will be in the future.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T05:32:08.455Z · EA · GW

It's hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or better, all three.

Right now I'm working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at rootsofprogress.org.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T05:30:00.712Z · EA · GW

I am broadly sympathetic to Patrick's way of looking at this, yes.

If progress studies feels like a miss on EA's part to you… I think folks within EA, especially those who have been well within it for a long time, are better placed to analyze why/how that happened. Maybe rather than give an answer, let me suggest some hypotheses that might be fruitful to explore:

  • A focus on saving lives and relieving suffering, with these seen as more moral or important than comfort, entertainment, enjoyment, or luxury; or economic growth; or the advance of knowledge?
  • A data-driven focus that naturally leads to more short-term, measurable impact? (Vs., say, a more historical and philosophical focus?)
  • A concern about existential risk from technology and progress?
  • Some other tendency to see technology, capitalism, and economic growth as less important, less moral, or otherwise lower-status?
  • An assumption that these things are already popular and well-served by market mechanisms and therefore not-neglected?

As for “tuning the metal detector”, I think a root-cause analysis on progress studies or any other area you feel you “missed” would be the best way to approach it!

Well, one final thought: The question of “how to do the most good” is deep and challenging enough that you can't answer it with anything less than an entire philosophy. I suspect that EA is significantly influenced by a certain philosophical orientation, and that orientation is fundamentally altruistic. Progress isn't really altruistic, at least not to my mind. Altruism is about giving, whereas progress is about creating. They're not unrelated, but they're different orientations.

But I could be wrong here, and @Benjamin_Todd, above, has given me a whole bunch of stuff to read to challenge my understanding of EA, so I should go digest that before speculating any more.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T04:53:22.220Z · EA · GW

I have a theory of change but not a super-detailed one. I think ideas matter and that they move the world. I think you get new ideas out there any way you can.

Right now I'm working on a book about progress. I hope this book will be read widely, but above all I'd like it to be read by the scientists, engineers and entrepreneurs who are creating, or will create, the next major breakthroughs that move humanity forward. I want to motivate them, to give them inspiration and courage. Someday, maybe in twenty years, I'd love to meet the scientist who solved human aging, or the engineer who invented atomically precise manufacturing, or the founder of a company providing nuclear power to the world, and hear that they were inspired in part by my work.

I'd also like my message to reach people in education, journalism, and the arts, and for them to help spread the philosophy of progress too, which will magnify that kind of impact.

And I'd like it to reach people involved in policy. See my answer to @BrianTan about “interventions” for more detail on what I'm thinking there.

I'd like to see the progress community doing more work on many fronts: on the history of specific areas, on frontier technologies and their possibilities, and on specific policy programs and reforms that would advance progress.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T04:45:39.750Z · EA · GW

Let me say up front that there is a divergence here between my ideological biases/priors and what I think I can prove or demonstrate objectively. I usually try to stick to the latter because I think that's more useful to everyone, but since you asked I need to get into the former.

Does government have a role to play? Well, taking that literally, then absolutely, yes. If nothing else, I think it's clear that government creates certain conditions of political stability, and provides legal infrastructure such as corporate and contract law, property law including IP, and the court system. All of those are necessary for progress.

(And when I mentioned “root-cause analysis on most human suffering” above, I was mostly thinking about dysfunctional governments in the poorest countries that are totally corrupt and/or can't even maintain law & order.)

I also think government, especially the military, has at least a sort of incidental role to play as a customer of technology. The longitude problem was funded in part by the British navy. The technique of canning was invented when Napoleon offered a prize for a way to preserve food for his military on long foreign campaigns. The US military was one of the first customers of integrated circuits. Etc.

And of course the military has reasons to do at least some R&D in-house, too.

But I think what you're really asking about is whether civilian government should fund progress, or promote it through “policy”, or otherwise be actively involved in directing it.

All I can say for sure here is: I don't know. So here's where we get into my priors, which are pretty much laissez-faire. That makes me generally unfavorable towards government subsidy or intervention. But again, this is what I don't think I have a real answer on yet. In fact, a big part of the motivation for starting The Roots of Progress was to challenge myself on these issues and to try to build up a stronger evidentiary base to draw conclusions.

For now let me just suggest:

  • I think that all government subsidies are morally problematic, since taxpayers are non-consenting
  • I don't (yet?) see what government subsidies can accomplish that can't (in theory) be accomplished non-coercively
  • I worry that even when government attempts to advance progress, it may end up slowing it down—for example, the dominance of NIH/NSF in science funding combined with their committee-based peer-review system is often suggested as a factor slowing down scientific progress
  • In general I think that progress is better made in decentralized fashion, and government solutions tend to be centralized
  • I also think that progress is helped by accountability mechanisms, and government tends to lack these mechanisms or have weaker ones

That said, here are a few things that give me pause.

  • Government-backed R&D, even for not-directly-military purposes, has had some significant wins, such as DARPA kicking off the Internet.
  • Some major projects have only gotten done with government support, such as the US transcontinental railroad in the 1860s. This happened in the most laissez-faire country in the world, at a time when it was way more laissez-faire than it is now, so… if that needed government support, maybe there was a reason. (I don't know yet.)
  • Economic strength is closely related to national security, which entangles the government and the economy in ways I haven't fully worked out yet. E.g., I'm not sure the best way for government to ensure that we have strategic commodities such as oil, steel, and food in wartime.

Anyway, this is all stuff I continue to think deeply about and hope to have more to say about later. And at some point I would like to deeply engage with Mazzucato's work and other similar work so that I can have a more informed opinion.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T03:40:55.027Z · EA · GW

I don't really have great thoughts on metrics, as I indicated to @monadica. Happy to chat about it sometime! It's a hard problem.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T03:40:27.195Z · EA · GW

Re measuring progress, it's hard. No one metric captures it. The one that people use, if they have to use something, is GDP, but that has all kinds of problems. In practice, you have to look at multiple metrics, some of which are narrow but easy to measure, and some of which are broad aggregates or indices.

Re “piecewise” process, it's true that progress is not linear! I agree it is stochastic.

Re a golden age, I'm not sure, but see my reply to @BrianTan below re “interventions”.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-16T03:33:52.349Z · EA · GW

I'll have to read more about progress in “renewables” to decide how big a breakthrough that is, but at best it would have to be counted, like genetics, as a potential future revolution, not one that's already here. We still get most of our energy from fossil fuels.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-07T20:58:24.910Z · EA · GW

Well, the participants are high school students, so for most of them the work they are doing immediately is going to university. Like all education, it is more of a long-term investment.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-07T06:31:32.216Z · EA · GW

Maybe there's just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-07T06:28:37.964Z · EA · GW

There isn't a lot out there. In addition to my own work, I would suggest Steven Pinker's Enlightenment Now and perhaps David Deutsch's The Beginning of Infinity. Those are some of the best sources on the philosophy of progress. Also Ayn Rand's Atlas Shrugged, which is the only novel I know of that portrays science, engineering and business as a noble quest for the betterment of humanity.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-07T06:25:53.520Z · EA · GW

See my reply to @BrianTan on a similar question, thanks!

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-06T01:59:36.004Z · EA · GW

The Roots of Progress was really about following an opportunity at a specific moment in time, for me and for the world. Both starting the project as a hobby, when I was personally fascinated by the topic, and going full-time on it right when the “progress studies” movement was taking off. So I don't see how it could have happened any differently.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-06T01:57:50.325Z · EA · GW

I think being an engineer helps me dig into the technical details of the history I'm researching, and to write explanations that go deeper into that detail. Many histories of technology are very light on technical detail and don't really explain how the inventions worked. One thing that makes me unique is actually explaining how stuff works. This is probably the most important thing.

I think being a founder is helpful in understanding some business fundamentals like marketing or finance. And I am constantly drawing parallels and making comparisons between today's tech startup world and how business and invention were done in the past, or how science and research are done today.

I also think my experiences as a founder have helped me in launching The Roots of Progress. I have a sense of what kind of opportunities I'm personally interested in and have aptitude for, how to launch things and iterate on them, when something is taking off, what opportunities to pursue, how to build a social media presence, etc.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-06T01:37:22.338Z · EA · GW

Alan Kay suggested that progress in education should be measured in “Sistine Chapel ceilings per lifetime.” Ultimately my goal is something similar, but maybe substitute “Nobel-worthy scientific discoveries”, “Watt-level inventions”, or “trillion-dollar businesses” for the artistic goal. I'll know I'm successful if, in twenty years or fifty, people who did those things are telling me they drew inspiration and courage from my work.

The problem with Sistine Chapel ceilings is that it's a lagging metric. We all need leading metrics to steer ourselves by. So on a much shorter timescale, I look at my audience size—over 12k on Twitter now and ~2,700 on my email list. I also look at the quality of the audience and the feedback I'm getting. With Progress Studies for Young Scholars, we gave the students an end-of-program feedback survey (two-thirds rated it 9 or 10 out of 10). When I write a book, of course, I'll look at how well it sells. Etc.

Re actions I want people to take: right now I'm just happy if they listen and learn and find what I have to say interesting. And, especially for young people, I hope they will consider devoting their careers to ambitious goals that drive forward human progress.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-06T01:14:59.114Z · EA · GW

Maybe when I have some interventions I'm more sure of! (And/or if some powerful person or agency was directly asking me for input.)

Epistemically, before I can recommend interventions I need to really understand causation, and before I can explain or hypothesize causation, I need to get clear on the specific timeline of events. And in terms of personal motivation, I'm much more interested in the detailed history of progress than in arguing policy with people.

But, yes, eventually the whole point of progress studies is to figure out how to make more (and better) progress, so it should end up in some sort of intervention at some level.

If I had to recommend something now, I would at least point to a few areas of leverage:

  • Promote the idea of progress. Teach its history, in schools and universities. Promote it in art, especially more optimistic sci-fi. Journalists should become industrially literate, and it should be reflected in their stories. Celebrate major achievements. Etc.
  • Roll back over-burdensome regulation. As just one example, there's a big spotlight shining on the FDA right now and its role in delaying the covid vaccines. For another, see Eli Dourado on environmental review.
  • Decentralize funding for science & research. I fear that the dominance of the federal government (in the US at least) in research funding, and the reliance on committee-based peer review, has led to too much consensus and groupthink and not enough room for contrarians and for ideas that challenge dominant paradigms. See Donald Braben's Scientific Freedom (recently re-printed by Stripe Press).

See also my review of Where Is My Flying Car?, which I am very sympathetic with: https://rootsofprogress.org/where-is-my-flying-car 

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-06T00:56:35.975Z · EA · GW

“Are new fields getting harder to find?” I think this is the trillion-dollar question! I don't have an answer yet though.

Is progress open indefinitely? I think there is probably at least a theoretical end to progress, but it's so unimaginably far away that for our purposes today we should consider progress as potentially infinite. There are still an enormous number of things to learn and invent.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T23:52:03.518Z · EA · GW

I will answer this, but there's a lot to read here, so I will come back to it later—thanks!

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T23:47:59.922Z · EA · GW

Hmm, I thought that running discussion sessions with the students might be hard, but it was quite natural! I was lucky to get a great group of students in the first cohort.

There were some gaps in their knowledge I didn't anticipate. They weren't very familiar with simple machines and mechanical advantage, with basic molecular biochemistry such as proteins and DNA, or with basic financial/accounting concepts such as fixed vs. variable cost.

Not sure what to say about an EA course, sorry!

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T23:45:10.748Z · EA · GW

Re my own focus:

The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!

One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree. We've probably come up with many good ideas already, but they haven't gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.

Before I got into what would come to be called “progress studies”, I spent a quarter-century discussing and debating philosophic ideas with many different people, who had many different viewpoints. One thing that became clear to me was that, not only do people not agree on how to solve our problems, they don't even agree on what the problems are. A left-wing environmentalist focuses on climate change, while a right-wing deficit hawk focuses on the national debt. Each thinks that even the problem the other one is so worried about is overblown, while their own problem is neglected. So of course they call for different policies.

I realized that a lot of the issues I care about, and the problems underlying them, were founded on my keen appreciation for the story of human progress: how bad living standards used to be and how much they've improved.

And, further, I thought that studying the history of progress—not just material, but epistemic and moral too, actually—would be the best way to empirically ground any claims about how to make the world better.

I started by studying material progress because (1) it happened to be what I was most interested in and (2) it's the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress. Science obviously supports technology. Freedom of thought and expression is needed for science. Economic freedom is needed for material progress. Technological progress provides the surplus that is needed to fund science, and invents the instruments that science needs too. Economic progress provides the means for a free society to defend itself militarily, and ultimately justifies and validates that society. So I don't think they can be separated.

Long-term, I'd like to study moral and epistemic progress. I'd love to do a history of science, for instance. On moral progress, I'd love to read (or write!) about how we ended practices like slavery, dueling, and trial by ordeal; how we developed concepts like rule of law and individual rights; how we moved from tribalism to universalism and recognized the humanity of all races and sexes. Some of this is covered very well in Pinker's recent books (Better Angels and Enlightenment Now) but more could be done.

Re the Long Reflection:

I haven't read Ord's take on this, but the concept as you describe it strikes me as not quite right. For one, to pause on material progress would come at a terrible cost: all of the lives we could be saving and extending, all the people we could be lifting out of poverty, all of the things we can't even anticipate that would come from more wealth, technology and infrastructure.

For another, it seems to imply a very high degree of being able to anticipate and predict the future, which I think we just don't have. I think David Deutsch captures this better than I can; from The Beginning of Infinity (pp 202–204):

… a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.…

As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?

When you look back at the history of progress, one theme is that it's generally impossible to anticipate where progress will come from or what an advance will lead to. Who could have anticipated that studying electromagnetic radiation would give us ways to communicate long-distance, or to do non-invasive imaging inside the human body?

So to say, “let's not do these risky things, let's only do these safe things”, presumes that (a) we know what risks we are subject to and (b) we know what activities will lead towards or away from them, and towards or away from solutions. But I just don't think we can predict those things, not at the level that a Long Reflection would imply.

If we had paused for Reflection in 2010, instead of founding Moderna and BioNTech to pursue mRNA vaccine technology, where would we be today vs. covid?

In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T21:58:04.483Z · EA · GW

I don't know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.

In particular, I would say:

  • Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
  • However, technology is probably risk-creating by default. That is, when our goal is anything other than safety—more power, more speed, more efficiency, more abundance, etc.—then it might create risk as a side effect.
  • Historically, we have been reactive rather than proactive about technology risk. People die, then we do the root-cause analysis and fix it.
  • Even when we do anticipate problems, we usually don't anticipate the right ones. When X-rays were first introduced, people had a moral panic about men seeing through women's clothing on the street, but no one worried about radiation burns or cancer.
  • Even when we correctly anticipate problems, we don't necessarily heed the warnings. At the dawn of the antibiotic age, Alexander Fleming foresaw the problem of resistance, but that didn't prevent doctors from way overprescribing antibiotics for many years.
  • We need to get better at all of the above in order to continue to improve safety as we simultaneously pursue other technological goals: more proactive, more accurate at predicting risk, and more disciplined about heeding the risk. (This is obviously so for x-risk, where the reactive approach doesn't work!)
  • I see positive signs of this in how the AI and genetics communities are approaching safety in their fields. I can't say whether it's enough, too much, or just right.

Anyway, DTD seems like a much better concept than the conventional “let's slow down progress across the board, for safety's sake.” This is a fundamental error, for reasons David Deutsch describes in The Beginning of Infinity. 

But that's also where I might (I'm not sure) disagree with DTD, depending on how it's formulated. The reason to accelerate safety-creating technology is not because “it may be too difficult to prevent the development of a risky technology.” It's because most risky technologies are also extremely valuable, and we don't want to prevent them. We want them, we just want to have them safely.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T21:46:22.194Z · EA · GW

I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn't make us happier on its own: it also requires good choices and a healthy psychology.

Technology isn't inherently good or bad; it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I'm not sure we're getting better at making choices as fast as our choices are expanding.

The society-level version of this is that technology can be used for evil at a society level too, for instance, when it enables authoritarian governments or destructive wars. And just as at the individual level, I'm not sure our “moral technology” is advancing at the same rate as our physical technology.

So, I do see problems here. I just don't think that technology is the problem! Technology is good and we need more of it. But we also need to improve our psychological, social, and moral “technology”.

More in this dialogue: https://pairagraph.com/dialogue/354c72095d2f42dab92bf42726d785ff 

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T21:19:19.856Z · EA · GW

Off the top of my head:

  • Maximum life expectancy. We've pushed up life expectancy at birth enormously, and life expectancy at all ages has increased somewhat. But 80–90 years is still “old” and we haven't cured aging itself.
  • Art? I haven't looked into it much, but I don't really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I'd also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war territory, so I'm just going to throw it out there as a wild-ass hypothesis for someone to follow up on at some point.
  • Education? High school graduation rates are up, and world literacy rates are up, but I'm not really sure about overall educational achievement?
  • Health care price/affordability: medicine itself has advanced tremendously, but the pricing on basic services is all out of whack and the way we pay for them is a tangled mess.
  • Housing affordability, maybe? I'm not sure.

If you said 50 years instead of 100, there's a longer and more obvious list. There really hasn't been any major breakthrough in manufacturing, agriculture, energy, or transportation in that time, and some things (like passenger flight speeds and airport convenience) have clearly regressed.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T21:10:51.636Z · EA · GW

In brief, I think: (1) subjective measures of well-being don't tell us the full story about whether progress is real, and (2) the measures we have are actually inconsistent, with some showing positive benefits of progress, others flat, and a few slightly negative (but most of them not epidemics).

To elaborate, on the second point first:

The Easterlin Paradox, to my understanding, dissolved over time with more and better data. Steven Pinker addresses this pretty well in Enlightenment Now, which I reviewed here: https://rootsofprogress.org/enlightenment-now

Our World in Data has a section on this, showing that happiness is correlated with income both within and between countries, and over time: https://ourworldindata.org/happiness-and-life-satisfaction#the-link-between-happiness-and-income

Regarding rates of mental illness, the data don't show a consistent increasing trend, and certainly nothing like the “epidemic” we sometimes hear about.

But to return to the first point, I think we have to be careful in using metrics like self-reported life satisfaction to evaluate progress.

Emotional responses tend to be short-term and relative. They report a derivative, not an integral. That does not, however, mean that the derivative is all that matters! Rather, it means that our emotions don't tell us about everything that matters.

In the last few hundred years, we have eradicated smallpox, given women the ability to control their reproduction and choose their careers, liberated most of humanity from back-breaking physical labor and 80+ hour work weeks, opened the world to travel and cultural exchange, and made the combined knowledge, art, and philosophy of the world available to almost everyone. (And that's just a small sample of the highlights.)

I think these things are self-evidently good. If a subjective measure of well-being doesn't report that people are happier when they aren't sentenced to hard labor on a farm, when they aren't trapped within a few miles of their village, when they and their families don't starve from famine caused by drought, and when their children don't die before the age of five from infectious disease… then all that proves is that people have forgotten what those things are like and don't know how good they have it.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T20:32:21.344Z · EA · GW

Oh, I should also point to the SSC response to “ideas getting harder to find”, which I thought was very good: https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/

In particular, I don't think you can measure “research productivity” as percent improvement divided by absolute research input. I understand the rationale for measuring it this way, but I think for reasons Scott points out, it's just not the right metric to use.

Another way to look at this: one generative model for exponential growth is a thing that grows in proportion to its size. One way this can happen is that the growing thing invests a constant portion of its resources into growth. But in that model, you expect the resources used for growth to increase exponentially as well. IMO this is what we see with R&D.

Another place you can see this is in the growth of a startup. Startups can often grow revenue exponentially, but they also hire exponentially. If you used a similar measure of “employee productivity” parallel to “research productivity”, then you'd say it is going down, because an increasing number of employees is needed to maintain a constant % increase in revenue.

Further, what these examples ought to make clear is that exponentially increasing inputs to create exponential growth is actually totally sustainable. So, I don't see it as a cause for alarm at all, but rather (as Scott says) the natural order of things.
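
Here is a minimal numeric sketch of that generative model (toy parameters of my own choosing, not data): output grows at a steady rate because a constant fraction of it is reinvested in research, so the absolute research input grows exponentially while the “research productivity” ratio criticized above (percent improvement divided by absolute input) steadily falls.

```python
# Toy model (illustrative parameters, not data): exponential growth sustained
# by reinvesting a constant *fraction* of output into research.
r = 0.02  # steady growth rate of output (2% per year)
c = 0.10  # constant fraction of output spent on research

for year in range(0, 101, 25):
    output = (1 + r) ** year           # output grows exponentially
    research_input = c * output        # so absolute research input does too
    productivity = r / research_input  # % improvement per unit of input: falls
    print(f"year {year:3d}: output {output:7.2f}  "
          f"input {research_input:7.4f}  productivity {productivity:7.4f}")
```

Nothing in this model is unsustainable; the declining ratio is just what a constant reinvestment share looks like under that measure.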

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T01:13:58.204Z · EA · GW

I think there are a couple things with the bicycle. One is that it depended on materials and manufacturing techniques much more than is obvious (and more than I even brought out in that post): bearings, hollow metal tubes, gears and chains, rubber, etc.

The other is that it's really just the overall story of progress: in a sense there was lots of low-hanging fruit for thousands of years before the Industrial Revolution.

But if you want to understand progress now, 300 years in, when the markets are much more efficient, so to speak, the analysis is different. Now there are lots of fruit-pickers everywhere looking for fruit, so there's less obvious stuff lying around. That is why we need to open up new technical fields: to discover whole new orchards of fruit (some of which will be low-hanging).

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T01:06:59.294Z · EA · GW

I think basically you have to look at where an innovation sits in the tech tree.

Energy technologies tend to be fundamental enablers of other sectors. J. Storrs Hall makes a good case for the need to increase per-capita energy usage, which he calls the Henry Adams Curve: https://rootsofprogress.org/where-is-my-flying-car

But also, a fundamentally new way to do manufacturing, transportation, communication, or information processing would enable a lot of downstream progress.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T01:04:41.487Z · EA · GW

My perception of EA is that a lot of it is focused on saving lives and relieving suffering. I don't see as much focus on general economic growth and scientific and technological progress.

There are two things to consider here. First, there is value in positives above and beyond merely living without suffering. Entertainment, travel, personal fitness and beauty, luxury: all of these are worth pursuing. Second, over the long run, more lives have been saved and more suffering relieved by efforts to pursue general growth and progress than by direct charitable efforts. So we should consider the balance between the two.

To EA's credit, I think the community does understand this much better than other proponents of altruism and charity! And some EA organizations put resources into long-term scientific progress, which is great.

One thing I'm puzzled by is why there doesn't seem to be a strong focus within EA on institutional reform (or not as strong as I would expect). A root-cause analysis on most human suffering, if it went deep enough, would blame governments and cultures that don't foster science, invention, industry, and business. It seems that the most high-leverage long-term plan to reduce human suffering would be to spread global rationality and capitalism.

Comment by jasoncrawford on AMA: Jason Crawford, The Roots of Progress · 2020-12-03T19:41:29.816Z · EA · GW

I think ideas get progressively harder to find within any given field as it matures. However, when we create new fields or find new breakthrough technologies, it opens up whole new orchards of low-hanging fruit.

When the Web was created, there were lots of new ideas that were easy to find: “put X on the web” for many values of X. After penicillin was discovered, there was a similar golden age of antibiotics: “test X mold or Y soil sample for effectiveness against Z disease.” At times like this you see very rapid progress in certain applications.

Similarly, imagine if we got atomically precise manufacturing (APM). There would be a whole set of easy-to-find ideas: “manufacture X using APM.” Or if we got an easy way to understand and manipulate genes, there would be a set of easy-to-find ideas of the form “edit X gene to cure Y disease or enhance Z trait.”

I think the Great Stagnation is not a failure to extract all the value from existing fields, but rather a failure, decades ago, to open up new fields and make new breakthroughs.

Further reading: https://rootsofprogress.org/teasing-apart-the-s-curves

Also: https://rootsofprogress.org/where-is-my-flying-car