Other flavors of FOOM

post by mwcvitkovic · 2020-01-17T08:03:59.914Z · score: 7 (4 votes)

This is a question post.

Robin Hanson argues that working on AI alignment today is justified only in proportion to the risk of a FOOM scenario (a.k.a. hard takeoff, a.k.a. a lumpy AI timeline). I agree, even though the discussion may have moved on a bit.

But "lumpy" timelines don't seem restricted to AI. Runaway growth of genetically engineered organisms (BLOOM?) seems equally plausible. People have been thinking about climate tipping points for ages.

Can someone point me to any relevant writing on this? I haven't been able to find anything discussing the utility of studying FOOM-like scenarios (i.e. catastrophically rapid changes due to new technology) in general, rather than just in AI. I'm sure it's out there; I'm just not sure what to Google.

Answers

1 comment


comment by Aaron Gertler (aarongertler) · 2020-01-17T23:14:51.969Z · score: 3 (2 votes)

I suppose that Drexler's work on nanotechnology (e.g. Engines of Creation) may qualify as "writing on a FOOM-like scenario". I haven't read it, but my impression is that he theorized about massive economic growth driven by new technology, to the point of human life being fundamentally transformed. The book also addresses risk; Drexler coined the term "gray goo".