Posts

Grantees: how do you structure your finances & career? 2022-08-04T22:21:17.858Z
Here are the finalists from FLI’s $100K Worldbuilding Contest 2022-06-06T18:42:34.864Z
[Fiction] Improved Governance on the Critical Path to AI Alignment by 2045. 2022-05-18T15:50:29.408Z
How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? 2022-04-06T16:34:31.457Z
[Linkpost] How To Get Into Independent Research On Alignment/Agency 2022-02-14T21:40:18.180Z
Longtermism in 1888: fermi estimate of heaven’s size. 2021-12-25T04:48:38.106Z
I want EA-charity gift cards! 2021-12-07T01:45:15.038Z
Can EA leverage an Elon-vs-world-hunger news cycle? 2021-11-02T00:14:21.060Z
The Toba Supervolcanic Eruption 2021-10-29T17:02:10.425Z
X-Risk, Anthropics, & Peter Thiel's Investment Thesis 2021-10-26T18:38:51.475Z
Nuclear Strategy in a Semi-Vulnerable World 2021-06-28T17:35:30.846Z

Comments

Comment by Jackson Wagner on The Future Fund’s Project Ideas Competition · 2022-08-14T01:16:59.527Z · EA · GW

(I think this is a good idea!  For anyone perusing these FTX project ideas in the future, here is a post I wrote exploring drawbacks and uncertainties that prevent people like me from getting excited about independent research as a career.)

Comment by Jackson Wagner on The case for Green Growth skepticism and GDP agnosticism · 2022-08-13T19:58:02.820Z · EA · GW

There are a couple of bait-and-switch moves in this post that I don't understand:

1. Paris-targets urgency vs big-picture eco-philosophy
"We're not on track to meet our Paris climate agreements; if we want to meet these targets we'll need a big transition right away" creates a reasonable sense of urgency attached to the issue of climate change.  But then this urgency inexplicably carries over to big-picture philosophical ideas like "we are on a finite Planet Earth... at some point [maybe centuries or millennia from now], we will run out of crucial resources and we'll have to transition to zero resource-use growth", when it seems like we might have plenty of time to solve problems like the eventual scarcity of certain metals (including by doing far-future stuff like space settlement & asteroid mining).

2. Exactly how much degrowth are we talking here?
You don't clarify exactly what "post-growth" or "degrowth" means.  Sometimes it sounds like you are advocating a massive worldwide economic depression of similar impact to the Covid-19 lockdowns, but lasting much longer.  (Yes, you say that it wouldn't be as bad because only certain industries would be shut down, but the recession would also have to be more severe than 2020 in many ways, since the 2020 lockdown-recession didn't actually reduce emissions by much.  So, figure the impact might feel about the same overall.)

But other times, you say that actually the ultimate goal is "avoiding an ecological collapse and its associated economic collapse", giving the impression that you favor approximately whatever mix of policies leads to the best long-run outcome for human civilization -- striking the right balance between economic harms from global warming and economic harms from global warming prevention.  Estimates from the UN IPCC say that even 3-4 degrees of warming (aka, blowing past the Paris Agreement targets) would only penalize the economy by a few percent by 2100.  So (unless you think that all these climate studies are wildly wrong), it seems like it is only worth paying a small cost to prevent global warming: stuff like subsidizing green power (as in the bill just passed by the United States -- we closed half the gap between the status quo and our Paris goals for only $300 billion!), approving more nuclear plants, implementing a carbon tax, and so forth.  The couple-percent-of-GDP damages that mainstream climate science expects don't seem like they are worth embarking on a many-percent-of-GDP, society-wide sacrifice of human wellbeing and development.  So maybe "degrowth" is just a sexy, radical-sounding word for these sensible global-warming mitigation policies like carbon taxation and the like?  In that case, I don't understand your choice of vocabulary but I am otherwise totally with you.

Comment by Jackson Wagner on The case for Green Growth skepticism and GDP agnosticism · 2022-08-13T19:15:48.027Z · EA · GW

Regarding your "Do we need growth in rich countries?" section -- this strikes me as a failure of imagination.  You are willing to look back across history and say that life has gotten vastly better in many ways with advancing science and industry- -- modern medicine, the conveniences of travel and telecommunications, etc.  But you don't seem willing to make the obvious extrapolation into the future -- in a more prosperous and more energy-abundant world, don't you expect that society could become even better?  People could afford better medical care, society could more easily afford to redistribute and create equality, society might even become more participatory and democratic?

People in 1960 could have made this same argument -- "The USA and Europe don't need growth; we're already so prosperous, we've basically achieved everything you could possibly want."  But looking back, we can tell that they were wrong -- life has improved in important ways since 1960, and even since 1990!  What makes your argument any more likely to stand the test of time?

In other words -- sure, the USA doesn't do quite as well on quality-of-life metrics as some countries in northern Europe which have a slightly lower GDP.  It would be great to learn from those countries.  But also, neither the USA nor Europe represents the highest potential of human civilization -- so much more is possible!  For a vision of what this better future might look like, here is an optimistic, utopian story I put together with some friends of mine that tries to illustrate how a fairer, more democratic, and more abundant world could be constructed.  Here are some relevant quotes from that project:

Q. What is a new non-AI technology that has played an important role in the development of your world?

A. Improved governance technology has helped societies to better navigate the “bulldozer vs vetocracy” axis of community decision-making processes. Using advanced coordination mechanisms like assurance contracts, and clever systems (like Glen Weyl’s “SALSA” proposal) for pricing externalities and public goods, it’s become easier for societies to flexibly make net-positive changes and fairly compensate anyone affected by downsides. This improved governance tech has made it easier to build lots of new infrastructure while minimizing disruption. Included in that new infrastructure is a LOT of new clean power.

Solar, geothermal, and fusion power provide most of humanity’s energy, and they do so at low prices thanks to scientific advances and economies of scale. Abundant energy enables all kinds of transformative conveniences:

  • Cheap desalinization changes the map, allowing farming and habitation of previously desolate desert areas. Whole downtown areas of desert cities can be covered with shade canopies and air-conditioned with power from nearby solar farms.
  • Carbon dioxide can be captured directly from the air at scale, making climate change a thing of the past.
  • Freed from the pressing need to economize on fuel, vehicles like airplanes, container ships, and self-driving cars can simply travel at higher speeds, getting people and goods to their destinations faster.
  • Indoor farming using artificial light becomes cheaper; instead of shipping fruit from the opposite hemisphere, people can enjoy local, fresh fruit year-round.
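To make the assurance-contract mechanic from the answer above concrete: the basic version simply refunds everyone unless total pledges clear a threshold, which makes pledging low-risk and eases coordination on net-positive projects. A minimal sketch in Python (names and numbers are hypothetical, not from the story):

```python
def run_assurance_contract(pledges, threshold):
    """Basic assurance contract: pledges are collected only if the total
    reaches the threshold; otherwise everyone is refunded, so pledging is
    low-risk and coordination on shared projects becomes easier."""
    total = sum(pledges.values())
    if total >= threshold:
        return {"funded": True, "collected": total}
    return {"funded": False, "collected": 0}  # all pledges refunded

# e.g. 30 people each pledge $2,000 toward some shared infrastructure,
# but money only moves if enough of them commit to form a critical mass:
pledges = {f"pledger_{i}": 2_000 for i in range(30)}
print(run_assurance_contract(pledges, threshold=50_000))  # funded: True
```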

Q. What is a new social institution that has played an important role in the development of your world?

A. New institutions have been as impactful over recent decades as near-human-level AI technology. Together, these trends have had a multiplicative effect — AI-assisted research makes evaluating potential reforms easier, and reforms enable society to more flexibly roll out new technologies and gracefully accommodate changes. Futarchy has been transformative for national governments; on the local scale, “affinity cities” and quadratic funding have been notable trends.
In the 2030s, the increasing fidelity of VR allows productive remote working even across international and language boundaries. Freed from needing to live where they work, young people choose places that cater to unique interests. Small towns seeking growth and investment advertise themselves as open to newcomers; communities (religious groups, hobbyists like surfers, subcultures like heavy-metal fans, etc) select the most suitable town and use assurance contracts to subsidize a critical mass of early adopters to move and create the new hub. This has turned previously indistinct towns into a flourishing cultural network.
Meanwhile, Quadratic Funding (like a hybrid of a local budget and a donation-matching system, usually funded by land value taxes) helps support community institutions like libraries, parks, and small businesses by rewarding small-dollar donations made by citizens.
The most radical expression of institutional experimentation can be found in the constellation of "charter cities" sprinkled across the world, predominantly in Latin America, Africa, and Southeast Asia. While affinity cities experiment with culture and lifestyle, cities like Prospera, Honduras have attained partial legal sovereignty, giving them the ability to experiment with innovative regulatory systems much like China's provinces.
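The quadratic funding mentioned above has a precise core formula (from Buterin, Hitzig, and Weyl's "liberal radicalism" mechanism): a project's total funding is the square of the sum of the square roots of its individual contributions, with a matching pool covering the gap above the raw donations. A minimal sketch, with hypothetical numbers:

```python
import math

def quadratic_funding_match(contributions):
    """Return (total_funding, match) for one project under quadratic funding.

    Total funding = (sum of sqrt of each contribution)^2; the matching pool
    tops up the difference above the raw donations, so broad support from
    many small donors attracts far more matching than one big donation.
    """
    raw = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total, total - raw

print(quadratic_funding_match([1] * 100))  # 100 donors x $1 -> (10000.0, 9900.0)
print(quadratic_funding_match([100]))      # 1 donor x $100  -> (100.0, 0.0)
```

(In practice the matching pool is finite, so matches get scaled down proportionally; the point is just that the formula rewards breadth of support rather than raw dollars.)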

Comment by Jackson Wagner on The case for Green Growth skepticism and GDP agnosticism · 2022-08-13T19:02:36.111Z · EA · GW

Regarding your "resource scarcity" section -- I thought that Harrington 2020 was pretty ridiculous for a variety of reasons.  It simply re-runs the original "Limits to Growth" simulations, without engaging with any of the criticism those models have received, or even noting that the real world's performance has been far better than those models predicted.  Furthermore, the Harrington paper blatantly moves the goal-posts, swapping out the environmental problems of the 70s (things like acid rain and industrial pollution and so forth) for climate-related CO2 metrics.  Quoting from my comment about Harrington 2020 on another EA Forum post:
 

This new paper is taking a 1972 computer model of the world economy and seeing how well it matches current trends.  They claim the match is pretty good, but they don't actually plot the real-world data anywhere; they merely claim that the predicted data is within 20% of the real-world values.  I suspect they avoided plotting the real-world data because this would make it more obvious that the real world is actually doing significantly better on every measure.  Look at the model errors ("∆ value") in their Table 2:

So, compared to every World3-generated scenario (BAU, BAU2, etc), the real world has:
- higher population, higher fertility, lower mortality (no catastrophic die-offs)
- more food and higher industrial output (yay!)
- higher overall human welfare and a lower ecological footprint (woohoo!)

The only areas where humanity ends up looking bad are in pollution and "services per capita", where the real world has more pollution and fewer services than the World3 model.  But on pollution, the goal-posts have been moved: instead of tracking the kinds of pollution people were worried about in the 1970s (since those problems have mostly been fixed), this measure has been changed to be about carbon dioxide driving climate change.  Is climate change (which is predicted by other economists and scientists to cut a mere 10% of GDP by 2100) really going to cause a total population collapse in the next couple decades, just because some ad-hoc 1970s dynamical model says so?  I doubt it.  Meanwhile, the "services per capita" metric represents the fraction of global GDP spent on education and health -- perhaps it's bad that we're not spending more on education or health, or perhaps it's good that we're saving money on those things, but either way this doesn't seem like a harbinger of imminent collapse.
 
Furthermore, the World3 model predicted that things like industrial output would rise steadily until they one day experienced a sudden unexpected collapse.  This paper is trying to say "see, industrial output has risen steadily just as predicted... this confirms the model, so the collapse must be just around the corner!"  This strikes me as ridiculous: so far the model has probably underperformed simple trend-extrapolation, which in my view means its predictions about dramatic unprompted changes in the near future should be treated as close to worthless.

Comment by Jackson Wagner on New Cause area: The Meta-Cause [Cause Exploration Prize] · 2022-08-11T19:41:10.517Z · EA · GW

I'm a little confused about this post -- comparing between cause areas is, to my mind, already one of the main things that Effective Altruism tries to do.  See for instance the recent shift towards "longtermism" and AI safety efforts becoming ever-more-central to the movement.  Lots of people are already doing this kind of "meta" work: from big institutions like the Global Priorities Institute, OpenPhil, and the Future of Life Institute, to countless ordinary folks like us who are trying to hash things out in conversations on the Forum.

I also kinda doubt this is what OpenPhil is looking for in their competition, since it seems focused on where they could move lots of highly-scalable funding, rather than what directions EA's research effort should be pointed in.  (Of course to some extent funding can be turned into effort... but only to some extent, and with a time delay, etc.)

Comment by Jackson Wagner on [deleted post] 2022-08-11T19:30:07.940Z

Downvoting this, not because I don't like the post, but because it seems to be a duplicate of this post that you also made around the same time.

Comment by Jackson Wagner on Neartermists should consider AGI timelines in their spending decisions · 2022-07-27T02:29:52.814Z · EA · GW

Surely most neartermist funders think that the probability that we get transformative AGI this century is low enough that it doesn't have a big impact on calculations like the ones you describe?

There are a couple views by which neartermism is still worthwhile even if there's a large chance (like 50%) that we get AGI soon -- maybe you think neartermism is useful as a means to build the capacity and reputation of EA (so that it can ultimately make AI safety progress), or maybe you think that AGI is a huge problem but there's absolutely nothing we can do about it. But these views are kinda shaky IMO.

The idea that a neartermist funder becomes convinced that world-transformative AGI is right around the corner, and then takes action by dumping all their money into fast-acting welfare enhancements, instead of trying to prepare for or influence the immense changes that will shortly occur, almost seems like parody. See for instance the concept of "ultra-neartermism": https://forum.effectivealtruism.org/posts/LSxNfH9KbettkeHHu/ultra-near-termism-literally-an-idea-whose-time-has-come

Comment by Jackson Wagner on Why I'm skeptical of moral circle expansion as a cause area · 2022-07-15T03:22:29.365Z · EA · GW

I like the idea of expanding people's moral circle; I'm just not sure what interventions might actually work. The straightforward strategy is "just tell people they should expand their moral circle to include Group X", but I'm often doubtful that this strategy will win converts and lead to lasting change.

For example, my impression is that things like the rise and decline of slavery were mostly fueled by changing economic fundamentals, rather than by people first deciding that slavery was okay and then later remembering that it was bad. If you wanted to have an effect on people's moral circles, perhaps it would be better to try to influence those fundamentals than to persuade people directly? But others have studied these things in much greater depth: https://forum.effectivealtruism.org/posts/o4HX48yMGjCrcRqwC/what-helped-the-voiceless-historical-case-studies

By analogy, I would expect that creating tasty, cost-competitive plant-based meats will probably do more to expand people's moral concern for farmed animals, than trying to persuade them directly about the evils of factory farming.

Since I think people's cultural/moral beliefs are basically downstream of the material conditions of society ("moral progress not driven by moral philosophy"), I don't think that pushing directly for moral circle expansion (via persuasion, philosophical arguments, appeals to empathy, etc) is a great route towards actually expanding people's moral circles.

Comment by Jackson Wagner on Why I'm skeptical of moral circle expansion as a cause area · 2022-07-14T22:20:30.835Z · EA · GW

I think there are a lot of problems with the idea of directly pushing for moral circle expansion as a cause area -- for starters, moral philosophy might not play a large role in actually driving moral progress.  But I see the concept of moral circle expansion as a goal worth working towards (sometimes indirectly!), and I think the discussion over moral circle expansion has been beneficial to EA -- for example, explorations of some ways our circle might be narrowing over time rather than expanding.

 

I'd also like to mention that, of course, a cellular-automata simulation of different evolutionary strategies is very different from the complex behavior of real human societies.  There are definitely lots of forces that push towards tribalistic fighting between coalitions (ethno-nationalist, religious, political, class-based, and otherwise), but there are also forces that push towards cooperation and universalism:

  • The real world, unlike the fixed grid of the simulation, can be positive-sum thanks to new technologies that create material abundance.  Today's world might be more peaceful than the past because, after the industrial revolution, peace and cooperation (which are better for economic growth) became more profitable than conquest.
  • In the simulation, what's called "humanism" is really "cooperate with anyone you interact with, never defect", which sounds more like gullibility to me.  In real life, societies have a lot of ways to build trust -- like tracking people's reputations and meritocratically promoting trustworthy players, or unifying around a common set of ideological beliefs.  I think that in real life it's possible to have a high-trust, humanist, but non-gullible society that succeeds by using meritocracy and good judgement to avoid getting scammed, and which makes sure to spend enough resources supporting itself and staying competitive (even while maintaining a background commitment to universalism) that it doesn't get overtaken by other groups.  (A toy version of this cooperate-vs-defect dynamic is sketched after this list.)
  • In the simulation, after the ethno-nationalist squares take over, then I guess they will fight it out between the different ethnic groups, and then the final winning ethnic group will live happily ever after as a dominant monoculture?  But real life doesn't work this way -- by the very logic that helped the nationalist squares in the first place, a real-life monoculture would tend to fracture and divide into subgroups who would then proceed to fight with each other as before.  This tendency towards internecine fighting is a drag on the forces of division and (in theory) could be a boon to the forces of universalism.
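To make the gullibility point concrete: in a standard iterated prisoner's dilemma (the usual formalism behind these cooperate/defect dynamics), an unconditional cooperator gets farmed by defectors, while a simple reputation-using strategy like tit-for-tat keeps the gains of cooperation without the exploitability. A toy sketch, using the conventional payoff numbers rather than anything from the simulation in question:

```python
# Standard payoffs: mutual cooperation = 3 each; mutual defection = 1 each;
# defecting on a cooperator = 5 (the cooperator gets 0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"  # the simulation's "humanism": cooperate no matter what

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each side sees the other's past moves
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 500): pure trust gets farmed
print(play(tit_for_tat, always_defect))       # (99, 104): exploited exactly once
print(play(tit_for_tat, always_cooperate))    # (300, 300): cooperation preserved
```
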
Comment by Jackson Wagner on Recommendations for EA-themed sci-fi and fantasy? · 2022-07-14T07:27:03.516Z · EA · GW

In addition to the broader and more mature genre of "rationalist fiction" overall, definitely check out some of the specific EA-themed creative writing contests that have happened:

Comment by Jackson Wagner on What work has been done on the post-AGI distribution of wealth? · 2022-07-06T21:42:15.539Z · EA · GW

You might be interested in the Future of Life Institute's AI worldbuilding competition (including my 2nd-place-winning entry), which asked people to imagine an optimistic future with AGI and includes several questions where participants detail how economies and wealth inequality have changed in a post-AGI world.  (In addition to stuff about daily-life and economics, the competition also had a big emphasis on the geopolitical balance of power between nations, and how AGI will be governed.)

Comment by Jackson Wagner on Ideas to improve the Effective Altruism Movement · 2022-06-16T18:24:21.465Z · EA · GW

I think the biggest problem with this idea is that it would take an incredible amount of effort to analyze the altruistic impact of minor personal decisions.  (If I buy a GPU to play videogames, is that shortening or lengthening AI timelines?  Helping or hurting the development of Taiwan or China or etc?  Is this a luxurious waste of resources, or is gaming actually a relatively cheap hobby that allows me to donate more money overall compared to if I pursued more expensive forms of leisure?)

Right now, it takes the analytical bandwidth of the entire EA movement to struggle to build a consensus about the relative effectiveness of a handful of high-level interventions in specific areas (global development, biosecurity, factory farming, etc), and even then, the difficulty of analysis has pushed EA towards the idea that we should be exploring more "megaprojects" -- scalable interventions that consume a high amount of money relative to the necessary analytical effort to identify and implement the idea.

Analyzing the ethical impact of everyday decisions (like about where to live, how to commute, what to eat, who to vote for, etc) is essentially a pitch for "microprojects", and would be more suited to a world where there were very many more people interested in EA but much less funding available.

(All that said, I personally would love to peruse someone's analysis of some everyday lifestyle decisions by altruistic impact -- this has already been done well by farmed-animal-welfare people looking at the impact of eating different kinds of food/meat, and it also has some overlap with Ben Williamson's effort to research Effective Self-Help.  For instance, I would really be curious to read someone's take on things like my GPU-related questions above.)

Comment by Jackson Wagner on Digital people could make AI safer · 2022-06-10T17:38:03.053Z · EA · GW

Digital people seem a long way off technologically, versus AI which could either be a long way off or right around the corner.  This would argue against focusing on digital people, since there's only a small chance that we could get digital people first.

But on the other hand, the neuroscience research needed for digital people might start paying dividends far before we get to whole-brain emulation.  High-bandwidth "brain-computer interfaces" might be possible long before digital people, and BCIs might also help with AI alignment in various ways.  (See this LessWrong tag.)  Some have also argued that neuroscience research might help us create more human-like AI systems, although I am skeptical on this point.

Comment by Jackson Wagner on Things usually end slowly · 2022-06-07T22:09:22.866Z · EA · GW

Other commenters are arguing that next time things might be different, due to the nature of technological risks like AI.  I agree, but I think there's an even simpler reason to focus attention on rapid-extinction scenarios: we don't have as much time to prevent them!

If we were equally worried about extinction due to AI, versus extinction due to slow economic stagnation / declining birthrates / political decay / etc, we might still want to put most of our effort into solving AI.   As they say, "there's a lot of ruin in an empire" -- if human civilization was on track to dwindle away over centuries, that also means we'd have centuries to try and turn things around.

Comment by Jackson Wagner on Here are the finalists from FLI’s $100K Worldbuilding Contest · 2022-06-07T18:06:53.465Z · EA · GW

Ah, I am so sorry!  I must have conflated your entry with 281 -- fixed in the post!

Comment by Jackson Wagner on Breaking Up Elite Colleges · 2022-06-07T06:44:21.070Z · EA · GW

I am familiar with this line of thinking, and I am pretty sympathetic to it. (I don't think that literally breaking up universities, antitrust style, would lead to more research happening, but it might perhaps lead to research on more useful topics, or something like that. It might also help reduce cost of living for ordinary folks by limiting/taxing the amounts people spend on education-related signaling, which would be great.) I see "encouraging more competition in education", which includes both taxing incumbent top schools like Harvard and also encouraging the formation of many new types of schools, as something that could be helpful to humanity from a progress-studies perspective of encouraging general economic growth and human thriving.

For better or worse, Effective Altruism often prefers to prioritize extremely heavily on the most effective cause areas, which can leave a lot of progress-studies-ish causes without a good place in EA even when their effects are pretty huge. Things like YIMBY, metascience, prediction markets, anti-aging research, charter cities, increased high-skill immigration, etc, might be huge boons for humanity, but these general interventions can sometimes feel like they've been orphaned by the EA movement, like "middle-term" cause areas lost between longtermism (which dominates on effectiveness) and neartermism (which prefers things to be empirically provable and relatively non-political).

I say all this to explain that usually I am fighting on behalf of the middle-termist causes, arguing that prediction markets are a great general intervention for civilization, where many EAs would prefer to just use some prediction techniques for understanding AI timelines, and not bother trying to scale up markets and improve society's epistemics overall.

But in this situation, the tables have turned!! Now I find myself in the opposite role -- I agree with you that encouraging competition in higher education would be good and I hope it happens, but I am like "Meh, is this really such a big problem that it should become an important EA cause area?" Instead of this general intervention, why not do something more focused, like deliberately exploiting the broken higher-education signaling game by purchasing influence at an elite university and then using that platform to focus more energy on core cause areas like AI safety: https://forum.effectivealtruism.org/posts/CkEsn3gjaiWJfwHHr/what-brand-should-ea-buy-if-we-had-to-buy-one?commentId=GKp8cwXSpXp6Jfb8H

Comment by Jackson Wagner on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-05-16T07:18:36.155Z · EA · GW

Returning to this thread to note that I eventually did enter the contest, and was selected as a finalist! I tried to describe a world where improved governance / decisionmaking technology puts humanity in a much better position to wisely and capably manage the safe development of aligned AI. https://worldbuild.ai/W-0000000088/

The biggest sense in which I'm "playing on easy mode" is that in my story I make it sound like the adoption of prediction markets and other new institutions was effortless and inevitable, versus in the real world I think improved governance is achievable but is a bit of a longshot to actually happen; if it does, it will be because a lot of people really worked hard on it. But that effort and drive is the very thing I'm hoping to help inspire/motivate with my story, which I feel somehow mitigates the sin of unrealism.

Overall, I am actually surprised at how dystopian and pessimistic many of the stories are. (Unfortunately they are mostly not pessimistic about alignment; rather there are just a lot of doomer vibes about megacorps and climate crisis.) So I don't think people went overboard in the direction of telling unrealistic tales about longshot utopias -- except to the extent that many contestants don't even realize that alignment is a scary and difficult challenge, thus the stories are in that sense overly-optimistic by default.

Comment by Jackson Wagner on Change your "Amazon Smile" charity to something effective · 2022-05-15T04:36:28.161Z · EA · GW

It gets even better! You can use the unobtrusive, single-purpose "Smile Always" browser extension and you'll never need to remember to specifically visit smile.amazon.com ever again: your browser will do it for you! https://chrome.google.com/webstore/detail/smile-always/jgpmhnmjbhgkhpbgelalfpplebgfjmbf?hl=en

The Amazon feature really does support a huge number of charities -- I have mine set to the Berkeley Existential Risk Initiative.

Comment by Jackson Wagner on Against “longtermist” as an identity · 2022-05-13T22:44:04.186Z · EA · GW

Also, "Effective Altruism" and neartermist causes like global health are usually more accessible / easier for ordinary people first learning about EA to understand.   As Effective Altruism attracts more attention from media and mainstream culture, we should probably try to stick to the friendly, approachable "Effective Altruism" branding in order to build good impressions with the public, rather than the sometimes alien-seeming and technocratic "longtermism".

Comment by Jackson Wagner on Could economic growth substantially slow down in the next decade? · 2022-05-11T20:23:41.489Z · EA · GW

The original "Limits to Growth" report was produced during the 1970s amid an oil-price crisis and widespread fears of overpopulation and catastrophic environmental decline.  (See also books like "The Population Bomb" from 1968.)  These fears have mostly gone away over time, as population growth has slowed in many countries and the worst environmental problems (like choking smog, acid rain, etc) have been mitigated.

This new paper is taking a 1972 computer model of the world economy and seeing how well it matches current trends.  They claim the match is pretty good, but they don't actually plot the real-world data anywhere; they merely claim that the predicted data is within 20% of the real-world values.  I suspect they avoided plotting the real-world data because this would make it more obvious that the real world is actually doing significantly better on every measure.  Look at the model errors ("∆ value") in their Table 2:

So, compared to every World3-generated scenario (BAU, BAU2, etc), the real world has:
- higher population, higher fertility, lower mortality (no catastrophic die-offs)
- more food and higher industrial output (yay!)
- higher overall human welfare and a lower ecological footprint (woohoo!)

The only areas where humanity ends up looking bad are in pollution and "services per capita", where the real world has more pollution and fewer services than the World3 model.  But on pollution, the goal-posts have been moved: instead of tracking the kinds of pollution people were worried about in the 1970s (since those problems have mostly been fixed), this measure has been changed to be about carbon dioxide driving climate change.  Is climate change (which is predicted by other economists and scientists to cut a mere 10% of GDP by 2100) really going to cause a total population collapse in the next couple decades, just because some ad-hoc 1970s dynamical model says so?  I doubt it.  Meanwhile, the "services per capita" metric represents the fraction of global GDP spent on education and health -- perhaps it's bad that we're not spending more on education or health, or perhaps it's good that we're saving money on those things, but either way this doesn't seem like a harbinger of imminent collapse.
 
Furthermore, the World3 model predicted that things like industrial output would rise steadily until they one day experienced a sudden unexpected collapse.  This paper is trying to say "see, industrial output has risen steadily just as predicted... this confirms the model, so the collapse must be just around the corner!"  This strikes me as ridiculous: so far the model has probably underperformed simple trend-extrapolation, which in my view means its predictions about dramatic unprompted changes in the near future should be treated as close to worthless.

Comment by Jackson Wagner on Why Helping the Flynn Campaign is especially useful right now · 2022-05-10T05:10:57.705Z · EA · GW

This really is a tight race!! Prediction markets at PredictIt and Metaculus are showing Carrick Flynn with just about a 50% chance to win. Political races don't get much more counterfactual than that! https://metaforecast.org/?query=flynn

(In addition to giving him a 47% chance in the primary, Metaculus gives him 40% odds to ultimately win both the primary and the general and become a Representative. This implies that if he can make it through the primary, he has an 85% chance (40/47) of winning in November. So, most of the battle is happening this week.)
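(For anyone checking that arithmetic, it is just the conditional-probability identity applied to the two Metaculus forecasts:)

```python
p_primary = 0.47              # Metaculus: wins the primary
p_primary_and_general = 0.40  # Metaculus: wins the primary AND the general
# P(general | primary) = P(primary and general) / P(primary)
print(f"{p_primary_and_general / p_primary:.0%}")  # 85%
```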

Comment by Jackson Wagner on If you had an hour with a political leader, what would you focus on? · 2022-05-09T02:36:03.298Z · EA · GW

The details of what's most tractable for your contact to work on might depend on specifics of their situation (whether they are Republican or Democrat, whether they are more influential with Congress or with the White House or with a State government, whether they are involved with any specialized congressional committees on particular topics, etc).  I agree that pandemic preparedness is probably the top general recommendation, but here is a list of some other areas where EA intersects with political issues:

AI, Pandemics, Nuclear War: Unfortunately I'm not sure if there are any shovel-ready AI alignment policies that EA wants to push for, despite the area's overwhelming importance.  On the biosecurity side, though, there is a lot of stuff that governments can do.  Pushing for some smart pandemic preparedness measures of the sort advocated by Guarding Against Pandemics might be my #1 recommendation.  I'm less familiar with the nuclear war / great-power-conflict space and how tractable that is.

Global Health & Animal Welfare: The animal welfare side of EA seems to be celebrating one win after another with their strategy of making moderate, well-researched corporate asks, funding ballot initiatives, and influencing various government food-safety standards.  Similarly, global health & development groups have historically gotten a lot of mileage by influencing existing foreign-aid spending to be used more effectively.  A lot of global health stuff is appealingly easy-to-understand, which might make it popular and non-controversial: "get rid of lead paint and air pollution", "make vaccines available against diseases", "help people who are suffering", etc. If you want to brush up on how to frame various EA causes in an especially positive and friendly way, Vox's Future Perfect column (which often covers EA global health and animal welfare topics) has a really great style.

General Scientific & Economic Development: The economic policy ideas of the "progress studies" movement aren't big traditional pillars of EA (yet!), but they tend to touch on a wide variety of subjects, so they generate a lot of shovel-ready political recommendations.  If you wanted, you could take a shotgun approach and try to just rapid-fire a bunch of suggestions from the playbook of "abundance agenda liberalism", leaving it up to the politician you're talking to to decide whether going YIMBY sounds more doable than trying to reform the FDA, or making it easier to build bigger ports and construct clean-energy infrastructure.

Improving Institutional Decisionmaking:  Personally, I'm most excited by big radical ideas mentioned by the FTX Future Fund, like prediction markets, quadratic funding, and charter cities.  Unfortunately, these probably aren't the most actionable or popular suggestions!  But there are a lot of other ideas out there: Approval voting seems like a good idea that will be broadly applicable.  Running better cost-benefit analyses on regulations is always a plus, as is deploying Phil-Tetlock-style forecaster training for important decisionmaking groups.  The most actionable ideas for your situation, however, might be something specific to whatever parts of government your contact has influence over -- like changing a specific rule about whether earmarks are allowed for inclusion in spending bills, or when a vote can be held on legislation.

Comment by Jackson Wagner on Help us make civilizational refuges happen · 2022-05-09T01:35:39.130Z · EA · GW

Many have observed that Elon Musk's Boring Company (which ostensibly is about improving the tunnel-digging state-of-the-art so that we can have lots of nice subway / tunnel infrastructure and alleviate traffic) seems like it would be quite helpful for digging airtight underground habitats on Mars, thus tying into Elon's SpaceX ambitions of settling the red planet.

Similarly, I feel that the project of constructing really-high-quality civilizational refuges could benefit from the technology required to build space habitats like the ISS.  As excerpted from a longer Forum comment of mine about the overlaps between EA & space exploration:

I think the bunker project has a lot of overlap with existing knowledge about how to build life-support systems for space stations, and with near-future projects to create large underground moonbases and mars cities.  It would certainly be worth trying to hire some human-spaceflight engineers to consult on a future EA bunker project.  I even have a crazy vision that you might be able to turn a profit on a properly-designed bunker-digging business — attracting ambitious employees with the long-term SpaceX-style hype that you’re working on technology to eventually build underground Martian cities, and earning near-term money by selling high-quality bunkers to governments and eccentric billionaires.

Comment by Jackson Wagner on Space governance - problem profile · 2022-05-09T01:25:40.400Z · EA · GW

Following up my earlier comment with a hodgepodge of miscellaneous speculations and (appropriately!) leaving the Long Reflection / Von-Neumann stuff for later-to-never. Here are some thoughts, arranged from serious to wacky:

  • Here is a link to a powerpoint presentation summarizing some personal research that I did into how bad it would be if GPS was taken 100% offline for an extended period. I look into what could cause a long-term GPS failure (cyberattack or solar storm maybe, deliberate ASAT attacks most likely — note that GPS is far from LEO so Kessler syndrome is not a concern), how different industries would be affected, and how bad the overall impact would be. I find that losing GPS for a year would create an economic hit similar in scale to the Covid-19 pandemic, although of course the details of how life would be affected would be totally different — most importantly, losing GPS likely wouldn’t be an isolated crisis, but would occur as part of a larger catastrophe like great-power war or a record-breaking solar storm.
  • I have a lot of thoughts about how EA has somewhat of a natural overlap with many people who become interested in space, and how we could do a better job of trying to recruit / build connections there. In lieu of going into lots of detail, I’ll quote from a facebook comment I made recently:

For a lot of ordinary folks, getting excited about space exploration is their way of visualizing and connecting with EA-style ideas about "humanity's long term future" and contributing to the overall advancement of civilization. They might be wrong on the object-level (the future will probably depend much more on technologies like AI than technologies like reusable rockets), but their heart is often in the right place, so I think it's bad for EA to be too dismissive/superior about the uselessness of space exploration. I believe that many people who are inspired by space exploration are often natural EAs at heart; they could just use a little education about what modern experts think the future will actually look like. It's similar to how lots of people think climate change is an imminent extinction risk, and it obviously isn't, but in a certain sense their “heart is in the right place” for caring about x-risk and taking a universal perspective about our obligations to humanity and the Earth, so we should try to educate/recruit them instead of just mocking their climate anxieties.

  • EA talks about space as a potential cause area. But I also think that NASA’s recent success story of transitioning from bloated cost-plus contracts to a system of competitive public-private partnerships has some lessons that the EA movement could maybe use. As the EA movement scales up (and becomes more and more funding-heavy / “talent-constrained”), and as we start digging into “Phase 2” work to make progress on a diverse set of technically complicated issues, it will become less possible to exercise direct oversight of projects, less possible to assume good-faith and EA-value-alignment on the part of collaborators, and so forth. Organizations like OpenPhil will increasingly want to outsource more work to non-EA contractors. This is mostly a good thing which reflects the reality of being a successful movement spending resources in order to wield influence and get things done. But the high-trust good-faith environment of early EA will eventually need to give way to an environment where we rely more on making sure that we are incentivizing external groups to give us what we want (using good contract design, competition, prizes, and other mechanisms). NASA’s recent history could provide some helpful lessons in how to do that.
  • Space resources: I am an engineer, not an economist, but it seems like Georgism could be a helpful framework for thinking about this? The whole concept of Georgism is that economic rents derived from natural resources should belong equally to all people, and thus should be taxed at 100%, leaving only the genuine value added by human labor as private profit. This seems like a useful economic system (albeit far from a total solution) if we are worried about “grabby” pioneers racing to “burn the cosmic commons”.  Just like the spectrum-auction processes I mentioned, individuals could bid for licenses to resources they wish to use (like an asteroid containing valuable minerals or a solar-power orbital slot near the sun), and then pay an ongoing tax based on the value of their winning bid. Presumably we could turn the tax rate up and down until we achieved a target utilization rate (say, 0.001% of the solar system’s resources each year); thus we could allocate resources efficiently while still greatly limiting the rate of expansion. (A simple version of this license-plus-ongoing-tax mechanism is sketched after this list.)
  • One potential “EA megaproject” is the idea of creating “civilizational refuges” — giant sealed bunkers deep underground that could help maintain civilization in the event of nuclear war, pandemic, or etc. I think this project has a lot of overlap with existing knowledge about how to build life-support systems for space stations, and with near-future projects to create large underground moonbases and mars cities. It would certainly be worth trying to hire some human-spaceflight engineers to consult on a future EA bunker project. I even have a crazy vision that you might be able to turn a profit on a properly-designed bunker-digging business — attracting ambitious employees with the long-term SpaceX-style hype that you’re working on technology to eventually build underground Martian cities, and earning near-term money by selling high-quality bunkers to governments and eccentric billionaires.
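To flesh out the self-assessed license idea from the space-resources bullet above: one standard version of a license-plus-ongoing-tax scheme is a Harberger tax, where each license holder declares their own valuation, pays an ongoing tax proportional to that declaration, and must sell to anyone offering the declared price, which keeps the self-assessments honest. A minimal sketch, with hypothetical names and a made-up tax rate:

```python
from dataclasses import dataclass

@dataclass
class SelfAssessedLicense:
    """Harberger-style license: the holder names their own price, pays tax on
    it, and must sell to anyone who offers that price.  Overvaluing inflates
    your tax bill; undervaluing invites a forced sale."""
    holder: str
    declared_value: float   # holder's self-assessed valuation
    tax_rate: float = 0.05  # hypothetical: 5% of declared value per period

    def tax_due(self) -> float:
        return self.declared_value * self.tax_rate

    def attempt_purchase(self, buyer: str, offer: float) -> bool:
        # Any offer at or above the declared value forces a transfer.
        if offer >= self.declared_value:
            self.holder = buyer
            self.declared_value = offer  # new holder starts from their offer
            return True
        return False

asteroid = SelfAssessedLicense(holder="MinerCo", declared_value=1_000_000)
print(asteroid.tax_due())                               # 50000.0 per period
print(asteroid.attempt_purchase("RivalCo", 900_000))    # False: offer too low
print(asteroid.attempt_purchase("RivalCo", 1_100_000))  # True: license transfers
print(asteroid.holder)                                  # RivalCo
```

Turning the tax rate up or down is the knob for hitting a target utilization rate, as described in the bullet above.
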
Comment by Jackson Wagner on Space governance - problem profile · 2022-05-09T00:02:45.232Z · EA · GW

Hi!  I’m an aerospace engineer at the Bay Area startup Xona Space Systems & a big fan of Effective Altruism.  Xona basically works on creating a next-generation, commercial version of GPS. Before that I helped build, launch, and operate a pair of cubesats at a small company called SpaceQuest, and before that I got a master’s degree at CU Boulder. I’ve also been a longtime fan of SpaceX, Kerbal Space Program, and hard sci-fi.

I think this is a good writeup that does a pretty good job of disentangling many of the different EA-adjacent ideas that touch on aerospace topics. In this comment I will talk about different US government agencies and why I think US policy is probably a more actionable space-governance area than broad international agreements; hopefully I’ll get around to writing future comments on other space topics (about the Long Reflection, about the differences between trying to influence prosaic space exploration vs Von Neumann stuff, about GPS and Xona Space Systems, about the governance of space resources, about other areas of overlap between EA and space), but we'll see if I can find the time for that...

Anyways, I’m surprised that you put so much emphasis on international space agreements through the UN[1], and relatively little on US space policy.  Considering that the USA has huge and growing dominance in many space areas, it’s pretty plausible that US laws will be comparably influential to UN agreements even in the long-term future, and certainly they are quite important today. Furthermore, US regulations will likely be much more detailed / forceful than broad international agreements, and US space policy might be more tractable for at least American EAs to influence. For example, I think that the Artemis Accords (signed by 19 countries so far, which represent 1601 of the 1807 objects launched into space in 2021) will probably be more influential at least in the near term than any limited terms that the upcoming UN meeting could get universal agreement on — the UN is not about to let countries start claiming exclusive-economic-zone-esque territory on other planets, but the Artemis Accords arguably do this![2]

With that in mind, here is an incomplete list of important space-related US agencies and what they do. Some of these probably merit inclusion in your list of “key organizations you could work for”:

  • Naturally, NASA makes many decisions about the overall direction of space exploration. There are big debates about long-term strategic goals: Should we target the Moon or Mars (or learn how to construct increasingly large space stations) for human exploration and settlement? Should space exploration be driven mostly by government itself, or should the government just be one of many customers trying to encourage the development of a private space economy? Which early R&D technologies (like in-space nuclear power, advanced ion propulsion, ISRU techniques, life support equipment) should we fund now in order to help us settle space later? How should we balance and prioritize among goals like human space settlement, robotic planetary exploration, space-telescope astronomy, etc? NASA’s decisions are very influential because they fund & provide direction for private companies like SpaceX and Blue Origin, and their international partnerships mean that smaller space agencies of other Western countries often join NASA initiatives. Of course NASA has to follow the direction of Congress on some of the big-picture decisions, but NASA has lots of leeway to make their own lower-level decisions and to influence Congress’s thinking by making recommendations and essentially lobbying for what NASA thinks is the best approach to a given issue. NASA is not a regulatory agency, but besides directing much actual space activity, they also often create influential international partnerships (like the International Space Station) and agreements (like the Artemis Accords) which might be influential on the far-future.
  • Similarly, DARPA and the US Air Force + Space Force clearly make many important decisions relevant to anti-satellite / arms-race / international-norm-setting considerations. Like NASA, they also invest in important R&D projects, like the current DARPA project to demonstrate nuclear propulsion.
  • The FCC is the USA’s main space regulatory agency. They are in charge of allocating licenses allowing satellite operators to use radio frequencies.[3]  They are also responsible for licensing the launch of satellite constellations (including the funny rules where you have to launch half of what you apply for within 3 years or risk losing your right to launch anything more). Finally, the FCC is the main regulator who is working to create a proper regulatory environment for mitigating space debris, a system that will probably involve posting large bonds or taking out liability insurance against the risk of debris. (Bonds / insurance could also provide a prize-like funding mechanism for third parties to capture and deorbit hazardous, defunct satellites.)
  • The FAA, who mostly regulate airplane safety, are also in charge of licensing the launch and reentry of rockets, capsules, etc. This seems less relevant to the long-term-future than the FCC’s regulation of satellite operations, but who knows — since the FAA today regulates air traffic management and commercial space tourism, they might someday end up in charge of human flights to Mars or all around the solar system, and the norms they establish might go on to influence human space settlement even further afield.
  • Although the FCC is in charge of regulating space debris, it is STRATCOM (the nuclear-ICBM-command people) which currently provides satellite operators with timely collision-risk alerts. This responsibility is slowly being migrated to the Office of Space Commerce under NOAA, and also increasingly handled by commercial space-situational-awareness providers like LeoLabs.
  • I’m not sure who exactly makes the big-picture norm-setting diplomacy decisions about US space policy, like Kamala Harris’s recent speech pledging that the USA will eschew testing antisatellite weapons. I presume these decisions just come from White House staff in consultation with relevant experts.

In a similar spirit of “paying attention to the concrete inside-view” and recognizing that the USA is by far the leader in space exploration, I think it’s further worth paying attention to the fact that SpaceX is very well-positioned to be the dominant force in any near-term Mars or Moon settlement programs. Thus, influencing SpaceX (or a handful of related companies like Blue Origin) could be quite impactful even if this strategy doesn’t feel as EA-ish as doing something warm and multilateral like helping shape a bunch of EU rules about space resources:

  • SpaceX is pretty set on their Mars plan, so it would likely be futile to try to convince them to totally change their core objective, but influencing SpaceX’s thoughts about how a Mars settlement should be established and scaled up (from a small scientific base to an economically self-sufficient city), how it should be governed, etc, could be very important.
  • If SpaceX had some general reforms it wanted to advocate for — such as about space debris mitigation policy — their recommendations might have a lot of sway with the various US agencies with which they have a close relationship.
  • SpaceX might be more interested in listening to occasionally sci-fi-sounding rationalist/EA advice than most governing bodies would. Blue Origin is also interesting in this sense; they are sometimes reputed to have rigid management and might be less overall EA-sympathetic than an organization led by Elon Musk, but they seem very interested in think-tank-style exploration of futurist concepts like O’Neill Cylinders and using space resources for maintaining long-run economic growth, so they might be eager to advocate for wise far-future space governance.
  1. ^

    Universal UN treaties, like those on nuclear nonproliferation and bioweapons, seem best for when you are trying to eliminate an x-risk by getting universal compliance. Some aspects of space governance are like this (like stopping someone from launching a crazy von Neumann probe or ruining space with ASAT attacks), but I see many space governance issues which are more about influencing the trajectory taken by the leader in space colonization (ie, SpaceX and the USA). Furthermore, many agreements on things like ASAT could probably be best addressed in the beginning with bilateral START-style treaties, hoping to build up to universal worldwide treaties later.

  2. ^

    The Accords have deliberately been pitched as a low-key thing, like “hey, this is just about setting some common-sense norms of cooperation and interoperability, no worries”, but the provisions about in-space resource use, and especially the establishment of “safety zone” perimeters around nations’ launch/landing sites, are in the eyes of many people basically opening the door towards claiming national territory on celestial bodies.

  3. ^

    The process of getting spectrum is currently the riskiest and most onerous part of most satellite companies’ regulatory-approval journeys. Personally, I think that this process could probably be much improved by switching out the current paperwork-and-stakeholder-consultation-based system for some fancy mechanism that might involve auctioning self-assessed licenses or something. But fixing the FCC’s spectrum-licensing process is probably not super-influential on the far-future, so whatever.

Comment by Jackson Wagner on The AI Messiah · 2022-05-06T01:23:53.085Z · EA · GW

I wasn't really trying to say "See, messianic stories about arriving gods really work!", so much as to say "Look, there are a lot of stories about huge dramatic changes; AI is no more similar to the story of Christianity than it is to stories about new technologies or plagues or a foreign invasion."  I think the story of European world conquest is particularly appropriate not because it resembles anyone's religious prophecies, but because it is an example where large societies were overwhelmed and destroyed by the tech+knowledge advantages of tiny groups.  This is similar to AI, which would start out outnumbered by all of humanity but might have a huge intelligence + technological advantage.

Responding to your request for times when knowledge of European invasion was actionable for natives:  The "Musket Wars" in New Zealand were "a series of as many as 3,000 battles and raids fought among Māori between 1807 and 1837, after Māori first obtained muskets and then engaged in an intertribal arms race in order to gain territory or seek revenge for past defeats".  The bloodshed was hugely net-negative for the Māori as a whole, but individual tribes who were ahead in the arms race could expand their territory at the expense of enemy groups.

Obviously this is not a very inspiring story if we are thinking about potential arms races in AI capabilities:

Māori began acquiring European muskets in the early 19th century from Sydney-based flax and timber merchants. Because they had never had projectile weapons, they initially sought guns for hunting.  Ngāpuhi chief Hongi Hika in 1818 used newly acquired muskets to launch devastating raids from his Northland base into the Bay of Plenty, where local Māori were still relying on traditional weapons of wood and stone. In the following years he launched equally successful raids on iwi in Auckland, Thames, Waikato and Lake Rotorua, taking large numbers of his enemies as slaves, who were put to work cultivating and dressing flax to trade with Europeans for more muskets. His success prompted other iwi to procure firearms in order to mount effective methods of defence and deterrence and the spiral of violence peaked in 1832 and 1833, by which time it had spread to all parts of the country except the inland area of the North Island later known as the King Country and remote bays and valleys of Fiordland in the South Island. In 1835 the fighting went offshore as Ngāti Mutunga and Ngāti Tama launched devastating raids on the pacifist Moriori in the Chatham Islands.

Comment by Jackson Wagner on The AI Messiah · 2022-05-06T00:56:04.013Z · EA · GW

Here are a couple thoughts on messianic-ness specifically:

  • With the classic messiah story, the whole point is that you know the god's intentions and values.  Versus of course the whole point of the AI worry is that we ourselves might create a godlike being (rather than a preexisting being arriving), and its values might be unknown or bizarre/incomprehensible.   This is an important narrative difference (it makes the AI worry more like stories of sorcerers summoning demons or explorers awakening mad Lovecraftian forces), even though the EA community still thinks it can predict some things about the AI and suggest some actions we can take now to prepare.
  • How many independent messianic claims are there, really?  Christianity is the big, obvious example.  Judaism (but not Islam?) is another.  Most religions (especially when you count all the little tribal/animistic ones) are not actually super-messianic -- they might have Hero's Journey figures (like Rama from the Ramayana) but that's different from the epic Christian story about a hidden god about to return and transform the world.

I am interpreting you as saying:
"Messianic stories are a human cultural universal, humans just always fall for this messianic crap, so we should be on guard against suspiciously persuasive neo-messianic stories, like that radio astronomy might be on the verge of contacting an advanced alien race, or that we might be on the verge of discovering that we live in a simulation."  (Why are we worried about AI and not about those other equally messianic possibilities?  Presumably AI is the most plausible messianic story around?  Or maybe it's just more tractable since we're designing the AI vs there's nothing we can do about aliens or simulation overlords.)

But per my second bullet point, I don't think that Messianic stories are a huge human universal.  I would prefer a story where we recognize that Christianity is by far the biggest messianic story out there, and it is probably influencing/causing the perceived abundance of other messianic stories in culture (like all the messianic tropes in literature like Dune, or when people see political types like Trump or Obama or Elon as "savior figures").  This leads to a different interpretation:

"AI might or might not be a real worry, but it's suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy.  Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend."

This take is interesting to me, as some people (Robin Hanson, slow takeoff people like Paul Christiano) have predicted a more decentralized version of the AI x-risk story where there is a lot of talk about economic doubling times and whether humans will still complement AI economically in the far future, instead of talking about individual superintelligent systems making treacherous turns and being highly agentic.  It's plausible to me that the decentralized-AI-capabilities story is underrated because it is more complicated / less viral / less familiar a narrative.  These kinds of biases are definitely at work when people, eg, bizarrely misinterpret AI worry as part of a political fight about "capitalism".  It seems like almost any highly-technical worry is vulnerable to being outcompeted by a message that's more based around familiar narrative tropes like human conflict and good-vs-evil morality plays.

But ultimately, while interesting to think about, I'm not sure how far this kind of "base-rate tennis" gets us.  Maybe we decide to be a little more skeptical of the AI story, or lean a little towards the slow-takeoff camp.  But this is a pretty tiny update compared to just learning about different cause areas and forming an inside view based on the actual details of each cause.

Comment by Jackson Wagner on The AI Messiah · 2022-05-05T17:52:44.172Z · EA · GW

"Humanity has seen many claims of this form." What exactly is your reference class here? Are you referring just to religious claims of impending apocalypse (plus EA claims about AI technology? Or are you referring more broadly to any claim of transformative near-term change?

I agree with you that claims of supernatural apocalypse have a bad track record, but such a narrow reference class doesn't (IMO) include the pretty technically-grounded concerns about AI. Meanwhile, I think that a wider reference class including other seemingly-unbelievable claims of impending transformation would include a couple of important hits. Consider:

  • It's 1942. A physicist tells you, "Listen, this is a really technical subject that most people don't know about, but atomic weapons are really coming. I don't know when -- could be 10 years or 100 -- but if we don't prepare now, humanity might go extinct."

  • It's January 2020 (or the beginning of any pandemic in history). A random doctor tells you "Hey, I don't know if this new disease will have 1% mortality or 10% or 0.1%. But if we don't lock down this entire province today, it could spread to the entire world and cause millions of deaths."

  • It's 1519. One of your empire's scouts tells you that a bunch of white-skinned people have arrived on the eastern coast in giant boats, and a few priests think maybe it's the return of Quetzalcoatl or something. You decide that this is obviously crazy -- religious-based forecasting has a terrible track record, I mean these priests have LITERALLY been telling you for years that maybe the sun won't come up tomorrow, and they've been wrong every single time. But sure enough, soon the European invaders have slaughtered their way to your capital and destroyed your civilization.

Although the Aztec case is particularly dramatic, many non-European cultures have the experience of suddenly being invaded by a technologically superior foe powered by an exponentially self-improving economic engine -- these invasion stories seem at least as similar to AI worries as the Christianity comparison you're drawing. There might even be more stories of sudden European invasion than predictions of religious apocalypse, which would tilt your base-rate prediction decisively towards believing that transformational changes do sometimes happen.

Comment by Jackson Wagner on [Needs Funding] I invented a cheap, scalable tool for fighting obesity · 2022-04-28T01:38:39.810Z · EA · GW

Some questions I would have if I was an EA grantmaker:

  • Is this really super-scalable?  How many people would buy a dedicated gesture-detecting device?  Would it be better to write software for a device like a Fitbit or Apple Watch, which millions of people already own?
  • Wouldn't people learn to ignore the notifications over time?  If I put a post-it note on my fridge saying "stop snacking!", that might cause me to think twice a few times, but eventually I might just start ignoring the post-it.
  • Even if wearing the device was 100% effective at eliminating unconscious snacking, would this make a dent in obesity?  Wouldn't people just get hungrier and then eat more at meals?  The path between "use your willpower to snack a bit less" and "actually lose weight and keep it off" is absolutely notorious for being convoluted, impenetrable, and largely uncharted by modern scientific understanding.  My prior on proposed obesity interventions actually working is very low.

Comment by Jackson Wagner on The Fabian society was weirdly similar to the EA movement · 2022-04-26T23:35:51.775Z · EA · GW

You might be interested in this list of social-change movements by Mark Lutter (former head of Charter Cities Institute).  Excerpting the first third of the page:

Inspired by Patrick Collison's Fast page, I thought it worthwhile to build a list of examples of social change. One of the key challenges of the 21st century is rebuilding our institutions for the digital age. Examples of past successes and failures of social change can help inform that approach.

Fabian Society - A British socialist organization dedicated to advancing democratic socialism via a gradualist approach, rather than revolution, in democracies. Founded in 1884, the society counted many of the leading intellectuals of the era among its members, including George Bernard Shaw, H.G. Wells, and Sidney and Beatrice Webb. It was influential and arguably successful in its efforts, founding the London School of Economics and Political Science, and influencing many leaders of former British Colonies, including India's Jawaharlal Nehru, Pakistan's Muhammad Ali Jinnah, and Singapore's Lee Kuan Yew.

Corn Laws Repeal - The corn laws were tariffs on imported food and corn in the first half of the 19th century in the United Kingdom. They kept prices high, benefitting domestic producers and landowners while hurting the average Brit. The repeal of the corn laws is seen as a decisive move to free trade and a victory for liberalism. It also represented a shift in power from rural areas to urban areas. The Anti-Corn Law League is one of the early examples of mass mobilization, writing op-eds, hosting speeches, mobilizing action, even electing men to parliament. It became a model for later reform movements.

YIMBYs: YIMBYs, or "yes in my backyard", is a pro-housing movement that has recently emerged among urban millennials. They're opposed to NIMBYs, and advocate for increasing density in urban areas to lower housing costs. The first groups were started in 2014 in the San Francisco Bay Area, the center of the housing crisis. The movement has gone international, with chapters in the United Kingdom and Canada. Despite its nascence, there have been several prominent wins as cities including Berkeley, Sacramento, and Minneapolis move away from single-family housing requirements.

Mont Pelerin Society: A network of scholars dedicated to preserving and advancing classical liberal ideas in the aftermath of World War II. Founded by luminaries including Friedrich Hayek, Frank Knight, Karl Popper, Ludwig von Mises, George Stigler, and Milton Friedman. The joke is that in the 1950s all libertarians knew each other, in part because the movement was so small and in part because it was well networked through organizations like Mont Pelerin. The ideas of Hayek, Friedman, and Mont Pelerin are credited with the Thatcher and Reagan revolutions.

Meiji Restoration: A period of industrialization in Japan led by the state. Japan had closed itself off from international trade for centuries, before being forced to open its borders by Commodore Perry in 1853. In 1868 power was concentrated under the Emperor in a modernization effort that ultimately proved successful. The policy changes included the removal of the samurai's previous privileges, knowledge sharing by attracting Western workers and education, and an emphasis on industrialization. The modernization was successful, with Japan winning a war against Russia in 1905.

See Mark Lutter's site for a bunch more!

Comment by Jackson Wagner on Consider Changing Your Forum Username to Your Real Name · 2022-04-26T19:18:08.689Z · EA · GW

Hugely seconded. When I was signing up for an account, I considered going anonymous (what if I want to discuss controversial things!), but I figured the upside career & social potential of using my real name outweighed the downside risk that cancel culture might someday come for Effective Altruism. Since then, my decision has been totally vindicated -- numerous people have reached out to me for conversations about EA stuff, or even to ask if I'd like to apply for a job at their EA org. I feel like this would have happened less if I wasn't using my real name, since people wouldn't be able to take the intermediate getting-to-know-me step of googling for my linkedin, visiting https://jacksonw.xyz/, etc. That intermediate step of internet research probably makes people more comfortable reaching out and making a connection.

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-26T01:06:45.615Z · EA · GW

The "Stratchery" newsletter proposes a sophisticated scheme to split Twitter into two companies, with core Twitter retaining control of the social graph and underlying infrastructure, but relaxing their control of the end-user's UI experience, advertising, and content moderation.  Those endpoint presentation services would be provided by numerous companies competing in the free market.  Ultimately there is a vision for Twitter to evolve into essentially an internet standard for notifications, supporting many uses that sometimes look nothing like today's Twitter.

I'm not sure if it would be altruistically good for the world to loosen control in this way and open up Twitter via APIs (although it would certainly help protect the free exchange of ideas from undue censorship).   But it's an interesting analysis of Twitter's business situation.

Comment by Jackson Wagner on Corporate governance reform efforts? · 2022-04-20T04:10:38.678Z · EA · GW

There are potentially two prongs of investigation here.  One would be changing the fundamental way that organizations are structured; this topic is explored eloquently in this cold-takes post, and I agree it seems very promising (although I don't know much about it).

Another side to "improving corporate governance" might include efforts to encourage corporate adoption of assorted management / forecasting / decisionmaking techniques at lower levels -- not fundamentally changing the shareholders/board/CEO/etc structure, but perhaps exploring things like using prediction markets in various contexts.  The benefit here would be twofold: first, by improving corporate decisionmaking, we would be spreading better management technology and marginally increasing economic growth.  Second and more importantly, corporate adoption of innovative new mechanisms (quadratic voting might also be relevant here, and I'm sure there are others), we help mature those mechanisms to the point that they become easier for other organizations including governments to start using.

Comment by Jackson Wagner on How to organise 'the one percent' to fix climate change · 2022-04-16T23:01:39.897Z · EA · GW

I am reading this as "we should create a social movement among The One Percent, which organizes together to cancel and oppress anyone who opposes strong climate action".  I think this is crazy for a lot of reasons:

  1. The idea of creating a social movement to advocate for a change in behavior/norms, and having it start small with early-adopters and "true believers", then grow over time as the benefits to joining the group become larger and larger with more members, is not a brilliant new idea.  Rather, it is how pretty much all social movements already operate.  The question you need to be thinking about isn't just the idea of starting a movement, but "How will my social movement manage to outcompete all the others?"  Your answer seems to be "we'll be more willing to use aggressive cancel-culture techniques against our enemies", and that probably works well in the late-game (when you're already in control of the government, media, etc, you are free to be as totalitarian as you wish), but works badly in the early-game.  What kind of early adopters would join such a spiteful movement with such a cynical long-term plan?  Probably (totally spitballing here) angry people who feel rejected by ordinary society -- not charismatic politicians and brilliant scientists and powerful entrepreneurs who all have better coalitions they could join elsewhere.  That kind of movement will have a hard time snowballing to world domination.  By contrast, Effective Altruism has had lots of success by being explicitly friendly and open-minded rather than combative and political.
  2. Who is "The One Percent"?  I don't think you ever define it.  Is this the most powerful 1% of people on the planet?  Or the richest 1% in terms of net worth?  Or are we talking yearly incomes?  Twitter followers?  You seem to be talking about rich people in the United States and Europe, but what about carbon emissions in China, India, and etc?  Even if we achieve totalitarian cancel-culture in the western world, will we be able to bully China and other countries into following along?  You might quibble, "look, these are minor differences -- the top 1% most influential people in the world are gonna be pretty similar now matter how you slice it".  But I think it's worse than that.  I think "The One Percent" is not even a natural or coherent group of people -- it's just a slogan.  If you look at graphs of net worth, wealth is clearly distributed on a power law.  This means that there is often just as big a difference between the median person and the 1%, as between the 1% and the 0.1%, or the 0.1% and the 0.01%.  I think this is a big mistake that left-leaning people make when they think about class conflict -- they assume that "the rich" is a coherent group with a set of common interests, but actually there are many tiers of increasing richness with different common interests, there aren't any hard divisions between the tiers (if you are in "the 2%", which side do you choose in a class conflict?), and this phenomenon means that class conflict is harder to inflame than many activists assume.
  3. I am reading you as proposing a campaign of repression and cultural change to make opposing climate action totally taboo.  But this kind of thing has many obvious downsides: by silencing opposition and making people afraid to speak their minds freely, you will silence necessary debates on society's direction.  If we reorient our entire culture around climate change, what will happen to people who believe that climate change actually isn't that bad and humanity might have bigger problems (like preparing for pandemics or working to avert nuclear war)?  Will those people be cancelled, and will crucial work on solving other global problems not get done?  Even within climate change, will there be room for necessary debate about strategy when everyone is rushing to demonstrate their allegiance to the party line?  (How important is nuclear power for mitigating climate change?  Does geoengineering have a role, and if so, what kinds of geoengineering?  Will we have to electrify 100% of transportation, even airplanes, or should we also aim to create net-zero synthetic fuel from carbon in the air?)  What about people who think that climate change is a big problem demanding urgent action, but who also think that exaggerated climate catastrophism is causing people to suffer from mental illness, like anxiety and depression, to the extent that people are refusing to have children because they are haunted by visions of an apocalyptic, uninhabitable planet, even though no scientist believes that such a dire future could ever occur due to global warming?

Overall, your post seems to express a mindset like "Everybody already knows that climate change is humanity's #1 problem, and that this is a dire crisis justifying almost anything to solve it.  Since it's obvious to everyone what the necessary course of action is, we don't need to indulge the luxuries of free discussion and scientific exploration and political debate, which all just leads to infighting and delay.  Instead, we just need to browbeat the world into all working together and doing what we know is right!"

This is the correct strategy in some situations, like if it's WW2 and you are being invaded by a fascist nation and the obvious response is to just try to fight the invaders with everything you've got.

But in my view, it's not the correct strategy for our current age and the problems humanity now faces.  Climate change is bad, but it is not apocalyptic.  Estimates often say things like "the USA could lose up to 10% of GDP by 2100!", which would be like scattering 5 extra recession-years of 0% growth among the next 80 years, instead of those years seeing normal 2% growth.  That would be pretty lame, and I hope we take strong measures to avert that, but IMO it isn't worth turning all of culture into a never-ending totalitarian propaganda cancel-fest, because creating such an oppressive culture would impede humanity's efforts to make progress on... pretty much every other problem we face in society.  (And I believe we face many dire problems in addition to climate change!)  Instead, I think we have to have the humility to admit that the correct answers AREN'T all obvious (for starters, people don't even agree about nuclear power or geoengineering), and we need to build movements that try to think hard and explore different potential solutions, and if anything encourage GREATER freedom, debate, and disagreement, instead of just browbeating.
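(A quick back-of-the-envelope check on that "5 extra recessions" framing -- all the numbers here are my own illustrative assumptions, not anyone's official projection:)

```python
# How much GDP is lost by 2100 if 5 of the next ~80 years see 0% growth
# instead of a normal 2%? (Illustrative numbers only.)
baseline = 1.02 ** 80         # 2% growth every year
with_recessions = 1.02 ** 75  # 5 of those years grow 0% instead
loss = 1 - with_recessions / baseline
print(f"{loss:.1%}")          # ~9.4% -- roughly the "up to 10% of GDP" figure
```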

Comment by Jackson Wagner on 13 ideas for new Existential Risk Movies & TV Shows – what are your ideas? · 2022-04-13T02:36:50.728Z · EA · GW

Reposting & paraphrasing some of my comments on an earlier thread about movies & documentaries:

Where are the realistic (Contagion-like) disaster films?
Personally, I would love to see a well-made disaster movie about the real, modern conception of AI risk.  I am also surprised and disappointed by the fact that (even after covid!!) there are not more good movies about pandemics in the works.  (Especially when there are so many zombie and post-apocalyptic movies, which are like the less-realistic cousin of the would-be pandemic genre.)

I am also surprised and disappointed by the fact that there are not really any disaster movies about "modern warfare" or "world war 3" -- maybe a Tom-Clancy-style movie about how a miscommunication between the USA and China leads them to the brink of war, or just a movie realistically portraying what a future large-scale war might look like, with attacks on satellites and drone-swarms and cyberattacks on infrastructure and the like.  I think a realistic AI movie could be very helpful, but the effect on the world might be negative for some of these ideas.  A realistic movie about biorisks might be subject to infohazard concerns, while a realistic movie about modern warfare might inflame international tensions if not done very carefully.

Conversely, I would also love to see a movie or TV series depicting a realistic attempt at an optimistic, utopian near-future -- perhaps introducing the reader to promising new technologies and new types of social/governance institutions that could help solve major current problems.

Make a rationalist/EA modern-day version of "Cosmos":
If I was an EA grantmaker, I'd want to start small by maybe hiring an educational-youtube-video personality (like John Green's "Crash Course") to make an Effective Altruism series.  If that seemed to show good results, then I would escalate to funding a decent Netflix-style documentary movie, which I imagine could be had for something like $2-5 million -- "An Inconvenient Truth" had a budget of around $1.5 million.  Then, if everything was still going peachy, we could set our sights higher and consider a big Cosmos-style TV series with a big marketing push to really try and get the word out.  In a Cosmos-inspired TV show, each episode could tackle a different philosophical idea or global problem, perhaps roughly following the 80,000 Hours Podcast series "Effective Altruism: An Introduction" and "Effective Altruism: Ten Global Problems", sprinkling in some key highlights of the LessWrong sequences.  Interviews with experts would alternate with experimental demonstrations, historical anecdotes,  and CGI visualizations meant to make the abstract ideas of effective altruism vivid and memorable, just like Cosmos did so well.

Comment by Jackson Wagner on What would you do with a Facebook meme page with 250k followers? · 2022-04-13T01:45:36.437Z · EA · GW

Repost some of the best (and most generally accessible) stuff from the 50X smaller Dank EA Memes group? Hope that this gets more people interested in EA. https://m.facebook.com/groups/OMfCT/about/

Maybe advertise local EA meetups as well by the same logic.

Alternatively, go in a more political direction and post consequentialist memes about, eg, the FDA's overly cautious drug approval, the high efficacy of EA causes like global health/development aid, etc. Maybe steal memes from r/neoliberal. But if done poorly, this would run the risk of devolving into a highly obnoxious politicized facebook group. (The trolley problem format is already overused to make dumb political arguments that aren't even very consequentialist.)

For that reason, probably try to move away from the actual "trolley problem" format, and towards being a meme group where what's important is that the jokes have a consequentialist mindset.

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-07T02:20:31.754Z · EA · GW

A list of fixes requested by Tyler Cowen of Marginal Revolution:

1. A better organization of DMs, including functional search.  And why do some of my DMs seem to disappear?

2. End-to-end encryption for DMs.

3. Available blue checks for more people.

4. Lately they have begun serving me up “popular” tweets from major tweeters multiple times.  I hate this.

5. Eliminate the quote tweet function, to limit pile-ons.

6. Sometimes my “scroll down” function gets stuck.  Unstick it.

7. I don’t myself prefer Promoted Tweets, and in theory yes I hate the bots.  But in practice, viewed only selfishly, neither has been a major problem for me.  I am not doubting they may be problems for others.

8. Longer-run, when AI is better and cheaper, how about a button “You didn’t subscribe to these tweets, but we think you really might like them.”  But apart from the main flow and screen.

9. I wish for a slightly smarter list of trending topics.  Yes I am greatly interested in the war in Ukraine, but I really don’t need “Zelenskyy says Russia is trying to hide ‘guilt in mass killing’ as the war in Ukraine continues”.

Comment by Jackson Wagner on Are there any AI Safety labs that will hire self-taught ML engineers? · 2022-04-06T23:51:50.833Z · EA · GW

Per Andy Jones over at LessWrong:

If you think you could write a substantial pull request for a major machine learning library, then major AI safety labs want to interview you today.

I work for Anthropic, an industrial AI research lab focussed on safety. We are bottlenecked on aligned engineering talent. Specifically engineering talent. While we'd always like more ops folk and more researchers, our safety work is limited by a shortage of great engineers.

I've spoken to several other AI safety research organisations who feel the same.

Comment by Jackson Wagner on I feel anxious that there is all this money around. Let's talk about it · 2022-04-06T22:31:59.768Z · EA · GW

I think I already shared this comment on facebook with you, but here I am re-upping my complaints about polis and feature suggestions for ways that it would be cool to explore groups and axes in more detail: https://github.com/compdemocracy/polis/discussions/1368
(UPDATE: I got a really great, prompt response from the developers of Polis.  Turns out I was misinterpreting how to read their "bullseye graph", and Polis actually provides a lot more info than I thought for understanding the two primary axes of disagreement.)

If you have your own thoughts after conducting various experiments with polis (I thought this was interesting and I liked seeing the different EA responses), perhaps you too should ping the developers with your ideas!

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-06T21:32:30.708Z · EA · GW

I personally don't know (here is a not-super-informative article about it), although Facebook seemed to launch their project as a totally separate app unconnected to Facebook-the-website?  I'd guess they simply struggled to attract users to the new app.

I think that one ultimate goal of prediction markets is to break out of the "enthusiast crowd" of rationalists and Phil Tetlock types who are excited about prediction markets for their own sake, and instead establish a wider social norm among journalists, experts, politicians, etc: if you aren't willing to take a stand on prediction markets, you might be full of BS.  The enthusiast crowd might flock to apps like Metaculus and Polymarket, but the wider "pundit crowd" might have to have prediction markets somewhat forced upon them, as something they must engage with if they want to build/maintain their reputation.

Since Twitter is already such a dominant platform for journalists, breaking news, political debates, and celebrity figures like politicians and founders, Twitter seems like a natural fit for this purpose.  People already care deeply about Twitter metrics like their follower count and bluecheck status, much more than they care about karma points on a random new app like Forecast.  If you could earn your way to a bluecheck by doing well enough on predictions, or if all the cool smart pundits showed their prediction score on their profile, that could provide a lot of motivation to get normal people thinking more rationally about political debates and current events.
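(To make "prediction score" concrete: one standard way to grade probabilistic forecasts is the Brier score -- the mean squared error between stated probabilities and what actually happened, where lower is better. A minimal sketch with made-up forecasts:)

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# 0.0 is perfect; always guessing 50% earns 0.25; 1.0 is maximally wrong.
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical pundit's track record on three resolved questions:
forecasts = [0.9, 0.3, 0.6]   # stated probabilities of "yes"
outcomes  = [1,   0,   0]     # what actually happened
print(brier_score(forecasts, outcomes))  # ~0.153 -- decently calibrated
```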

Ideally it might be best if Twitter could try to focus people's attention to a smaller number of higher-volume prediction markets in a more centralized, Metaculus-y way, rather than "everyone can start their own market" in the style of Facebook's Forecast, Reddit's Predictions, and Manifold Markets.  For example, imagine if instead of fighting Covid "misinformation" by directing people towards a bunch of official CDC proclamations, they instead directed people towards a page more like Metaculus' fortified essays, with CDC bullet points mixed in with live prediction markets on questions like "how much do the vaccines reduce covid risk?".  Similarly, Twitter could host high-volume markets on breaking news events like the invasion of Ukraine.  But a big downside of this might be that by centralizing the prediction markets more, Twitter would face more political pressure and controversy.  And of course it would be a difficult task to constantly create new markets on breaking-news topics.

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-06T16:49:44.412Z · EA · GW

Instead of the current, AI-based system of content moderation, Twitter could experiment with different methods of community governance and judicial review.

Imagine a system where AI auto-censorship decisions could be appealed by staking some karma-points on the odds that a community moderator would support the appeal if they reviewed it.  Others could then stake their own karma points for or against, depending on how they thought the community moderator would rule.  An actual community moderator would only have to be brought in for the most contentious cases where the betting markets are between, say, 30% and 70% -- this would make the system more scalable since most appeals would get resolved by the community without ever escalating to a moderator.  

You could then have multiple levels of appeals and judges, creating another market on whether some kind of Twitter Supreme Court would uphold the moderator's decision.  (The above idea is ripped directly from Robin Hanson but I can't find the exact post where he describes it.  It also resembles the dispute-resolution mechanism of the UMA crypto coin.)
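(A toy sketch of the escalation rule I have in mind -- everything below, from the 30%/70% thresholds to the function names, is my own invented illustration rather than Hanson's or UMA's exact design:)

```python
# Market-based moderation appeals: stakers bet karma on whether a moderator
# would uphold the appeal; only genuinely contentious cases reach a human.

def resolve_appeal(karma_for: float, karma_against: float,
                   low: float = 0.30, high: float = 0.70) -> str:
    """Decide an appeal from the implied market probability of it succeeding."""
    p_success = karma_for / (karma_for + karma_against)
    if p_success >= high:
        return "appeal granted automatically"   # market is confident
    if p_success <= low:
        return "appeal denied automatically"    # market is confident
    return "escalate to human moderator"        # contested -- bring in a human

print(resolve_appeal(karma_for=900, karma_against=100))  # granted (p = 0.90)
print(resolve_appeal(karma_for=450, karma_against=550))  # escalated (p = 0.45)
```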

Making nuanced, human-based judgement scalable in this way could both directly improve the quality of twitter discourse, and help familiarize people with an innovative new social technology.  Also, by creating a system of community governance instead of AI-based censorship, it might offer a superior middle path compared to the current "AI-based censorship vs 4chan anarchy" debates about social media content moderation.

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-06T16:41:34.760Z · EA · GW

Twitter could implement a play-money prediction market just like metaculus or manifold markets -- they could even consider buying one of these teams.  Ideally, starting or voting on a prediction market would be as easy as running a Twitter poll.  (Reddit recently did something similar.)  Having large, metaculus-style prediction markets on newsworthy events might directly help important online conversations become more productive, more reality-based, and less polarized.  And in the long run, familiarizing people with how prediction markets work might also encourage/legitimize the further adoption of prediction markets as information sources to inform decisionmaking.
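(For concreteness, the classic mechanism behind play-money markets like these is Hanson's logarithmic market scoring rule, or LMSR: traders buy outcome shares from an automated market maker whose prices shift with demand. A minimal two-outcome sketch -- the liquidity parameter b and the trade below are invented for illustration:)

```python
import math

# Hanson's LMSR market maker for a binary question ("will X happen?").
# q_yes / q_no are outstanding shares; b controls how fast prices move.

def cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes: float, q_no: float, b: float = 100.0) -> float:
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

q_yes, q_no = 0.0, 0.0
print(price_yes(q_yes, q_no))                        # 0.5 -- fresh market
charge = cost(q_yes + 50, q_no) - cost(q_yes, q_no)  # buy 50 YES shares
q_yes += 50
print(round(charge, 2))                  # ~28.11 play-dollars paid
print(round(price_yes(q_yes, q_no), 3))  # new implied probability ~0.622
```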

Comment by Jackson Wagner on How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board? · 2022-04-06T16:37:47.270Z · EA · GW

Twitter could create an easy-to-use, secure voting infrastructure for use by student groups, nonprofits, small businesses, unions, and other relatively low-stakes situations where you mostly just want to get a reasonably trustworthy voting system up and running easily.  Twitter could use this platform to advertise the merits of designs like approval voting and quadratic voting, boosting interest in those types of voting and building legitimacy for them to be adopted in higher-stakes contexts.
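(To illustrate the two methods mentioned: in approval voting, each voter approves any number of candidates and the most-approved candidate wins; in quadratic voting, casting n votes on an issue costs n^2 credits, so expressing intensity gets expensive fast. A toy sketch with invented ballots:)

```python
from collections import Counter

# Approval voting: each ballot is simply the set of candidates the voter approves.
ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A"}]
approvals = Counter(c for ballot in ballots for c in ballot)
print(approvals.most_common(1))  # [('B', 3)] -- B wins with 3 approvals

# Quadratic voting: casting n votes costs n**2 credits from a fixed budget,
# so voters can express intensity but can't cheaply dominate an issue.
def qv_cost(votes: int) -> int:
    return votes ** 2

print(qv_cost(3))   # 9 credits buys 3 votes
print(qv_cost(10))  # 100 credits for 10 votes -- a whole budget on one issue
```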

Comment by Jackson Wagner on EA and Global Poverty. Let's Gather Evidence · 2022-04-06T01:44:11.132Z · EA · GW

For whatever reason, the EA Forum's culture is to have a very friendly, kind of academic-ish / wikipedia-ish writing style, even in comments.  Personally I think this goes too far.  But when you are just spinning off random jokes and personal associations and offensive stuff, it becomes legitimately harder to understand:

- "Goyim" -- I'm pretty sure this means non-jews but I don't know the exact emotional connotations?  I guess this is a reference to high Ashkenazi IQ and the idea that there are a lot of jews in EA?  (Is it really the case that EA is overwhelmingly jewish?  I feel like EA is less jewish, and more british, than LessWrong rationalism.  Although that is just a gut impression and I don't think people would understand me if I started making jokes referring to OpenPhil as "the crown" or "parliament" or whatever.)  Anyways, what do you mean by this joke -- are you just saying "there are a lot of smart people in EA" and using jewishness as a synonym for "smart people"?  Or are you saying that EA has jewish values or is in some sense a fundamentally jewish project?  (As an ethnically british/german guy, I disagree and don't see what's so jewish about EA??)

- Insulting everyone with sub-130 IQ -- are you just trying to tell people how smart you are?  Or are you trying to express a worldview where IQ is the dominant factor in whether someone can correctly recognize that longtermism is the best cause area?  (As a longtermist myself, I am sad to report that I have a lot of very smart friends and coworkers who have yet to spontaneously convert to longtermism, or even to effective altruism broadly.)  Or are you saying that high-IQ people are generally more interested in complex-multi-step plans, so longtermism appeals to their personalities more?

- Calling things cults or religions -- famously, there are lots of different ways that things can be compared to cults or religions.  I get the sense that you are insulting rationality, but what exactly are you trying to communicate?  That rationality is too centralized around a few charismatic leaders?  That its ideas are non-falsifiable?

- For someone interested in recruiting "less bright but still very nice and helpful people", you seem pretty off-putting to that goal?  For someone afraid of "persecution and state repression", you seem pretty happy to fire off #nofilter hot takes?  These layers of irony / dissonance make it unclear what your message is and where you are coming from.

Besides the above points of confusion, your writing style also pattern-matches onto "rantings of an internet rando who is probably ramming everything into one hedgehog-y worldview".  EAs are (as you say) smart people whose time is valuable; they don't have time to engage with lots of people who seem belligerent and ideological on the off chance that their shitposts actually contain valuable insight, because even though it sometimes happens, the prior probability of encountering valuable insight is low.


Comment by Jackson Wagner on Forecasting Newsletter: March 2022 · 2022-04-06T00:14:56.052Z · EA · GW

All investments go to zero in the case of existential risk, so it's hard to price it correctly... I thought that the article was great, but I would have appreciated a more comprehensive treatment.

You may be interested in reading my long-form (albeit meandering) thoughts on this exact issue!  Here is the post, wherein I analyze and respond to an essay by Peter Thiel on this same subject of the interaction between markets and apocalyptic risks that would end those very markets.  Among other things, I wonder if (precisely because markets are incentivized to ignore X-risk), we can somehow use markets (including specially-created prediction markets) to indirectly measure X-risk or related quantities.

On the subject of Hanania's "Why War Forecasting is Hard", I'm surprised that he didn't mention the unpredictable effects of... I'm not sure what to call this, maybe "preference cascades" or just the nature of coalitional fights?  Experts & forecasters alike were surprised on the downside at how quickly the Afghan army gave up against the Taliban, and they were surprised on the upside at how stalwart the Ukrainian forces have been against Russia.  I don't think this is all just the physical complexity of war (ie, the difficulty of assessing whether tire maintenance is a crucial factor) or the difficulty of assessing deep cultural factors ("are Ukrainians too westernized to be willing to fight", "do Russians really consider Ukraine a part of their ancestral homeland", and similar).  It seems like the decision of whether to fight or give up is perhaps just a volatile and uncertain thing, dependent on social perception of which way the wind is blowing and whether other people have decided to fight or give up.

ie, in another universe, if for whatever reason more Ukrainian leaders had fled the country (perhaps the government relocates from Kiev to Lviv) and there were fewer early stories of heroic resistance (like the tales about Snake Island, etc), perhaps many Ukrainians would have started giving up and Russia would have had much more success with their invasion, despite no change whatsoever in the fundamental factors (military hardware or cultural stuff or etc).  This is just my own pundit-like speculation, I know, but it strikes me as perhaps a more plausible explanation of why war forecasting might be especially/uniquely tricky, compared to other similarly complex things like forecasting the success of a new rocket's first launch or the growth of a nation's economy -- rockets and economies can be just as vexing as unmaintained tires!

Comment by Jackson Wagner on How the Ukraine conflict may influence spending on longtermist projects · 2022-03-31T05:02:35.587Z · EA · GW

Agreed on all of these except climate change.  I think the Russia/Ukraine war will probably result in Europe and America investing more in energy technology (which will mostly be green and low-carbon energy), and will probably raise the price of oil (ie lowering the total amount of oil that gets burned), such that we might come out ahead on our climate goals.  I suppose my disagreement with you is that I see progress on climate/energy as significantly driven by technology and economics, so it is okay if we take a hit on international coordination and unity in exchange for getting better green-energy tech.  But others could certainly disagree here!

The effects on nuclear and biorisk seem pretty direct, as you outlined.  The effect on AI seems more indirect, but AI is so important that possibly this is still the biggest effect of the bunch.  If the world is just heading towards less trust / less coordination / more militarism / more great-power competition / more friction between the USA and China, that is probably bad for humanity in a lot of ways, including AI x-risk as you described.

Comment by Jackson Wagner on What is Intersectionality Theory? What does it mean for EA? · 2022-03-27T23:11:48.522Z · EA · GW

You said, "a good understanding of intersectionality might thus help improve the effectiveness of the community overall".  But I am left wondering if a "good understanding" of intersectionality is even possible, since the term seems vague and poorly-defined.

  • Does/should "intersectionality" refer specifically to the idea that different people encounter different overlapping types of discrimination?  Or does "intersectionality" merely mean that sometimes different issues overlap, in general, and it's nice to consider that?  If it's a specific social-justice idea about overlapping oppression, maybe that's relevant to making the EA community diverse and welcoming, but it wouldn't be relevant to the calculations about what cause areas are most effective (like your animal welfare example).  On the other hand, if it just refers to the general idea that sometimes things overlap, I'm not sure we need a special word for that phenomenon, or why this basic fact needs to be called a "Theory".
  • Where is the argument that intersectionality (either as it applies to social justice issues, or in the general overlapping case) is actually a significant concern?  There's no attempt to quantify how much the "whole is greater than the sum of the parts".  If the whole is just 1% greater than the sum of the parts, maybe it's no big deal and can be safely ignored when making rough estimates.  (ie, if we magically overcame all sexism to eliminate anti-women bias, and also all racism to overcome anti-minority bias, how much anti-minority-women bias would be left?  Maybe "intersectional" anti-minority-women bias is 50% or more of the problem, but maybe it's very small relative to the first-order problems of "non-intersectional" racism and sexism.  I've never seen anyone try to explore whether "intersectionality" is a huge deal or just a minor epicycle in the social-justice universe.)

Finally, to be honest, when I've heard people using the term "intersectional", they've often used it like this:  

"We all face different problems, and those might appear to be separate political issues (disability rights, gay rights, labor activism, etc).  But actually, although none of us face EVERY form of oppression, all of us face several forms.  Collectively, we should realize that we're all in this together -- we form a natural alliance of the collective oppressed versus the collective oppressors.  Therefore, it's naïve to have a non-political non-partisan movement (like your cute little disability-rights lobbying group) independent from the totalizing political crusade of mainstream social-justice leftism.  Instead, we should all band together as part of an "intersectional struggle" -- everyone support social-justice politics, then social justice politics will win, and then our political alliance will help disband ALL the forms of oppression.  The point is, it's better to join together in one big social-justice political alliance, and everyone take the party line on all important issues, rather than wasting our time on lots of little independent non-political efforts that don't support each other."

I recognize that's a long paragraph, but that's honestly the main context in which I've heard people use "intersectional".  The political logic is reasonable enough, I suppose, if a bit cynical and realpolitik.  But I think that joining a grand political alliance would be exactly the wrong thing for effective altruism at this time -- the "neutrality" of EA (both politically and in the sense of "cause neutrality") is IMO one of effective altruism's greatest virtues, which helps it attract smart people, focus clearly on what's true & important, make progress in areas that other groups can't, etc.  So, even if the idea of "things sometimes overlap" turns out to need a technical term, I'd personally be very hesitant to use the word "intersectionality", until I could be convinced that the association between "intersectionality" and "...therefore we should join a totalizing political crusade" was just a quirk of my own experience and not an association that any other people share.

Comment by Jackson Wagner on A new media outlet focused in part on philanthropy · 2022-03-12T23:59:24.666Z · EA · GW

For some additional context, on Puck, Teddy describes himself as "covering power, influence, and ego in Silicon Valley", which is maybe a bit less EA-centric than this post makes it sound.  Here is a recent interview with Teddy about covering the "billionaire beat".  And a quote from that interview about attempting to give people neutral, grounded facts about the activities of the rich:

Whether you want to be outraged by billionaires’ philanthropy, political spending or tax avoidance, or whether you think that billionaires are God’s gift to the green earth, you need the facts. And I think that too often, we're deprived of them.  So I don't really approach the beat as a critic or defender of the system. I just think that there's an alarming lack of fact-based reporting about it, and that's a damn shame.

Here is a quote about how he covers billionaire philanthropy:

Much of the tech billionaire set is very thin-skinned about some of the questions that I ask. I don't say that necessarily as a criticism, but I think that lots of them think that the billionaire beat itself puts them inherently on the defensive.

Take the topic of philanthropy, which is something I write about a lot. I think a lot of wealthy people are not used to serious philanthropy journalism, as a concept. So the very idea that someone could be asking questions like, “How is your charitable enterprise structured?” Or, “How much money did you give away to this cause?” Or, “What is your net worth, and how is that reflected or not reflected in the amount of money you give away?”

They see those questions as a fundamental threat, not because they believe that they're unfair questions, but because the entire premise of the question is something that's foreign to them. They think about philanthropy as almost above criticism, above journalism. Like, “Yeah you can critique my business record, but don't critique what I'm doing for the kids.”

I think that misses the forest for the trees to some extent, when there's obviously a raging debate in this country about inequality and about whether the wealthy should be as wealthy as they are. I see philanthropy journalism as essential to answering those questions. They might disagree, but I don't work for them.

Here are two (unpaywalled) articles of Teddy's from the past year about big EA donors:

Comment by Jackson Wagner on The Bunker Project: are there considerations for social cohesiveness? · 2022-03-11T05:34:05.160Z · EA · GW

The idea for x-risk defense bunkers would be to create a "civilizational" refuge, much larger and much more secure (airtight, unusually deep underground, etc) than any military/government bunker. It would be designed to function for years or decades (for instance, it would probably contain a small nuclear power plant), and contain a population of hundreds or thousands of people -- plus all the critical industrial tools and scientific knowledge required to rapidly reboot civilization when the surface became habitable again.

First of all, this means that the logistical and design challenges are pretty significant... you'd have to worry about recycling resources like water and air, and maybe even growing food. It would be a lot like architecting a sci-fi "generation ship" or a Mars colony.

As for social dynamics: On the one hand, I'd hope that a population in the hundreds to thousands would make this much easier. The bunker might be crowded and stressful, but you'd be still living in something like a normal society, where you could go to different places each day and hang out with a variety of different social groups. You wouldn't be crammed into a tiny space with just five other people for months on end. Also, even if things went REALLY poorly and people started killing each other in the bunker, most people would still survive unless they did something suicidal/terroristic like destroying a piece of critical life-support infrastructure. A little social/political drama would probably be fine.

But on the other hand, the whole point is to create something capable of maintaining and restarting civilization all by itself. Would a thousand people sitting in a bunker have the skills to maintain industrial civilization? This seems pretty hard, even if the residents were well-chosen to create a diverse mix of skills and expertise. I guess they could always emerge from the bunker and return to subsistence agriculture, but that seems like a pretty precarious situation for the last 1000 humans on Earth!

So, I think the danger is less "the crew would go mad from isolation and kill each other", and more "our civilizational bunker might keep people alive but slowly fail at the task of actually preserving civilization, and then humanity could peter out afterwards".

Comment by Jackson Wagner on On presenting the case for AI risk · 2022-03-09T19:25:41.405Z · EA · GW

This is a great post; I'll try to change the way I talk about AI risk in the future to follow these tips.

I am reminded of blogger Dynomight's interesting story about how he initially got a bunch of really hostile reactions to a post about ultrasonic humidifiers & air quality, but was able to lightly reframe things using a more conventional tone and the hostility disappeared, even though the message and vast majority of the content was the same:

Previously my approach was to sort of tackle the reader and scream “HUMIDIFIERS → PARTICLES! [citation] [citation] [citation] [citation]” and “PARTICLES → DEATH! [citation] [citation] [citation]”. I changed it to start by conceding that ultrasonic humidifiers don’t always make particles and it’s not certain those particular particles cause harm, et cetera, but PEER-REVIEWED RESEARCH PAPERS say these things are possible, so it’s worth thinking about.

After making those changes, no one had the same reaction anymore.

In his case, the solution was to add some friendly caveats -- personally I think we do this plenty, at least in the semi-formal writing style of most EA Forum posts!  But the logic of building "up" from real-world details and extrapolation, rather than building "down" from visions of AI apocalypse (which probably sounds to most people like attempting to justify an arbitrary sci-fi scenario), might be an equally powerful tool for talking about AI risk.