Posts

Aidan O'Gara's Shortform 2021-01-19T01:20:48.155Z
Best resources for introducing longtermism and AI risk? 2020-07-16T17:27:29.533Z
How to find good 1-1 conversations at EAGx Virtual 2020-06-12T16:30:26.280Z

Comments

Comment by aidan-o-gara on Aidan O'Gara's Shortform · 2021-01-19T01:20:48.845Z · EA · GW

Three Scenarios for AI Progress

How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: 

  • "Solving Intelligence": Within the next 50 years, a top AI lab like Deepmind or OpenAI builds a superintelligent AI system, by using massive compute within our current ML paradigm.
  • "Comprehensive AI Systems": Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything, each new job requires fine-tuning and domain knowledge and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
  • "No takeoff": Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history. Progress continues as usual.

To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:

  • When do we achieve TAI? AGI? Superintelligence? How fast is takeoff? Who builds it? How much compute does it require? How much does that cost? Agent or Tool? Is machine learning the paradigm, or do we have another fundamental shift in research direction? What are the key AI Safety challenges? Who is best positioned to contribute?

The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.

Here's my rough sketch of what each of these means. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]

  • Solving Intelligence: Within the next 20-50 years, a top AI lab like DeepMind or OpenAI builds a superintelligent AI system.
    • Machine learning is the paradigm that brings us to superintelligence. Most progress is driven by compute. Our algorithms are similar to the human brain, and therefore require similar amounts of compute.
    • It becomes a compute war. You're taking the same fundamental algorithms and spending a hundred billion dollars on compute, and it works. (Informed by Ajeya's report, the most important upshot of which, IMO, is that spending a truly massive amount of money can cover a sizeable portion of the difference between our current compute and the compute of the human brain. If human brain-level compute is an important threshold, then the few actors who could spend $100B+ have an advantage of decades over actors who can only spend millions. Would like to discuss this further.)
    • This is most definitely not CAIS. There would be one or two or ten superintelligent AI systems, but not two million.
    • Very few people can contribute effectively to AI Safety, because to contribute effectively you have to be at one of only a handful of organizations in the world. You need to be in "the room where it happens", whether that's the AI lab developing the superintelligence or the government attempting to monitor the project. The handful of people who can contribute are incredibly valuable.
    • What AI safety stuff matters?
      • Technical AI safety research. The people right now who are building AI that scales safely. It turns out you can do effective research now because our current methods are the methods that bring us to superintelligence, and whether or not our current research is good enough determines whether or not we survive.
      • Highest levels of government, for their ability to regulate AI labs. A project like this could be nationalized, or carried out under strict oversight from government regulators. Realistically, I'd expect the opposite: governments would be too slow to see the risks and rewards in such a technical domain.
      • People who imagine long-term policies for governing AI. I don't know how much useful work exists here, but I have to imagine there's some good stuff about how to run the world under superintelligence. What's the game theory of multipolar scenarios? What are the points of decisive strategic advantage?
  • Comprehensive AI Systems: Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything, and each new job requires fine-tuning, domain knowledge, and human-in-the-loop supervision, but soon enough we hit annual GDP growth of 25%.
    • Governments go about international relations the same as usual, just with better weapons. There are some strategic effects of this that Henry Kissinger and Justin Ding understand quite well, but there's no instant collapse into one world government or anything. There are a few outside risks here that would be terrible (a new WMD, or missile defense systems that threaten MAD), but basically we just get killer robots, which will probably be fine.
      • Killer robots are a key AI safety training ground. If they're inevitable, we should be embedded within enemy lines in order to deploy them safely.
    • We have lots of warning shots.
    • What are the existential risks? Nuclear war. Autonomous weapons accidents, which I suppose could turn out to be existential?? Long-term misalignment: over the next 300 years, we hand off the fate of the universe to the robots, and it's not quite the right trajectory.
    • What AI Safety work is most valuable?
      • Run-of-the-mill AI Policy work. Accomplishing normal government objectives often unrelated to existential risk specifically, by driving forward AI progress in a technically-literate and altruistically-thoughtful way.
      • Driving forward AI progress. It's a valuable technology that will help lots of people, and accelerating its arrival is a good thing.
        • With particular attention to safety. Building a CS culture, a Silicon Valley, a regulatory environment, and international cooperation that will sustain the three-hundred-year transition.
      • Working on military AI systems. They're the killer robots most likely to run amok and kill some people (or 7 billion). Malfunctioning AI can also cause nuclear war by setting off geopolitical conflict. Also, new WMDs would be terrible.
  • No takeoff: Looks qualitatively similar to the above, except growth remains steady around 2% for at least several centuries. We remain in the economic paradigm of the Industrial Revolution, and AI makes an economic contribution similar to that of electricity or oil without launching us into a new period of human history.
    • This seems entirely possible, maybe even the most likely outcome. I've been surrounded by people talking about short timelines from a pretty young age, so I never really thought about this possibility, but "takeoff" is not guaranteed. The world in 500 years could resemble the world today; in fact, I'd guess most thoughtful people don't think much about transformative AI and would assume that this is the default scenario.
    • Part of why I think this is entirely plausible is because I don't see many independently strong arguments for short AI timelines:
      • IMO the strongest argument for short timelines is that, within the next few decades, we'll cross the threshold for using more compute than the human brain. If this turns out to be a significant threshold and a fair milestone to anchor against, then we could hit an inflection point and rapidly see Bostrom Superintelligence-type scenarios.
        • I see this belief as closely associated with the entire first scenario described above: the idea, held by OpenAI/DeepMind, that we will "solve intelligence" with an agenty AI running a simple fundamental algorithm with massive compute and effectively generalizing across many domains.
      • IIRC, the most prominent early argument for short AI timelines, as discussed by Bostrom, Yudkowsky, and others, was recursive self-improvement. The AI will build smarter AIs, meaning we'll eventually hit an inflection point of runaway improvement positively feeding into itself and rapidly escalating from near-human to lightyears-beyond-human intelligence. This argument seems less popular in recent years, though I couldn't say exactly why. My only opinion would be that this seems more like an argument for "fast takeoff" (once we have near-human level AI systems for building AI systems, we'll quickly achieve superhuman performance in that area), but does not tell you when that takeoff will occur. For all we know, this fast takeoff could happen in hundreds of years. (Or I could be misunderstanding the argument here, I'd like to think more about it.)
      • Surveys asking AI researchers when they expect superhuman AI have received lots of popular coverage and might be driving widespread acceptance of short timelines. My very subjective and underinformed intuition puts little weight on these surveys compared to the object-level arguments. The fact that people trying to build superintelligence believe it's possible within their lifetime certainly makes me take that possibility seriously, but it doesn't provide much of an upper bound on how long it might take. If the current consensus of AI researchers proves to be wrong about progress over the next century, I wouldn't expect their beliefs about the next five or ten centuries to hold up - the worldview assumptions might just be entirely off-base.
      • These are the only three arguments for short timelines I've ever heard and remembered. Interested if I'm forgetting anything big here.
      • Compare this to the simple prior that history will continue with slow and steady single-digit growth as it has since the Industrial Revolution, and I see a significant chance that we don't see AI takeoff for centuries, if ever. (That's before considering object-level arguments for longer timelines, which admittedly I don't see many of and therefore don't put much weight on.)
    • I haven't fully thought through all of this, but would love to hear others' thoughts on the probability of "no takeoff".

This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios. 

Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.

Comment by aidan-o-gara on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-18T23:33:29.172Z · EA · GW

This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).

So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms with similar levels of this key variable? Or, does the human breakthrough to performing on-par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?

Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)

Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?

I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.

Comment by aidan-o-gara on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-09T22:58:49.831Z · EA · GW

What impact do you think you were able to have as a State Rep? Are there any specific projects or policies you’re particularly proud of?

Comment by aidan-o-gara on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-23T01:10:31.543Z · EA · GW

Yes, looks like LTFF is also looking for funding. Edited, thanks.

Comment by aidan-o-gara on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T09:31:48.538Z · EA · GW

Fascinating that very few top AI Safety organizations are looking for more funding. By my count, only 4 of these 17 organizations are even publicly requesting donations this year: three independent research groups (GCRI, CLR, and AI Impacts) and an operations org (BERI). Across the board, it doesn't seem like AI Safety is very funding constrained.

Based on this report, I think the best donation opportunity among these orgs is BERI, the Berkeley Existential Risk Initiative. Larks says that BERI "provides support to existential risk groups at top universities to facilitate activities (like hiring engineers and assistants) that would be hard within the university context."  According to BERI's blog post requesting donations, this support includes:

  • $250k to hire contracted researchers and research assistants for university and independent research groups.
  • $170k for additional support: productivity coaches, software engineers, copy editors, graphic designers, and other specialty services.
  • Continuing to employ two machine learning research engineers to work alongside researchers at CHAI.
  • Hiring Robert Trager and Joslyn Barnhart as Visiting Senior Research Fellows with GovAI, along with a small team of supporting research personnel.
  • Supporting research on European AI strategy and policy in association with CSER.
  • Combining immediate COVID-19 assistance with long-term benefits.

BERI is also supporting new existential risk research groups at other top universities, including: 

  • The Autonomous Learning Laboratory at UMass Amherst, led by Phil Thomas
  • Meir Friedenberg and Joe Halpern at Cornell
  • InterACT at UC Berkeley, led by Anca Dragan
  • The Stanford Existential Risks Initiative
  • Yale Effective Altruism, to support x-risk discussion groups
  • Baobao Zhang and Sarah Kreps at Cornell

Donating to BERI seems to me like the only way to give more money to AI Safety researchers at top universities. FHI, CHAI, and CSER aren't publicly seeking donations, seemingly because anything you directly donate might end up either (a) replacing funding they would've received from their university or other donors, or (b) being limited in terms of what they're allowed to spend it on. If that's true, then the only way to counterfactually increase funding at these groups is through BERI.

If you would like, click here to donate to BERI.

Comment by aidan-o-gara on Founders Pledge Climate & Lifestyle Report · 2020-12-10T06:14:50.052Z · EA · GW

Thank you for sharing this, really love the Main Conclusions here. As usual with comments, most of what you’re saying makes sense to me, but I’d like to focus on one quibble about the presentation of your conclusions.

I think Figure 2 in the report could easily be misinterpreted as strong evidence for a conclusion you later disavow: that by far the most important lifestyle choice for reducing your CO2 emissions is whether you have another child. The Key Takeaways section begins with this striking chart where the first bar is taller than all the rest added up, but the body paragraphs give context and caveats before finishing on a more sober conclusion. The conclusion makes perfect sense to me, but it's the opposite of what I would've guessed looking at the first chart in the section. If you're most confident in the estimates that account for government policy, you could make them alone your first chart, and only discuss the other (potentially misleading) estimates later.

I probably only noticed this because you’re discussing such a hot button issue. Footnotes work for dry academic questions, but when the question is having fewer kids to reduce carbon emissions, I start thinking about how Twitter and CNN would read this.

Anyways, hope that’s helpful, feel free to disagree, and thanks for the great research!

Comment by aidan-o-gara on Make a Public Commitment to Writing EA Forum Posts · 2020-11-19T00:35:52.183Z · EA · GW

Your blog is awesome, looking forward to reading these posts and anything else you put on the Forum!

Comment by aidan-o-gara on The emerging school of patient longtermism · 2020-08-09T02:08:09.660Z · EA · GW

I really like this kind of post from 80,000 Hours: a quick update on their general worldview. Patient philanthropy isn’t something I know much about, but this article makes me take it seriously and I’ll probably read what they recommend.

Another benefit of shorter updates might be sounding less conclusive and more thinking-out-loud. Comprehensive, thesis-driven articles might give readers the false impression that 80K is extremely confident in a particular belief, even when the article tries to accurately state the level of confidence. It’s hard to predict how messages will spread organically over time, but frequently releasing smaller updates might highlight that 80K’s thinking is uncertain and always changing. (Of course, the opposite could be true.)

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-08-04T06:10:21.049Z · EA · GW

I meant the body text of posts could be darker - I wouldn't change the buttons or other light-grey text.

Interesting that the study found serif fonts more readable. I'm not aware of conclusive evidence in either direction; I'd just heard folk wisdom that sans-serif is more readable on a computer screen.

My general opinion is that the comments section on this forum is extremely easy to read and clean to look at, some of my favorite formatting anywhere, but personally I find the body text of posts much more difficult to read than on most sites. I wonder what most people think; I wouldn't expect everyone to have the same experience.

Comment by aidan-o-gara on Will Three Gorges Dam Collapse And Kill Millions? · 2020-07-26T19:37:07.310Z · EA · GW

Here's an informative prediction writeup from Metaculus user beala.

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-23T06:30:05.516Z · EA · GW

Ben Garfinkel made an interesting comment here:

“...the historical record suggests that permanent collapse is unlikely. (Complex civilizations were independently developed multiple times; major collapses, like the Bronze Age Collapse or fall of the Roman Empire, were reversed after a couple thousand years; it didn't take that long to go from the Neolithic Revolution to the Industrial Revolution; etc.).”

Comment by aidan-o-gara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-23T06:26:10.922Z · EA · GW

What resources would you recommend on ethical non-naturalism? Seems like a plausible idea I don’t know much about.

Comment by aidan-o-gara on What skill-building activities have helped your personal and professional development? · 2020-07-21T21:50:41.576Z · EA · GW

Volunteering to write research summaries for Faunalytics, an animal advocacy organization, improved my writing skills.

Reading Cal Newport's blog and books has improved my productivity skills. I'd recommend Deep Work to anyone, and college students might like his blogging on student success.

Reading FiveThirtyEight and playing with my own toy models of elections and sports taught me a lot about statistics, data, and prediction.

Comment by aidan-o-gara on Best resources for introducing longtermism and AI risk? · 2020-07-21T03:19:42.466Z · EA · GW

Cool! Thanks for asking for clarification, I didn't quite realize how much ambiguity I left in the question.

I'm mainly interested in persuading people I know personally who are already curious about EA ideas. Most of my successful intros in these situations consist of (a) an open-ended, free-flowing conversation, followed by (b) sending links to important reading material. Conversations are probably too personal and highly varied for advice that's universally applicable, so I'm most interested in the links and reading materials you send to people.

So, my question, better specified: What links do you send to introduce AI and longtermism?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-21T03:12:13.060Z · EA · GW

Two consecutive hyphens should autocorrect to an em dash!

That way, a parenthetical clause in the middle of your sentence - like this one - isn't offset by "space hyphen space" on either side--or, even worse, by "hyphen hyphen". Instead, autocorrect two hyphens to a nice, clean em dash—like that.

I think this is a common feature for text editors - Microsoft Word definitely uses it.
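
Concretely, the replacement rule I have in mind is roughly this (a rough Python sketch for illustration only, not how the Forum editor actually implements autocorrect):

```python
import re

def autocorrect_em_dash(text: str) -> str:
    # Replace a standalone "--" (not part of a longer run of hyphens) with an em dash.
    return re.sub(r"(?<!-)--(?!-)", "\u2014", text)

# "a parenthetical clause -- like this one -- reads better" gets the em dash treatment
print(autocorrect_em_dash("a parenthetical clause -- like this one -- reads better"))
```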

Comment by aidan-o-gara on richard_ngo's Shortform · 2020-07-18T01:58:22.011Z · EA · GW

“...whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.”

I’d strongly guess that this is the case. Maybe Community posts should be removed from Forum favorites?

Comment by aidan-o-gara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T19:23:52.126Z · EA · GW

Good idea, thanks! I've posted a question here.

More broadly, should AMA threads be reserved for direct questions to the respondent and the respondent's answers? Or should they encourage broader discussion of those questions and ideas by everyone?

I'd lean towards AMAs as a starting point for broader discussion, rather than direct Q&A. Good examples include the AMAs by Buck Shlegeris and Luke Muehlhauser. But it does seem that most AMAs are more narrow, focusing on direct question and answer.

[For example, this question isn't really directed towards Ben, but I'm asking anyways because the context and motivations are clearer here than they would be elsewhere, making productive discussion more likely. But I'm happy to stop distracting if there's consensus against this.]

Comment by aidan-o-gara on Best resources for introducing longtermism and AI risk? · 2020-07-16T17:40:58.812Z · EA · GW

If anyone's interested, here was my intro to grantmaking and global poverty:

...

If you'd prefer more mainstream ways of improving the world, here are some top organizations and job opportunities:

  • Grantmakers within effective altruism are researching the most impactful donation opportunities and giving billions to important causes. 
    • GiveWell researches top donation opportunities in global health and poverty. Founded by ex-hedge fund analysts, they focus on transparency, detailed public writeups, and justifying their decisions to outsiders. You might like their cost-effectiveness model of different charities. They're hiring researchers and a Head of People. 
    • The Open Philanthropy Project funds a wider range of causes - land use reform, pandemic preparedness, basic science research, and many more - in their moonshot approach of "hits-based giving". OpenPhil has billions to donate to its causes, because it's funded by Dustin Moskovitz, co-founder of Facebook and Asana.  
  • World-class organizations are working directly on all kinds of highly impactful problems (and they're hiring! :P)
    • GiveDirectly takes money and gives it to poor people, no strings attached. They typically hire from top private sector firms and have an incredibly well-credentialed team. They're recommended by GiveWell as an outstanding giving opportunity. 
    • Effective global poverty organizations include many for-profits (Sendwave (jobs), TapTap Send (jobs)) and non-profits (Evidence Action (job), ID Insight (jobs)). 
    • 80,000 Hours has a big ol' job board
    • (You're probably not looking for a new job, but who knows, don't mind my nudge)

Comment by aidan-o-gara on Best resources for introducing longtermism and AI risk? · 2020-07-16T17:38:21.398Z · EA · GW

For example, I emailed the following to a friend who'd enjoyed reading Doing Good Better and wanted to learn more about EA, but hadn't further engaged with EA or longtermism. He has a technical background and (IMO) is potentially a good fit for AI Policy work, which influenced my link selection.

...

The single best article I'd recommend on doing good with your career is by 80,000 Hours, a non-profit founded by the Oxford professor who wrote Doing Good Better, incubated in Y-Combinator, and dedicated to giving career advice on how to solve pressing global problems. If you'd prefer, their founder explains the ideas in this podcast episode.

If you're open to some new, more speculative ideas about what "doing good" might mean, here's a few ideas about improving the long-run future of humanity:

  • Longtermism: Future people matter, and there might be lots of them, so the moral value of our actions is significantly determined by their effects on the long-term future. We should prioritize reducing "existential risks" like nuclear war, climate change, and pandemics that threaten to drive humanity to extinction, preventing the possibility of a long and beautiful future. 
    • Quick intro to longtermism and existential risks from 80,000 Hours
    • Academic paper arguing that future people matter morally, and we have tractable ways to help them, from the Doing Good Better philosopher
    • Best resource on this topic: The Precipice, a book explaining what risks could drive us to extinction and how we can combat them, released earlier this year by another Oxford philosophy professor
  • Artificial intelligence might transform human civilization within the next century, presenting incredible opportunities and serious potential problems
    • Elon Musk, Bill Gates, Stephen Hawking, and many leading AI researchers worry that extremely advanced AI poses an existential threat to humanity (Vox)
    • Best resource on this topic: Human Compatible, a book explaining the threats, existential and otherwise, posed by AI. Written by Stuart Russell, CS professor at UC Berkeley and author of the leading textbook on AI. Daniel Kahneman calls it "the most important book I have read in quite some time". (Or this podcast with Russell) 
    • CS paper giving the technical explanation of what could go wrong (from Google/OpenAI/Berkeley/Stanford)
    • How you can help by working on US AI policy, explains 80,000 Hours
    • (AI is less morally compelling if you don't care about the long-term future. If you want to focus on the present, maybe focus on other causes: global poverty, animal welfare, grantmaking, or researching altruistic priorities.)
  • Improving institutional decision-making isn't super straightforward, but could be highly impactful if successful. Altruism aside, you might enjoy Phil Tetlock's Superforecasting
  • 80,000 Hours also wrote profiles for working in climate change and nuclear war prevention, among many other things

[Then I gave some info about two near-termism causes he might like: grantmaking, by linking to GiveWell and the Open Philanthropy Project, and global poverty, by linking to GiveDirectly and other GiveWell top charities.]

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-15T08:34:49.612Z · EA · GW

Cool, thanks.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T00:04:12.461Z · EA · GW

Also (you might already be familiar with these): The Bottom Billion by Paul Collier, and Poor Economics by Banerjee and Duflo, who won the Nobel with Michael Kremer for work on randomized controlled trials in development economics.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T00:01:19.179Z · EA · GW

Hickel has some very interesting ideas, and I really enjoyed your writeup. I find plausible the central claim that neocolonialist foreign and economic policy has put hundreds of millions of people into poverty. I'm a bit unsure of his arguments about the harms of debt and trade terms (hopefully will return later), but the case that foreign-imposed regime change has been harmful seems really strong.

So, question: Might it be highly impactful to prevent governments from harmfully overthrowing foreign regimes?

  • Governments overthrow each other all the time. The United States has overthrown dozens of foreign governments over the last century, including recent interventions in Libya, Yemen, Palestine, and Iraq. The Soviet Union did the same, and modern day Russia aggressively interferes in foreign democratic elections, including the 2016 US election. China might do the same, but I can't find great evidence. I don't know about the UK and EU, I can't find obvious recent examples that weren't primarily US-led (e.g. Iraq, Libya). Official histories probably understate the number of overthrows because successful attempts can remain secret, at least for the years immediately following the overthrow.
  • Toppling governments can be extremely harmful. After 5 minutes of Googling, it seems foreign imposed regime changes might increase the likelihood of civil wars ("in roughly 40 percent of the cases of covert regime change undertaken during the Cold War, a civil war occurred within 10 years of the operation.") and human rights abuses ("In more than 55 percent of the cases of covert regime-change missions undertaken during the Cold War, the targeted states experienced a government-sponsored mass killing episode within 10 years of the regime-change attempt."). I'd also expect to find strong evidence of increased poverty, decreased economic growth, and worse health and education outcomes.

Clearly there's much more to be discussed here, but I'll post now and come back later. A few questions:

  • How tractable is changing the foreign policy of major governments? How does one do it? What are some examples of historical successes or failures?
  • Is this "neglected"? The concept doesn't apply super cleanly here, but my hunch might be that few people involved in foreign policy have EA values, meaning EA might have the "competitive edge" of pursuing an uncommon goal.
  • What are the risks to EA here? Government is generally contentious and polarized, and regime change is an extremely controversial issue. What specific ways could EA attempts to work on regime change or other foreign policy causes end up backfiring?

Some related conversation:

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T22:47:44.662Z · EA · GW

Command + K should add a hyperlink!

Comment by aidan-o-gara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T17:27:33.311Z · EA · GW

FWIW, here's an introduction to longtermism and AI risks I wrote for a friend. (My friend has some technical background; he had read Doing Good Better but not engaged further with EA, and I thought he'd be a good fit for AI Policy research but not technical research.)

  • Longtermism: Future people matter, and there might be lots of them, so the moral value of our actions is significantly determined by their effects on the long-term future. We should prioritize reducing "existential risks" like nuclear war, climate change, and pandemics that threaten to drive humanity to extinction, preventing the possibility of a long and beautiful future. 
    • Quick intro to longtermism and existential risks from 80,000 Hours
    • Academic paper arguing that future people matter morally, and we have tractable ways to help them, from the Doing Good Better philosopher
    • Best resource on this topic: The Precipice, a book explaining what risks could drive us to extinction and how we can combat them, released earlier this year by another Oxford philosophy professor
  • Artificial intelligence might transform human civilization within the next century, presenting incredible opportunities and serious potential problems
    • Elon Musk, Bill Gates, Stephen Hawking, and many leading AI researchers worry that extremely advanced AI poses an existential threat to humanity (Vox)
    • Best resource on this topic: Human Compatible, a book explaining the threats, existential and otherwise, posed by AI. Written by Stuart Russell, CS professor at UC Berkeley and author of the leading textbook on AI. Daniel Kahneman calls it "the most important book I have read in quite some time". (Or this podcast with Russell) 
    • CS paper giving the technical explanation of what could go wrong (from Google/OpenAI/Berkeley/Stanford)
    • How you can help by working on US AI policy, explains 80,000 Hours
    • (AI is less morally compelling if you don't care about the long-term future. If you want to focus on the present, maybe focus on other causes: global poverty, animal welfare, grantmaking, or researching altruistic priorities.)

Generally, I'd like to hear more about how different people introduce the ideas of EA, longtermism, and specific cause areas. There's no clear-cut canon, and effectively personalizing an intro can be difficult, so I'd love to hear how others navigate it.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T14:47:04.672Z · EA · GW

When reading the text of a post. You're right, it's totally good when scrolling downwards — I'm having trouble when writing comments, scrolling up and down between the text and my comment and getting blocked by the bars.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-14T11:02:10.499Z · EA · GW

Really enjoyed this, thanks! I'll have more thoughts soon, but in the meantime here's another post you might enjoy: "Growth and the case against randomista development" by John Halstead and Hauke Hillebrandt.

Comment by aidan-o-gara on Max_Daniel's Shortform · 2020-07-14T11:00:04.825Z · EA · GW

Very interesting writeup, I wasn't aware of Hickel's critique but it seems reasonable.

Do you think it matters who's right? I suppose it's important to know whether poverty is increasing or decreasing if you want to evaluate the consequences of historical policies or events, and even for general interest. But does it have any specific bearing on what we should do going forwards?

Comment by aidan-o-gara on Tips for Volunteer Management · 2020-07-14T09:51:29.607Z · EA · GW

Thanks for this! I’ve done a bit of volunteering and these suggestions seem very accurate and applicable. I’ll refer to this if I work with a volunteer program again.

Do you have any thoughts on when organizations benefit most from working with volunteers? When is it a bad idea, what makes the difference?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T09:37:16.556Z · EA · GW

On mobile, you could shrink the menu bars on the top and bottom of your screen (where the top has the Forum logo, and bottom has “all posts” and other navigation bars). Smaller navbars -> More screen space for reading -> easier to read and comment.

Comment by aidan-o-gara on Andreas Mogensen's "Maximal Cluelessness" · 2020-07-05T07:25:10.227Z · EA · GW

How important do you think non-sharp credence functions are to arguments for cluelessness being important? If you generally reject Knightian uncertainty and quantify all possibilities with probabilities, how diminished is the case for problematic cluelessness?

(Or am I just misunderstanding the words here?)

Comment by aidan-o-gara on HStencil's Shortform · 2020-07-01T04:02:18.033Z · EA · GW

Glad to hear it! Very good idea to talk with a bunch of stats people; your updated tests are definitely beyond my understanding. Looking forward to talking (or not), and let me know if I can help with anything.

Comment by aidan-o-gara on HStencil's Shortform · 2020-06-30T06:10:19.829Z · EA · GW

After taking a closer look at the actual stats, I agree this analysis seems really difficult to do well, and I don't put much weight on this particular set of tests. But your hypothesis is plausible and interesting, your data is strong, your regressions seem like the right general idea, and this seems like proof of concept that this analysis could demonstrate a real effect. I'm also surprised that I can't find any statistical analysis of COVID and Biden support anywhere, even though it seems very doable and very interesting. If I were you and wanted to pursue this further, I would figure out the strongest case that there might be an effect to be found here, then bring it to some people who have the stats skills and public platform to find the effect and write about it.

Statistically, I think you have two interesting hypotheses, and I'm not sure how you should test them or what you should control for. (Background: I've done undergrad intro stats-type stuff.)

  • Hypothesis A (Models 1 and 2) is that more COVID is correlated with more Biden support.
  • Hypothesis B (Model 3) is that more Biden support is correlated with more tests, which then has unclear causal effects of COVID.

I say "more COVID" to be deliberately ambiguous because I'm not sure which tracking metric to use. Should we expect Biden support to be correlated with tests, cases, hospitalizations, or deaths? And for each metric, should it be cumulative over time, or change over a given time period? What would it mean to find different effects for different metrics? Also, they're all correlated with each other - does that bias your regression, or otherwise affect your results? I don't know.

I also don't know what controls to use. Controlling for state-level FEs seems smart, while controlling for date is interesting and potentially captures a different dynamic, but I have no idea how you should control for the correlated bundle of tests/cases/hospitalizations/deaths.

Without resolving these issues, I think the strongest evidence in favor of either hypothesis would be a battery of different regressions that systematically test many different implementations of the overall hypothesis, with most of them seemingly supporting it. I'm not sure what the right implementation is, and I'd want someone with a strong statistics background to resolve these issues before really believing it; this method can fail, but if most implementations you can imagine point in the same direction, that's at least a decent reason to investigate further.
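
For what it's worth, here's a minimal sketch of what that battery could look like, assuming a long-format state-by-date panel with hypothetical column names (biden_support plus one column per COVID metric); it's only meant to show the structure, not to settle the specification questions above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per state-date, with a Biden support
# measure and several COVID metrics. File and column names are made up.
df = pd.read_csv("state_panel.csv")

covid_metrics = ["tests", "cases", "hospitalizations", "deaths"]

for metric in covid_metrics:
    # Regress Biden support on one COVID metric at a time, with state and date
    # fixed effects absorbing time-invariant state traits and national trends.
    fit = smf.ols(f"biden_support ~ {metric} + C(state) + C(date)", data=df).fit()
    # A real analysis would also want clustered or robust standard errors.
    print(metric, round(fit.params[metric], 4), round(fit.pvalues[metric], 4))
```

Swapping in changes instead of cumulative levels, or adding different controls, would just mean editing the formula string and rerunning the loop.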

If you actually want to convince someone to look into this (with or without you), maybe do that battery of regressions, then write up a very generalized takeaway along the lines of "The hypothesis is plausible, the data is here, and the regressions don't rule out the hypothesis. Do you want to look into whether or not there's an effect here?"

Who'd be interested in this analysis? Strong candidates might include academics, think tanks, data journalism news outlets, and bloggers. The stats seem very difficult, maybe difficult enough that academics are the best fit, but I don't know. News outlets and bloggers that aren't specifically data savvy probably aren't capable of doing this analysis justice. Without working with someone with a very strong stats background, I'd be cautious about writing this for a public audience.

Not sure if you're even interested in any of that, but FWIW I think they'd like your ideas and progress so far. If you'd like to talk about this more, I'm happy to chat; you can pick a time here. Cool analysis, and kudos on thinking of an interesting topic, seriously following through with the analysis, and recognizing its limitations.

Comment by aidan-o-gara on HStencil's Shortform · 2020-06-30T02:30:38.076Z · EA · GW

This is really cool. Maybe email it to 538 or Vox? I've had success contacting them to share ideas for articles before.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-29T05:46:55.709Z · EA · GW

I like this idea a lot. It probably lowers the effort bar for a top-level post, which I think is good.

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T21:06:41.068Z · EA · GW

This makes sense. If 99% of humanity dies, the surviving groups might not be well-connected by transportation and trade. Modern manufacturing starts with natural resources from one country, assembles its products in the next, and ships and sells to a third. But if e.g. ships and planes can’t get fuel or maintenance, then international trade fails, supply chains break down, and survivors can’t utilize the full capacity of technology.

As gavintaylor says below, industrialization might need a critical mass of wealth to begin. (Maybe accumulated wealth affords freedom from sustenance work and allows specialization of labor?)

Though over thousands of years, the knowledge that progress is possible might end up encouraging people to rebuild the lost infrastructure.

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T20:50:40.674Z · EA · GW

Yep, agreed

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T08:34:15.655Z · EA · GW

What if, when you highlight text within a post, a small toolbar pops up where you can click to quote the text in your comment box?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:12:18.373Z · EA · GW

So like, when I'm logged into my account, I'll see every shortform post as top level?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:08:08.350Z · EA · GW

Grassroots approach to the same outcome: Leave an open invitation and a Calendly link in your EA Forum bio.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:05:53.584Z · EA · GW

StackExchange might have some great principles to implement here, though I don't know much about it.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:03:12.424Z · EA · GW

That's an interesting idea for Forum v3: a wiki for all EA materials. Newcomers could go to the Forum and find Peter Singer, Doing Good Better, and links to 80,000 Hours research + new posts every day.

Related: "Should EA Buy Distribution Rights for Foundational Books?" by Cullen O'Keefe

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T06:31:42.376Z · EA · GW

Some evidence for your counterargument: There was no agricultural revolution in humanity's first 500,000 years, yet an industrial revolution happened only 10,000 years later.

Seems like industrial revolutions are more likely than agricultural.

[I haven't listened to Jabari's talk yet, just a quick thought]

Comment by aidan-o-gara on EA is vetting-constrained · 2020-06-27T21:45:12.024Z · EA · GW

One year later, do you think Meta is still less constrained by vetting, and more constrained by a lack of high-quality projects to fund?

And for other people who see vetting constraints: Do you see vetting constraints in particular cause areas? What kinds of organizations aren't getting funding?

Comment by aidan-o-gara on What to do with people? · 2020-06-27T21:00:30.780Z · EA · GW

Interesting, this definitely seems possible. Are there any examples of EA projects that failed, resulting in less enthusiasm for EA projects generally?

Comment by aidan-o-gara on EA is risk-constrained · 2020-06-24T10:16:25.178Z · EA · GW

What kinds of risky choices do EAs avoid? Are you thinking of any specific examples, perhaps particular cause areas or actions?

Comment by aidan-o-gara on EA considerations regarding increasing political polarization · 2020-06-22T22:17:21.627Z · EA · GW

Maybe EA can't affect political polarization nearly as much as the other way around - political polarization can dramatically affect EA.

EA could get "cancelled", perhaps for its sometimes tough-to-swallow focus on prioritization and tradeoffs, for a single poorly phrased and high profile comment, for an internal problem at an EA org, or perhaps for lack of overall diversity. Less dramatically, EA could lose influence by affiliating with a political party, or by refusing to, and degrading norms around discourse and epistemics could turn EA discussions into partisan battlegrounds.

This post makes a lot of sense; I definitely think EA should consider the best direction for the political climate to grow in. We should also spend plenty of time considering how EA will be affected by increasing polarization, and how we can respond.

EAs in policy and government seem particularly interesting here, as potentially highly subject to the effects of political climate. I'd love to hear how EAs in policy and government have responded or might respond to these kinds of dynamics.

One question: Should EA try to be "strategically neutral" or explicitly nonpartisan? Many organizations do so, from think tanks to newspapers to non-profits. What lessons can we learn from other non-partisan groups and movements? What are their policies, how do they implement them, and what effects do they have?

Thanks for this post, very interesting!

Comment by aidan-o-gara on How to find good 1-1 conversations at EAGx Virtual · 2020-06-20T20:40:07.245Z · EA · GW

Hey, that’s a great idea. Personally I don’t always turn on video, because of everything from connection issues to not wanting to get out of bed. Videochat can be nice, but definitely isn’t necessary.

I've edited the post to recommend noting that videochat is optional, and audio-only is perfectly good.

Thanks for the thought and the kind words! Hope you had a great conference too :)

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-20T06:43:47.581Z · EA · GW

Sans-serif font in body text! The comments section is absolutely beautiful to read, but I find the body text of posts very difficult. Most blogs and online news sources seem to use sans-serif, probably for readability.

Alternatively, give users the option to pick their own font. Also, maybe make text black instead of a lighter grey?

Comment by aidan-o-gara on Longtermism ⋂ Twitter · 2020-06-16T23:37:16.510Z · EA · GW

Agreed, and I support EA Forum norms of valuing quick takes in both posts and comments. Personally, the perceived bar to contributing feels way too high.

Comment by aidan-o-gara on Longtermism ⋂ Twitter · 2020-06-16T02:35:48.617Z · EA · GW

Welcome, and thanks for the contribution! I strongly agree with all three recommendations, and would point to #EconTwitter as a Twitter community that has managed to do all three very well.

Maintaining a strong code of conduct seems particularly useful. Different parts of Twitter have very different conversation norms, ranging from professional to degenerate and constructive to cruel. Norms are harder to build than to destroy, but ultimately individual people set the norm by what they tweet, so anyone can contribute to building the culture they want to see.

FWIW, my two cents would be to discourage more serious EA conversations from moving to Twitter. In my experience, it often brings out the worst in people and conversations. (It also has plenty of positives, and can be lots of fun.)