MaxRa - I might have been wrong about this; I'm not at all an expert in corporate law. Thanks for the informative link.
A more accurate claim might be 'American tech companies tend to prioritize short-term profits over long-term human survival'.
I'm not a China expert, but I have some experience running classes and discussion forums in a Chinese university. In my experience, people in China feel considerably more freedom to express their views on a wide variety of issues than Westerners typically think they do. There is a short list of censored topics, centered around criticism of the CCP itself, Xi Jinping, Uyghurs, Tibet, and Taiwan. But I would bet that they have plenty of freedom to discuss AI X risks, alignment, and geopolitical issues around AI, as exemplified by the fact that Kai-Fu Lee, the Beijing-based author of 'AI Superpowers' (2018), is a huge tech celebrity in China who speaks frequently on college campuses there - despite being a vocal critic of some gov't tech policies.
Conversely, there are plenty of topics in the West, especially in American academia, that are de facto censored (through cancel culture). For example, it was much less trouble to teach about evolutionary psychology, behavior genetics, intelligence research, and even sex research in a Chinese university than in an American university.
PS I'd encourage folks to read this excellent article on whether China is being over-hyped as an AI rival.
Heartbreaking. Such a loss.
Thank you for sharing some very useful summaries of his key work.
DMMF - I also encounter this claim very often on social media. 'If the US doesn't rush ahead towards AGI, China will, & then we lose'. It's become one of the most common objections to slowing down AI research by US companies, and is repeated ad nauseam by anti-AI-safety accelerationists.
I agree with you that it's not at all obvious that China would rush ahead with AI if the US slowed down. China's CCP leadership already seems pretty concerned with X risks and global catastrophic risks, e.g. climate change. Xi Jinping's concept of 'community of common destiny' emphasizes humanity's shared vulnerability to runaway technological developments such as space-based weapons (and AI, maybe). Chinese science fiction movies (e.g. Shanghai Fortress, The Wandering Earth) routinely depict China as saving the rest of humanity from X-risks, after other nations have failed. I think China increasingly sees itself as the wise elder trying to keep impetuous, youthful, reckless America from messing everything up for everybody.
If China was more expansionist, imperialistic, and aggressive, I'd be more concerned that they would push ahead with AI development for military applications. Yes, they want to retake Taiwan, and they will, sooner or later. But they're not showing the kind of generalized western-Pacific expansionist ambitions that Japan showed in the 1930s. As long as the US doesn't meddle too much in the 'internal affairs of China' (which they see as including Taiwan), there's little need for a military arms race involving AI.
I worry that Americans tend to think and act as if we are the only people in the world who are capable of long-term thinking, X risk reduction, or appreciation of humanity's shared fate. As if either the US dominates the world with AI, or other nations such as China will develop dangerous AI without any concern for the consequences. The evidence so far suggests that China might actually be a better steward of our global safety than the US is being, at least in the domain of AI development.
I signed and strongly support this open letter.
Let me add a little global perspective (as a US citizen who's lived in 4 countries outside the US for a total of 14 years, and who doesn't always see the US as the 'good guy' in geopolitics).
The US is 4% of the world's population. The American AI industry is (probably) years ahead of any other country's, and is pushing ahead with the rationale that 'if we don't keep pushing ahead, a bad actor (which usually implies China) will catch up, and that would be bad'. Thus, we impose AI X-risk on the other 96% of humans without their informed consent, support, or oversight.
We used the same arms-race rationale in the 1940s to develop the atomic bomb ('if we don't do it, Germany will') and in the 1950s to develop the hydrogen bomb ('if we don't do it, the Soviet Union will'). In both cases, we were the bad actor. The other countries were nowhere close to us. We exaggerated the threat that they would catch up, and we got the American public to buy into that narrative. But we were really the ones pushing ahead into X-risk territory. Now we're promoting the same narrative for AI development. 'The AI arms race cannot be stopped', 'AGI is inevitable', 'the genie is out of the bottle', 'if not us, then China', etc, etc.
We Americans have a very hard time accepting that 'we might be the baddies'. We are uncomfortable acknowledging any moral obligations to the rest of humanity (if they conflict in any way with our geopolitical interests). We like to impose our values on the world, but we don't like to submit to any global oversight by others.
I hope that this public discussion about AI risks also includes some soul-searching by Americans -- not just the AI industry, but all of us, concerning the way that our country is, yet again, pushing ahead with developing extremely dangerous technology, without any sense of moral obligation to others.
Having taught online courses for CUHK-Shenzhen in China for a year, and discussed quite a bit about EA, AI, and X risk with the very bright young students there, I often imagine how they would view the recent developments in the American AI industry. I think they would be appalled by our American hubris. They know that the American political system is too partisan, fractured, slow, and dysfunctional to impose any effective regulation on Big Tech. They know that American tech companies are legally obligated (by 'fiduciary duty' to shareholders) to prioritize quarterly profits over long-term human survival. They know that many Bay Area tech bros supporting AI are transhumanists, extropians, or Singularity-welcomers who look forward to humanity being replaced by machines. They know that many Americans view China as a reckless, irresponsible, totalitarian state that isn't worth listening to about any AI safety concerns. So, I imagine, any young Chinese student who's paying attention would take an extremely negative view of the risks that the American AI industry is imposing on the other 7.7 billion people in the world.
Phib - I really like this idea.
I agree that deepfakes could be a potential amplifier of global catastrophic risks such as warfare, assassinations, political instability, civil war, religious outrage, terrorism, etc. Especially if people haven't really caught up to how realistic and deceptive they can be.
I'm also not sure of the best way to 'inoculate' people against deepfakes. As you mention, older people might be especially susceptible to them. As would any young people whose strong ideological confirmation biases prime them to accept that apparent misbehavior X by individual or group Y plausibly happened.
I expect that in the 2024 US election cycle, we'll see an absolute deluge of partisan deepfakes that aim to discredit, mock, and satirize various candidates, analogous to the surge of social media memes in the 2016 election cycle. I don't think voters are at all prepared for how vicious, surreal, and damaging these could get.
To most EAs, deepfakes might sound like a weird branch of the porn industry, or a minor political annoyance, compared to flashy AI developments like GPT. However, I think they really could have an utterly corrosive effect on public discourse, partisanship, and trust in news media. Overall, this seems like a moderate-scope, largely neglected, but somewhat tractable issue for EAs to give a bit of attention to.
Yep -- I think Paul Bloom makes an important point in arguing that 'Empathy 2' (or 'rational compassion') is more consistent with EA-style scope-sensitivity, and less likely to lead to 'compassion fatigue', compared to 'Empathy 1' (feeling another's suffering as if it's one's own).
bxjaeger -- fair point. It's worth emphasizing Paul Bloom's distinction between rational compassion and emotional empathy, and the superiority of the former when thinking about evidence-based policies and interventions.
Peter - excellent short piece; I agree with all of it.
The three themes you mentioned -- radical empathy, scope-sensitivity, scout mindset -- are really the three key takeaways that I try to get my students to learn about in my undergrad classes on EA. Even if they don't remember any of the details of global public health, AI X-risk, or factory farming, I hope they remember those principles.
Toby - thanks for sharing this wonderful, wise, and inspiring talk.
I hope EAs read it carefully and take it seriously.
Akash - thanks for the helpful compilation of recent articles and quotes. I think you're right that the Overton window is broadening a bit more to include serious discussions of AI X-risk. (BTW, for anybody who's familiar with contemporary Chinese culture, I'd love to know whether there are parallel developments in Chinese news media, social media, etc.)
The irony here is that the general public for many decades has seen depictions of AI X-risk in some of the most popular science fiction movies, TV shows, and novels ever made -- including huge global blockbusters, such as 2001: A Space Odyssey (1968), The Terminator (1984), and Avengers: Age of Ultron (2012). But I guess most people compartmentalized those cultural touchstones into 'just science fiction' rather than 'somewhat over-dramatized depictions of potential real-world dangers'?
My suspicion is that lots of 'wordcel' mainstream journalists who didn't take science fiction seriously do tend to take pronouncements from tech billionaires and top scientists seriously. But, IMHO, that's quite unfortunate, and it reveals an important failure mode of modern media/intellectual culture -- which is to treat science fiction as if it's trivial entertainment, rather than one of our species' most powerful ways to explore the implications of emerging technologies.
One takeaway might be, when EAs are discussing these issues with people, it might be helpful to get a sense of their views on science fiction -- e.g. whether they lean towards dismissing emerging technologies as 'just science fiction', or whether they lean towards taking them more seriously because science fiction has taken them seriously. For example, do they treat 'Ex Machina' (2014) as a reason for dismissing AI risks, or as a reason for understanding AI risks more deeply?
In public relations and public outreach, it's important to 'know one's audience' and to 'target one's market'; I think this dimension of 'how seriously people take science fiction' is probably a key individual-differences trait that's worth considering when doing writing, interviews, videos, podcasts, etc.
Ubuntu - yes, regarding the underestimated 'allure of gossip, public shaming, witch hunts, etc', I think the moral psychology at work in these things runs so deep that even the most rationalist & clever EAs can be prone to them -- and then we can sometimes deceive ourselves about what's really going on.
However, the moral psychology around public shaming evolved for some good adaptive reasons, to help deter bad actors, solve coordination problems, enforce social norms, virtue-signal our values, internalize self-control heuristics, etc. So I don't think we should dismiss them entirely. (My 2019 book 'Virtue Signaling' addresses some of these issues.)
Indeed, leveraging the power of these 'darker' facets of moral psychology (e.g. public shaming) has arguably been crucial in many effective moral crusades throughout history, e.g. against torture, slavery, sexism, racism, nuclear brinksmanship, chemical weapons, etc. They may still prove useful in fighting against AI X-risk...
Ubuntu - thanks for the correction; you're right; I misread that section as reflecting Ben's views, rather than as his steel-manning of TIME's views. Oops.
So, please take my reply as a critique of TIME's view, rather than as a critique of Ben's view.
Ben - thanks for this helpful information. It adds useful context to some of the FTX news.
One clarification question: in your Background section 5c, you suggested that '“EA leaders”... did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse'.
I agree that EA leaders might, in hindsight, have done a better job of distancing the EA movement from FTX and SBF, to protect EA's public reputation. However, I'm not sure how much leverage EA leaders could have had in preventing or delaying FTX's collapse.
If EA leaders had privately challenged Sam's bad accounting, fraudulent behavior, etc, back before fall 2022, would he really have listened and behaved any differently? Would other FTX leaders or employees have behaved differently?
If a few EAs had come out as public whistleblowers questioning FTX's legitimacy, would any VCs, crypto influencers, crypto investors, major FTX depositors, or regulators have paid any attention? (Bearing in mind that all crypto exchanges, protocols, and companies are subject to a relentless barrage of strategic or tactical 'fear, uncertainty, & doubt' (FUD) from rival organizations, short-sellers, 'mainstream' (anti-crypto) financial journalism, and 'legacy' (anti-crypto) financial institutions.)
These are honest questions; I really don't know the answers, and I'd value any comments from people with more insider knowledge than I have.
Guy - thank you for this comment. I'm very sorry about your suffering.
I think EAs should take much more seriously the views of people like you who have first-hand experience with these issues. We should not be assuming that 'below neutral utility' implies 'it's better not to be alive'. We should be much more empirical about this, and not make strong a priori assumptions grounded in some over-simplified, over-abstracted view of utilitarianism.
We should listen to the people, like you, who have been living with chronic conditions -- whether pain, depression, PTSD, physical handicaps, cognitive impairments, or whatever -- and try to understand what keeps people going, and why they keep going.
I think that's a somewhat different point. It's often true that people are more critical of bad behavior by their political opponents.
But most of the news stories I read in mainstream media that are critical of EA go far beyond demonizing EA individuals. I sense that these editors & journalists are feeling a panicky, uneasy, defensive reaction to the EA movement's epistemics and ethics, not just to EA individuals. It reminds me of the defensive, angry reactions that meat-eaters often show when they encounter compelling vegan arguments about animal welfare.
Admittedly this is a rather vague take, but I think we do under-estimate how much the EA perspective threatens many traditional world-views.
Michael -- interesting point. EA is a very unusual movement in that the founders (Will MacAskill, Toby Ord, etc.) were very young when they launched the movement, and are still only in their mid-30s to early 40s. They got some guidance & inspiration from older philosophers (e.g. Derek Parfit, Peter Singer), but mostly they recruited people even younger than them into the movement ... and then eventually some older folks like me joined as well.
So, EA's demographics are quite youth-heavy, but there's also much less correlation between age and prestige in EA than in most moral/activist movements.
David - TIME magazine for decades has promoted standard left/liberal Democrat-aligned narratives that prioritize symbolic partisan issues over scope-sensitive impact.
From the viewpoint of their editors, EA represents an embarrassing challenge to their America-centric, anthropocentric, short-termist, politicized way of thinking about the world's problems.
We may not be a direct threat to their subscription revenue, newsstand sales, or ad revenue.
But we are a threat to the ideology that their editors have strong interests in promoting -- an ideology that may seem invisible if you agree with it, but which seems obviously biased if you don't agree with it.
This is how partisan propaganda operates in the 21st century: it tries to discredit rival ideologies and world-views with surprising ferocity and speed, once its promoters sense a serious threat.
IMHO, EA needs to get a bit less naive about what people and institutions are willing to do to protect their world-views and political agendas.
Jason - thanks for these helpful corrections, clarifications, and extensions.
My comment was rather half-baked, and you've added a lot to think about!
Linyphia -- totally agree (unsurprisingly!).
You raise good additional points about the dynamism and unpredictability of human values and preferences. Some of that unpredictability may reflect adaptive unpredictability (what biologists call 'protean behavior') that makes it harder for evolutionary enemies and rivals to predict what one's going to do next. I discuss this issue extensively in this 1997 chapter and this 1996 simulation study. Insofar as human values are somewhat adaptively unpredictable by design, for good functional reasons, it will be very hard for reinforcement learning systems to get a good 'fix' on our preferences.
The other issues -- adaptive self-deception about our values (e.g. virtue signaling, as discussed in my 2019 book on the topic), and AI power corrupting humans -- also deserve much more attention in AI alignment work, IMHO.
To improve writing, I'd also recommend the book 'The Sense of Style' (2015) by Harvard psycholinguist Steven Pinker -- a real expert both on language research and on writing clearly in his own books.
bob - I think this is a brilliant idea, and it could be quite effective in slowing down reckless AI development.
For this to be effective, it would require working with experienced lawyers who know relevant national and international laws and regulations (e.g. in US, UK, or EU) very well, who understand AI to some degree, and who are creative in seeing ways that new AI systems might inadvertently (or deliberately) violate those laws and regulations. They'd also need to be willing to sue powerful tech companies -- but these tech companies also have very deep pockets, so litigation could be very lucrative for law firms that have the guts to go after them.
For example, in the US, there are HIPAA privacy rules regarding companies accessing private medical information. Any AI system that allows or encourages users to share private medical information (such as asking questions about their symptoms, diseases, medications, or psychiatric issues when using a chatbot) is probably not going to be very well-designed to comply with these HIPAA regulations -- and violating HIPAA is a very serious legal issue.
More generally, any AI system that offers advice to users regarding medical, psychiatric, clinical psychology, legal, or financial matters might be in violation of laws that give various professional guilds a government-regulated monopoly on these services. For example, if a chatbot is basically practicing law without a license, practicing medicine without a license, practicing clinical psychology without a license, or giving financial advice without a license, then the company that created that chatbot might be violating some pretty serious laws. Moreover, the professional guilds have every incentive to protect their turf against AI intrusions that could result in mass unemployment among their guild members. And those guilds have plenty of legal experience suing interlopers who challenge their monopoly. The average small law firm might not be able to effectively challenge Microsoft's corporate legal team that would help defend OpenAI. But the American Medical Association might be ready and willing to challenge Microsoft.
AI companies would also have to be very careful not to violate laws and regulations regarding production of terrorist propaganda, adult pornography (illegal in many countries such as China, India, etc), child pornography (illegal in most countries), heresy (e.g. violating Sharia law in fundamentalist Muslim countries), etc. I doubt that most devs or managers at OpenAI or DeepMind are thinking very clearly or proactively about how not to fall afoul of state security laws in China, Sharia laws in Pakistan, or even EU privacy laws. But lawyers in each of those countries might realize that American tech companies are rich enough to be worth suing in their own national courts. How long will Microsoft or Google have the stomach for defending their AI subsidiaries in the courts of Beijing, Islamabad, or Brussels?
There are probably dozens of other legal angles for slowing down AI. Insofar as AI systems are getting more general purpose and more globally deployed, the number of ways they might violate laws and regulations across different nations is getting very large, and the legal 'attack surface' that makes AI companies vulnerable to litigation will get larger and larger.
Long story short, rather than focusing on trying to pass new global regulations to limit AI, there are probably thousands of ways that new AI systems will violate existing laws and regulations in different countries. Identifying those, and using them as leverage to slow down dangerous AI developments, might be a very fast, clever, and effective use of EA resources to reduce X risk.
MaxRa - I agree this is also part of the mainstream media's anti-EA mind-set: a zero-sum view of influence, prestige, and power. There are many vested interests (e.g. traditional political institutions, charities, think tanks, media outlets) that are deeply threatened by EA, because they simply don't care about scope-sensitivity, tractability, neglectedness, or long-termism. Indeed, these EA values directly challenge their day-to-day partisanship and virtue-signaling.
The EA movement may have naively under-estimated the strength of these vested interests, and their willingness to play dirty (through negative PR campaigns) to protect their influence.
Garrison - excellent Vox article; well done. I think I'll include it as required reading next time I teach my college course on Effective Altruism. It does a nice job of explaining some of the counter-intuitive results that can happen when we get serious about trying to quantify the suffering entailed from different diets.
I especially liked the evolutionary arguments about the adaptive value of nociceptors, pain, and capacity for suffering. It's very strange to me that some people doubt the sentience of other vertebrates. I mean, what do they think a central nervous system is for, if not to integrate information from positive and negative reinforcers to guide adaptive learning and behavior?
Fascinating essay.
BTW, you mentioned the 'Schmidt Index' of insect sting painfulness. It was named after entomologist Justin Schmidt, who died recently (Feb 18, 2023). The Economist magazine just published a charming and informative obituary of him here.
Lizka - thanks for sharing this.
I'm struck by one big 'human subjects' issue with the ethics of OpenAI and deployment of new GPT versions: there seems to be no formal 'human subjects' oversight of this massive behavioral experiment, even though it is gathering interactive, detailed, personal data from over 100 million users, with the goal of creating generalizable knowledge (in the form of deep learning parameters, ML insights, & human factors insights).
As an academic working in an American university, if I wanted to run a behavioral sciences experiment on as few as 10 or 100 subjects, and gather generalizable information about their behavior, I'd need to get formal Institutional Review Board (IRB) approval to do that, through a well-established system of independent review that weighs scientific and social benefits of the research against the risks and costs for participants and for society.
On the other hand, OpenAI (and other US-based AI companies) seem to think it's perfectly fine to gather interactive, detailed, identified (non-anonymous) data from over 100 million users, without any oversight. Insofar as they've ever received any federal research money (e.g. from NSF or DARPA), this could arguably be a violation of federal code 45 CFR 46 regarding protection of human subjects.
The human subjects issues might be exacerbated by the fact that GPT users are often sharing private biomedical information (e.g. asking questions about specific diseases, health concerns, or test results they have), and it's not clear whether OpenAI has the systems in place to adequately protect this private health information, as mandated under the HIPAA rules.
It's interesting that the OpenAI 'system card' on GPT-4 lists many potential safety issues, but seems not to mention these human subjects/IRB compliance issues at all, as far as I can see.
For example, there is no real 'informed consent' process for people signing up to use ChatGPT. An honest consent procedure would include potential users reading some pretty serious cautions, such as: 'The data you provide will help OpenAI develop more powerful AI systems that could make your job obsolete, that could be used to develop mass customized propaganda, that could exacerbate economic inequality, and that could impose existential risks on our entire species. If you agree to these terms, please click "I agree".'
So, we're in a situation where OpenAI is running one of the largest-scale behavioral experiments ever conducted on our species, collecting gigabytes of personal information from users around the world, with the goal of distilling this information into generalizable knowledge, but seems to be entirely ignoring the human subjects protection regulations mandated by the US federal government.
EA includes a lot of experts on moral philosophy and moral psychology. Even setting aside the US federal regulatory issues, I wonder what you all think about the research ethics of GPT deployment to the general public, without any informed consent or debriefing??
Nathan - thanks for sharing the Time article excerpts, and for trying to promote a constructive and rational discussion.
For now, I don't want to address any of the specific issues around SBF, FTX, or EA leadership. I just want to make a meta-comment about the mainstream media's feeding frenzy around EA, and its apparently relentless attempts to discredit EA.
There's a classic social/moral psychology of 'comeuppance' going on here: any 'moral activists' who promote new and higher moral standards (such as the EA movement) can make ordinary folks (including journalists) feel uncomfortable, resentful, and inadequate. This can lead to a public eagerness to detect any forms of moral hypocrisy, moral failings, or bad behavior in the moral activist groups. If any such moral failings are detected, they get eagerly embraced, shared, signal-amplified, and taken as gospel. This makes it easier to dismiss the moral activists' legitimate moral innovations (e.g. focusing on scope-sensitivity, tractability, neglectedness, long-termism), and allows a quick, easy return to the status quo ante (e.g. national partisan politics + scope-insensitive charity as usual).
We see this 'psychology of comeuppance' in the delight that mainstream media took when televangelists who were greedy, lustful, and/or mendacious suffered various falls from grace over the last few decades. We see it in the media's focus on the (relatively minor) moral mis-steps and mis-statements of 'enemy politicians' (i.e. those in whatever party the journalists don't like), compared to the (relatively major) moral harms done by bad government policies. We see it throughout cancel culture, which is basically the psychology of comeuppance weaponized through social media to attack ideological enemies.
I'm not positing an organized conspiracy among mainstream journalists to smear EA. Rather, I'm pointing out a widespread human psychological propensity to take delight in any moral failings of any activist groups that make people feel morally inadequate. This propensity may be especially strong among journalists, since it motivates a lot of their investigative reporting (sometimes in the legitimate public interest, sometimes not).
I think it's useful to recognize the 'comeuppance psychology' when it's happening, because it often overshoots, and amplifies moderately bad moral errors into looking like they're super-bad moral errors. When a lot of credible, influential media sources are all piling onto a moral activist group (like EA), it can be extremely stressful, dispiriting, and toxic for the group. It can lead the group to doubt their own valid ideas and values, to collapse into schisms and recriminations, to over-correct its internal moral norms in an overly puritanical direction, and to ostracize formerly valued leaders and colleagues.
I've seen EA do a lot of soul-searching over the last few months. Some of it has been useful, valid, and constructive. Some of it has been self-flagellating, guilt-stricken, and counter-productive. I think we should take the Time article seriously, learn what we can from it, and update some of our views of issues and people. But I think our reactions should be tempered and contextualized by understanding that the media's 'comeuppance psychology' can also lead to hasty, reactive, over-corrections.
Jeff -- I think this is a wonderful idea for a book, and I'd strongly encourage you to do this.
If the focus was on 'EA for ordinary parents and families', I think you could reach a lot of people.
In particular, you could offer a lot of solace and reassurance to busy parents that a lot of the stuff they've been told they should worry about ethically (e.g. recycling, switching from gas to electric cars, donating food to local shelters, getting a rescue dog, partisan national politics, etc) doesn't actually matter very much in the grand scheme, and that there are a lot of much higher-impact things they could be doing that might actually take less time and money.
In other words, for a family to 'turn EA' doesn't necessarily load them with a heavier moral burden; it might actually lighten their moral guilt if they were much more informed and scope-sensitive, and chose their moral battles more wisely.
(Consider just the issue of what to feed a family -- if you could explain that, if you're worried about animal suffering, you don't have to force kids to turn full vegan; even just switching from eating chicken and small fish to eating pastured grass-fed beef could reduce animal suffering very effectively, and they can offset by donating a little bit to Vegan Outreach. This might lead parents to feel much less moral guilt about what they feed their kids -- and it might actually reduce animal suffering more than 'trying to be vegan', which often, sadly, involves switching from beef to chicken.)
I think an EA perspective could also help families better handle any misplaced eco-guilt they might have about having kids in the first place, 'contributing to overpopulation', 'burdening the planet', 'contributing to global warming', etc. This could get a bit into population ethics, but it doesn't really need to -- it could just involve reassuring parents that kids are future intellectual and moral resources for fighting against climate change and protecting the ecosphere; they're not just costs imposed on the planet.
In terms of co-authoring with Julia, bear in mind that co-authorship (especially with spouses) doesn't need to be a 50/50 effort; it can involve one author doing 90% of the initial draft, and the other adding their notes, edits, expansions, feedback, and guidance. As long as both people agree they contributed significantly to the book (and their agents, editors, & publishers agree too, which they will), they can both be co-authors. And I think it adds credibility for a married couple to present a book for couples and families.
Jeffrey - thanks for your kind comment! Appreciate it.
Thanks for the very useful link. I hadn't read that before.
I like the intuition pump that if advanced AI systems are running at about 10 million times human cognitive speed, then one year of human history equals 10 million years of AI experience.
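Just to spell out the arithmetic of that intuition pump, here's a minimal sketch -- the 10-million-times speedup is simply the assumed figure above, not a measured one:

```python
# Back-of-envelope illustration of the 'subjective speedup' intuition pump.
# The speedup factor is the assumed figure from the comment above, not a measurement.
speedup = 10_000_000              # hypothetical AI-to-human cognitive speed ratio
wall_clock_years = 1              # one year of human (wall-clock) history
subjective_ai_years = speedup * wall_clock_years
print(f"{subjective_ai_years:,} subjective AI-years per human year")  # 10,000,000
```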
PS: A few good examples I can think of off the top of my head (although they're not particularly realistic in relation to current AI tech):
- The space battle scenes in the Culture science fiction novels by Iain M. Banks, in which the ship 'Minds' (super advanced AIs) fight so fast using mostly beam weapons that the battles are typically over in a few seconds, long before their human crews have any idea what's happening. https://spacebattles-factions-database.fandom.com/wiki/Minds
- The scene in Avengers: Age of Ultron in which Ultron wakes up, learns human history, defeats Jarvis, escapes into the Internet, and starts manufacturing robot copies of itself within a few seconds:
- The scenes in The Mandalorian TV series where the IG-11 combat robot is much faster than the humanoid storm troopers:
Habryka -- nice point.
Example: speedrunning 'Ultimate Doom':
Erin - thanks; looks interesting; hadn't heard of this science fiction book series before.
https://bobiverse.fandom.com/wiki/We_Are_Legion_(We_Are_Bob)_Wiki
I understand your point. But I think 'dual use' problems are likely to be very common in AI, just as human intelligence and creativity often have 'dual use' problems (e.g. Leonardo da Vinci creating beautiful art and also designing sadistic siege weapons).
Of course AI researchers, computer scientists, tech entrepreneurs, etc may see any strong regulations or moral stigma against their field as 'strange and unfair'. So what? Given the global stakes, and given the reckless approach to AI development that they've taken so far, it's not clear that EAs should give all that much weight to what they think. They do not have some inalienable right to develop technologies that are X risks to our species.
Our allegiance, IMHO, should be to humanity in general, sentient life in general, and our future descendants. Our allegiance should not be to the American tech industry - no matter how generous some of its leaders and investors have been to EA as a movement.
Just as anti-AI violence would be counter-productive, in terms of creating a public backlash against the violent anti-AI activists, I would bet (with only low-to-moderate confidence) that an authoritarian government crackdown on AI would also provoke a public backlash, especially among small-government conservatives, libertarians, and anti-police liberals.
I think public sentiment would need to tip against AI first, and then more serious regulations and prohibitions could follow. So, if we're concerned about AI X-risk, we'd need to get the public to morally stigmatize AI R&D first -- which I think would not be as hard as we expect.
Yuval - thanks for raising this important and neglected issue.
For every one person with a serious mental illness, such as severe depression, schizophrenia, PTSD, severe autism, intellectual disability, or Alzheimers, there are often several concerned carers who suffer alongside them -- often including their parents, siblings, spouses, children, and friends.
In many cases, the day-to-day suffering of carers (e.g. a middle-aged parent whose young adult child is slipping into paranoid schizophrenia) can actually be as severe, or more severe, than the suffering of the person with the mental illness. It's utterly heartbreaking to watch, helpless, as a loved one goes psychotic, ruins their life, and threatens the lives of people around them.
Yet the carers often suffer in silence, and get very little support. Indeed, mental health care systems in many countries actually prohibit carers from having any access to important psychiatric records regarding the loved ones they're caring for -- e.g. in the US (given HIPAA regulations), parents are often not authorized to know whether their adult child is actually filling their prescriptions for anti-psychotic medications, or going to therapy -- even if failing to take the medications puts the parents at immediate risk of violence.
The National Alliance for Mental Illness (NAMI) in the US runs excellent outreach and education programs for carers of people with mental disorders. I don't know if they have good randomized controlled trial data about the long-term efficacy of their programs. But, speaking from personal experience, the NAMI programs can, at least, provide some emotional and practical support for carers.
Melissa -- you raise legitimate questions. I'd love to see less coercion and a wider variety of education options (including voucher systems, home schooling, and unschooling), with more serious empirical analysis of their relative efficacy for achieving various purposes.
For anybody interested in these issues, I'd recommend the book 'The case against education' by economist Bryan Caplan. Empirical studies show that compulsory public education has far less positive impact on long-term learning outcomes than most people realize.
As a parent, and as someone who thinks that 'rights' are (important) social constructs rather than things bestowed on humans by gods or by the cosmos, I think it's prudent for parents to restrict some freedoms of their kids (esp freedom of movement!).
But I take your point that we should view kids as sentient beings in their own right, with their own interests -- and not just as raw materials that should be shoveled into government indoctrination camps (aka public schools) against their will.
David - thanks much for sharing the link to this Monmouth University survey. I urge everybody to have a look at it here (the same link you shared).
The survey looks pretty good methodologically: a probability-based national random sample of 805 U.S. adults, run by a reputable academic polling institute.
Two key results are worth highlighting, IMHO:
First, in response to the question "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?", 55% of people (as you mentioned) were 'very worried' or 'somewhat worried', and only 16% were 'not at all worried'.
Second, in response to the question "If computer scientists really were able to develop computers with artificial intelligence, what effect do you think this would have on society as a whole? Would it do more good than harm, more harm than good, or about equal amounts of harm and good?", 41% predicted more harm than good, and only 9% predicted more good than harm.
Long story short, the American public is already very concerned about AI X risk, and very dubious that AI will bring more benefits than costs.
This contrasts markedly with the AI industry rhetoric/PR/propaganda that says everybody's excited about the wonderful future that AI will bring, and embraces that future with open arms.
Granted, moral outrage can sometimes be counterproductive.
However, we have no idea which specific ML work is 'on the critical path to dangerous AI'. Maybe most of it isn't. But maybe most of it is, one way or another.
ML researchers are clever enough to tell themselves reassuring stories about how whatever they're working on is unlikely to lead straight to dangerous AI. Just as most scientists working on nuclear weapon systems during the Cold War could tell themselves stories like 'Sure, I'm working on ICBM rockets, but at least I'm not working on ICBM guidance systems', or 'Sure, I'm working on guidance systems, but at least I'm not working on the nuclear payloads', or 'Sure, I'm working on simulating the nuclear payload yields, but at least I'm not physically loading the enriched uranium into the warheads'. The smarter people are, the better they tend to be at motivated reasoning, and at creating plausible deniability that they played any role in increasing existential risk.
However, there's no reason for the rest of us to trust individual ML researchers' assessments of which work is dangerous, versus which is safe. Clearly a large proportion of ML researchers think that what other ML researchers are doing is potentially dangerous. And maybe we should listen to them about that.
Katja - thanks for posting these survey data.
The results are shocking. Really shocking. Appalling, really. It's worth taking a few minutes to soak in the dark implications.
It's hard to imagine any other industry full of smart people in which researchers themselves realize that what they're doing is barely likely to have a net positive impact on the world. And in which a large proportion believe that they're likely to impose massive suffering and catastrophe on everyone -- including their own friends, families, and kids.
Yet that's where we are with the AI industry. Almost all of the ML researchers seem to understand 'We might be the baddies'. And a much higher proportion seem to understand the catastrophic risks in 2022 than in 2016.
Yet they carry on doing what they're doing, despite knowing the risks. Perhaps they're motivated by curiosity, hubris, wealth, fame, status, or prestige. (Aren't we all?) Perhaps these motives overwhelm their moral qualms about what they're doing.
But IMHO, any person with ethical integrity who found themselves working in an industry where the consensus prediction among their peers is that their work is fairly likely to lead straight to an extinction-level catastrophe would take a step back, re-assess, and re-think whether they should really be pushing ahead.
I know all the arguments about the inevitability of AI arms races, between companies and between nation-states. But we're not really in a geopolitical arms race. There are very few countries with the talent, money, GPU clusters, and determination to pursue advanced AI. North Korea, Iran, Russia, and other dubious nations are not going to catch up any time soon. China is falling behind, relatively speaking.
The few major players in the American AI industry are far, far more advanced than the companies in any other country at this point. We're really just talking about a few thousand ML researchers associated with OpenAI/Microsoft, Deepmind/Google, and a handful of other companies. Almost all American. Pushing ahead, knowing the risks, knowing they're far in advance of any other country. It's insane. It's sociopathic. And I don't understand why EAs are still bending over backwards to try to stay friendly with this industry, trying to gently nudge them into taking 'alignment' more seriously, trying to portray them as working for the greater good. They are the baddies, and they increasingly know it, and we know it, and we should call them out on it.
Sorry for the feisty tone here. But sometimes moral outrage is the appropriate response to morally outrageous behavior by a dangerous industry.
Michal - thinking further on this, I think one issue that troubles me is the potential overlap between negative utilitarianism, dangerous technologies, and X risk -- an overlap that makes negative utilitarianism a much more dangerous information hazard than we might realize.
As many EAs have pointed out, bioweapons, nuclear weapons, and advanced AI might be especially dangerous if they fall into the hands of people who would quite like humanity to go extinct. This could include religious apocalypse cults, nihilistic terrorists, radical Earth-First-style eco-terrorists, etc. But it could also include people inspired by negative utilitarianism, who take it upon themselves to 'end humanity's net suffering' by any means necessary.
So, in my view, negative utilitarianism is an X-risk amplifier, and that makes it much more dangerous than being 'just another perspective in moral philosophy' (as it's often viewed).
Ishaan -- I can imagine some potentially persuasive arguments that negative utilitarianism might describe the situation for many wild animal species, and perhaps for many humans in prehistory.
However, our species has been extraordinarily successful at re-engineering our environments, creating our own eco-niches, and inventing technologies that maximize positive well-being and minimize suffering. The result is that, according to all the research I've seen on happiness, subjective well-being, and flourishing, most humans in the modern world are well above 'neutral' in terms of utility.
So the central claims of negative utilitarianism -- which we could caricature/summarize as 'life is suffering' and 'happiness is irrelevant' -- simply aren't true, empirically, for most modern humans.
Another way to frame this is to ask real people whether they'd be content to accept a painless suicide. The vast majority will say no. Why do we think that if we aggregate this at the species level that we'd be content to accept a painless mass extinction event?
On a more personal note, as a psychology professor, I'm deeply concerned that writers such as Perry and Benatar can undermine the mental health of young adults who take philosophical questions seriously. I think their writings are basically 'information hazards' for those prone to dysthymia, depression, or psychosis. So, I think their ideas are empirically false, theoretically incoherent, and psychologically dangerous to many vulnerable people.
Rob - thanks for sharing this. I'd encourage everybody to listen or watch Eliezer exploring the current state of AI, and the ongoing risks. It's an important and timely interview.
Caution: prepare to be extremely alarmed, depressed, angry, and existentially troubled for days afterwards. This interview is the exact opposite of light entertainment. It's likely to leave you deeply concerned about the recklessness of the AI industry. It might also leave you ashamed of how the EA movement has naively trusted that industry to take our concerns seriously.
Akash - thanks for posting this. Scott Alexander, as usual, has good insights, and is well worth reading here.
I think at some point, EAs might have to bite the bullet, set aside our all-too-close ties to the AI industry, and realize that 'AGI is an X-risk' boils down to 'OpenAI, DeepMind, and other AI companies that aren't actually taking AIXR seriously are the real X risks' -- and should be viewed and treated accordingly.
ExponentialDragon -- this is such a timely, interesting, & important question. Thanks for raising it.
Tens of millions of young people are already concerned about climate change, and often view it as an existential risk (although it is, IMHO, a global catastrophic risk rather than an existential risk). Many of them are already working hard to fight climate change (albeit sometimes with strategies & policies that might be counter-productive or over-general, such as 'smash capitalism').
This is a good foundation for building concern about other X risks -- a young generation full of people concerned about humanity's future, with a global mind-set, some respect for the relevant science, and a frustration with the vested political & corporate interests that tend to downplay major global problems.
How can we nudge or lure them into caring about other X risks that might actually be more dangerous?
I also agree that asking them to abandon their climate change friends, their political tribes, and their moral in-groups is usually asking them too much.
So how do we turn a smarter-than-average 22-year-old who thinks 'climate change will end the world within 20 years; we must recycle more!' into someone who thinks 'climate change is really bad, and we should fight it, but also, here's cause area X that is also a big deal and worth some effort'?
I'm not sure. But my hunch is that we need to piggy-back on their existing concerns, and work with the grain of their political & ideological beliefs & values. They might not care about AGI X-risk per se, but they might care about AI increasing the rate of 'economic growth' so quickly that carbon emissions ramp up very fast, or AI amplifying economic inequalities, or AI propaganda by Big Oil being used to convince citizens to ignore climate change, or whatever. Some of these might seem like silly concerns to those deeply involved in AI research... but we're talking here about recruiting people from where they are now, not recruiting idealized hyper-rational decouplers who already understand machine learning.
Likewise with nuclear war as an X risk. Global thermonuclear war seems likely to cause massive climate change (eg through nuclear winter), and that's one of its most lethal, large-scale effects, so there's potentially strong overlap between fighting climate change due to carbon emissions, and fighting climate change due to nuclear bombs.
I think EA already pays considerable lip service to climate change as a global catastrophic risk (even though most of us know it's not a true X risk), and we do that partly so we don't alienate young climate change activists. But I think it's worth diving deeper into how to recruit some of the smarter, more open-minded climate change activists into EA X risk research and activism.
Amy, Angelina, & Eli -- helpful data, very clearly presented. Thank you.
I wonder if EA Global also collected any data on age and/or political orientation?
I specifically wonder whether older adults (e.g. over 40) felt as welcome at EA Global as young adults (e.g. under 30), and whether conservatives, centrists, and libertarians felt as welcome as liberals.
Otto (& Lara, Karl, & Alexia) -- thanks very much for sharing this fascinating research. Great to see empirical work on X risk communication strategies.
It's encouraging to know that even fairly brief interventions (watching videos or reading text) can significantly increase people's awareness of AI X risks.
It looks like the CNN clip featuring Stephen Hawking was especially effective -- maybe given Hawking's scientific status and reputation as a genius, and the fact that he's not seen as a politically polarizing or controversial figure (unlike Elon Musk or PewDiePie).
I look forward to reading the paper in depth, and I hope it gets more attention here, and when it's published. (Have you submitted it to a journal yet?)
Peter -- nice point about inferential distance. This can lead to misunderstandings from both directions:
Youth can hear elders make an argument that sounds overly opaque, technical, and unfamiliar to them, given the big inferential distance involved (although it would sound utterly clear & persuasive to the elder's professional colleagues), and dismiss it as incoherent.
Elders can see youth ignoring their arguments (which seem utterly clear & persuasive to them), get exasperated that they've invested decades learning about something only to be dismissed by people who don't know nearly as much, and who can't be bothered to do the work to overcome the inferential distance, and then the elders go into 'trust my authority' mode, which sounds domineering & irrational to the youth.
It's worth being careful about both of these failure modes (which I've been guilty of, plenty of times, from both sides, at different ages).
Sonia -- excellent points. Strongly agree.
EA needs to be genuinely inclusive not just in terms of sex, race, nationality, etc., but in terms of social class and political values. And many of the recent discussions in EA Forum community posts might look quite odd and alienating to people who have experienced and enjoyed the kind of blunt, unpretentious, thick-skinned, working class culture that you mentioned.