Posts

2022 AI expert survey results 2022-08-04T15:54:09.651Z
Ajeya's TAI timeline shortened from 2050 to 2040 2022-08-03T00:00:47.744Z
More funding is really good 2022-06-25T22:00:49.858Z
Choosing causes re Flynn for Oregon 2022-05-18T03:59:52.535Z
Great Power Conflict 2021-09-15T15:00:47.914Z
The Governance Problem and the "Pretty Good" X-Risk 2021-08-28T20:00:49.156Z
Peter Singer – Good Charity, Bad Charity 2013-08-10T16:01:00.824Z
Nick Bostrom – Existential Risk Prevention as Global Priority 2013-02-01T17:00:24.561Z
Scott Alexander – A Modest Proposal 2008-11-26T17:00:16.133Z
Peter Singer – Famine, Affluence, and Morality 1972-03-01T17:00:52.630Z

Comments

Comment by Zach Stein-Perlman (zsp) on Classifying sources of AI x-risk · 2022-08-08T18:38:11.487Z · EA · GW

I'm a fan of typologies/trees like this.

If you liked this post, you might also be interested in: 

Comment by Zach Stein-Perlman (zsp) on Why does no one care about AI? · 2022-08-07T22:59:01.167Z · EA · GW

without mainstream support of the idea that AI is a risk, I feel like it's going to be a lot harder to get to where we want to be.

The consequences of everyone freaking out about AI are not obviously or automatically good. (I'm personally optimistic that they could be good, but people waking up to AI by default entails more racing toward powerful AI.)

Comment by Zach Stein-Perlman (zsp) on Why does no one care about AI? · 2022-08-07T22:47:44.780Z · EA · GW

If you speak to a stranger about your worries of unaligned AI, they'll think you're insane (and watch too many sci-fi films).

Have you actually tried this? (How many times?)

Talking to lots of normies about AI risk is on my to-do list, but in the meantime there are some relevant surveys. While it's true that people don't often bring up AI as a thing they're concerned about, if you ask them about AI, a lot of people seem pretty concerned that we won't be able to control it (e.g., the third of these surveys finds that 67% of respondents are concerned about "artificial intelligence becoming uncontrollable").

(Kelsey Piper has written up the second survey linked above.)

Comment by Zach Stein-Perlman (zsp) on AI risks: the most convincing argument · 2022-08-06T21:19:31.364Z · EA · GW

See also AGI safety from first principles, AI Safety Public Materials, and maybe AI Safety Arguments.

Comment by Zach Stein-Perlman (zsp) on [link post] The Case for Longtermism in The New York Times · 2022-08-05T16:39:43.212Z · EA · GW

From the comments in the NYT, two notes on communicating longtermism to people-like-NYT-readers:

  1. Many readers are confused by the focus on humans.
  2. Some readers are confused by the suggestion that longtermism is weird (Will: "It took me a long time to come around to longtermism") rather than obvious.

Re 2, I do think it's confusing to act like longtermism is nonobvious unless you're also emphasizing its weird implications, like our calculations being dominated by the distant future and x-risk, and things at least as weird as digital minds filling the universe.

Comment by Zach Stein-Perlman (zsp) on 2022 AI expert survey results · 2022-08-05T00:14:59.691Z · EA · GW

Ah, yes, sorry I was unclear; I claim there's no good way to determine bias from the MIRI logo in particular (or the Oxford logo, or various word choices in the survey email, etc.).

Comment by Zach Stein-Perlman (zsp) on 2022 AI expert survey results · 2022-08-04T22:16:46.245Z · EA · GW
  1. I don't think we have data on selection bias (and I can't think of a good way to measure this).
  2. Yes, the 2019 survey's matched-panel data is certainly comparable, but some other responses may not be comparable (in contrast to our 2022 survey, where we asked the old questions to a mostly-new set of humans).
Comment by Zach Stein-Perlman (zsp) on Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination · 2022-08-04T18:09:12.336Z · EA · GW

Great post!

A related framing I like involves two 'pillars': reduce the alignment tax (similar to your pillar 1) and pay the alignment tax (similar to your pillars 2 & 3). (See Current Work in AI Alignment.)

We could also zoom out and add more necessary conditions for the future to go well. In particular, eventually achieving AGI (avoiding catastrophic conflict, misuse, accidents, and non-AI x-risks) and using AGI well (conditional on it being aligned) carve nature close to its joints, I think.

Comment by Zach Stein-Perlman (zsp) on What reason is there NOT to accept Pascal's Wager? · 2022-08-04T14:38:00.863Z · EA · GW

I'm mostly a utilitarian too, and I mostly support taking gambles to maximize expected value...

...but I'm quite uncomfortable with having to make an arbitrarily large sacrifice (like submitting 3↑↑↑3 minds to torture) for an arbitrarily small probability (like 1/(3↑↑↑3)) of infinite value.

Moral or decision-theoretic uncertainty seems to make it reasonable to reject some wagers, at least.

Comment by Zach Stein-Perlman (zsp) on Ajeya's TAI timeline shortened from 2050 to 2040 · 2022-08-03T22:39:56.583Z · EA · GW

The original title included "median," but I removed it because it made the title so long that "2040" and the paperclip icon didn't fit on the frontpage (where titles are limited to a single line)! I thought "2040" and the link were more important than "median," so I settled for "median" in the body only.

Comment by Zach Stein-Perlman (zsp) on On the Vulnerable World Hypothesis · 2022-08-01T15:50:58.994Z · EA · GW

I haven't finished reading this post yet, but I noticed that you're only considering type-1 risk in Bostrom's typology. Type-2a, type-2b, and type-0 risks don't require "malicious actors" or "actors who want to cause such events" for catastrophe to occur. This is probably fine since surveillance is mostly a response to type-1 risk, but I want to note that there are vulnerabilities other than those you discuss.

Comment by Zach Stein-Perlman (zsp) on The first AGI will be a buggy mess · 2022-07-30T14:12:38.783Z · EA · GW

The belief is that as soon as we create an AI with at least human-level general intelligence, it will be relatively easy to use its superior reasoning, extensive knowledge, and superhuman thinking speed to take over the world.

This depends on what "human-level" means. There is some threshold such that an AI past that threshold could quickly take over the world, and it doesn't really matter whether we call that "human-level" or not.

overall it seems like “make AI stupid” is a far easier task than “make the AI’s goals perfectly aligned”.

Sure. But the relevant task isn't "make something that won't kill you." It's more like "make something that will stop any AI from killing you," or maybe "find a way to do alignment without much cost and without sacrificing much usefulness." If you and I make stupid AI, great, but some lab will realize that non-stupid AI could be more useful, and will make it by default.

Comment by Zach Stein-Perlman (zsp) on Closing the Feedback Loop on AI Safety Research. · 2022-07-30T00:18:27.765Z · EA · GW

I don't know about "no way," but the consensus is that simulation isn't obviously very helpful because an AI could infer that it is simulated and behave differently in simulation, not to mention that sufficiently capable systems could escape simulation for the same reasons that 'keep the AI in a box' is an inadequate control strategy.

Simulation probably isn't useless for safety, but it's not obviously a top priority, and "the creation of an adequate AGI Sandbox" is prima facie intractable.

Comment by Zach Stein-Perlman (zsp) on Some core assumptions of effective altruism, according to me · 2022-07-29T19:00:37.238Z · EA · GW

Lists like this are necessarily imprecise unless we're specific about what it's a list of. Possibilities include necessary conditions for being an EA, or stuff the EA community/movement broadly accepts, or the axiomatic assumptions underlying those beliefs, or common perceptions of the community/movement.

Comment by Zach Stein-Perlman (zsp) on More funding is really good · 2022-07-26T15:50:44.886Z · EA · GW

And:

  • For $10,000,000,000,000, you can buy most of the major tech companies and semiconductor manufacturers.

That would really, really help us make AI go well. Until we can do that, more funding is astronomically valuable. (And $10T is more than 100 times what EA has.)

Comment by Zach Stein-Perlman (zsp) on Making Effective Altruism Enormous · 2022-07-26T02:53:47.107Z · EA · GW

This all sounds reasonable. But maybe if we're clever we'll find ways to spread EA fast and well. In the possible worlds where UGAP or 80K or EA Virtual Programs or the EA Infrastructure Fund didn't exist, EA would spread slower, but not really better. Maybe there's a possible world where more/bigger things like those exist, where EA spreads very fast and well. 

Comment by Zach Stein-Perlman (zsp) on Remuneration In Effective Altruism · 2022-07-26T02:48:35.151Z · EA · GW

I agree that there are some psychological and practical benefits to making and having more money, but I don't think you're "essentially donating 98–99%," since even if you create value 50–100 times your salary, there's no way for you to capture 50–100 times your salary, even if you were totally selfish. The fraction you're "essentially donating" is more like (max possible salary - actual salary) / max possible salary, where "max possible salary" is the amount you would earn if you were totally selfish.
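
As a minimal illustration of that fraction (the salary numbers here are hypothetical, not from the comment above):

```python
# Hypothetical illustration of the "essentially donating" fraction above.
max_possible_salary = 500_000   # assumed: what you'd earn if you were totally selfish
actual_salary = 100_000         # assumed: what you actually take

fraction_essentially_donated = (max_possible_salary - actual_salary) / max_possible_salary
print(fraction_essentially_donated)   # 0.8, i.e. "essentially donating" 80%, not 98-99%
```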

Comment by Zach Stein-Perlman (zsp) on Making Effective Altruism Enormous · 2022-07-25T01:12:37.782Z · EA · GW

(I strongly agree that we should be nice and welcoming. I still think trying to make EA enormous quickly is good if you can identify reasonable such interventions.)

Comment by Zach Stein-Perlman (zsp) on Odds of recovering values after collapse? · 2022-07-24T19:57:00.598Z · EA · GW

This is more pessimistic than I expected/believe. (I didn't post my own answer just because I think it depends a lot on what collapse looks like and I haven't thought much about that, but I'm pretty sure I'd be more optimistic if I thought about it for a few hours.) Why do you think we're likely to get worse values?

Comment by Zach Stein-Perlman (zsp) on Odds of recovering values after collapse? · 2022-07-24T19:43:24.116Z · EA · GW
  • It depends on what causes the collapse
  • I believe that variance between possible levels of coordination after collapse matters more than variance between possible values after collapse (and I'm agnostic on tractability)
Comment by Zach Stein-Perlman (zsp) on Making Effective Altruism Enormous · 2022-07-24T16:49:03.875Z · EA · GW

Strong agree; EA being enormous would be good.

I hope we successfully make EA enormous quickly; I hope we pursue interventions for making EA enormous beyond just being more welcoming on the margin.

Comment by Zach Stein-Perlman (zsp) on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-21T16:54:15.521Z · EA · GW

I agree. Sometimes it sounds like the OP would support political interventions like GAP's PAC ("seeking to influence existing non-EA elected officials would be more effective"); I just wanted to note that EA political interventions aren't inevitably Democratic.

Comment by Zach Stein-Perlman (zsp) on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-21T16:35:44.355Z · EA · GW

The *concrete outcome* of Effective Altruism exercising direct political power would be for EA to become a faction of the Democratic party.

It remains to be seen how effective they'll be, but note that e.g. GAP is nonpartisan.

[Edit: oh, the OP already mentioned this!]

Comment by zsp on [deleted post] 2022-07-20T08:24:08.342Z

Scott preferred not to crosspost this since it's not really about EA. I recommend unpublishing.

Comment by Zach Stein-Perlman (zsp) on Enlightenment Values in a Vulnerable World · 2022-07-18T21:24:47.660Z · EA · GW

Max and Sharmake, note that Bostrom does not claim in this piece (or anywhere, as far as I know) that the vulnerable world hypothesis is true. So "global government is the only escape hatch" isn't really his position. (Also note that we could have strong domain-specific global governance without a global government.)

Comment by Zach Stein-Perlman (zsp) on Enlightenment Values in a Vulnerable World · 2022-07-18T21:18:37.753Z · EA · GW

Thanks for your reply.

I disagree that you just "got a little sloppy"; you exaggerate Bostrom's policy recommendations elsewhere too and generally frame the relevant parts of your piece as arguing against Bostrom rather than as arguing against someone who advocates positions that Bostrom merely analyzes. Most readers would get the sense that "Bostrom claims that his global surveillance solution to anthropogenic risks is a one-size-fits-all antidote"; this is false.

And of course I agree--and Bostrom would agree--that there are many possible solutions and countermeasures to dangerous biotechnology. But if we're assuming that a particular technology is a black ball presenting a type-1 vulnerability, as Bostrom does for the sake of illustration in one paragraph, we are necessarily assuming that (1) it devastates civilization by default, so that e.g. PPE won't save us, and (2) it is available to a large number of actors by default, so we need something like mass surveillance to preempt use. So I think you're saying something reasonable, but not really disagreeing with Bostrom.

Comment by Zach Stein-Perlman (zsp) on Enlightenment Values in a Vulnerable World · 2022-07-18T17:27:00.253Z · EA · GW

Bostrom [claims] that even a small credence in the VWH requires continuously controlling and surveilling everyone on earth.

Bostrom does not claim this. Period. (Your reading is good-faith, but Bostrom is frequently misread on this topic by people criticizing him in bad faith, so it's worth emphasizing; and it's just an important point.) He narrowly claims here that mass surveillance would be necessary given "a biotechnological black ball that is powerful enough that a single malicious use could cause a pandemic that would kill billions of people."

Another relevant Bostrom quote:

Comprehensive surveillance and global governance would thus offer protection against a wide spectrum of civilizational vulnerabilities. This is a considerable reason in favor of bringing about those conditions. The strength of this reason is roughly proportional to the probability that the vulnerable world hypothesis is true.

He goes on to discuss the downsides of surveillance and global governance. So your quotes like "Bostrom’s plan [purports to be a] one-size-fits-all antidote" are not correct, and Bostrom would agree with you that totalitarianism and surveillance present astronomical risks.

Comment by Zach Stein-Perlman (zsp) on More funding is really good · 2022-07-15T22:30:42.377Z · EA · GW

Update: GiveWell funds some interventions at more like $10K/life, which naively suggests that the marginal cost per life is about $10K, but maybe those interventions had side effects of gaining information or enabling other interventions in the future and so had greater all-things-considered effectiveness.

Comment by Zach Stein-Perlman (zsp) on Should I donate my earnings when my career is just beginning? · 2022-07-15T15:40:41.699Z · EA · GW

Consider saving first, then giving.

Comment by Zach Stein-Perlman (zsp) on Idea: call-your-representative newsletter · 2022-07-15T00:21:38.369Z · EA · GW

for many representatives, hearing from as few as 5 constituents on something small can make them sit up and take notice.

Is this true? Is there evidence?

Comment by Zach Stein-Perlman (zsp) on Criticism of EA Criticism Contest · 2022-07-14T18:44:31.900Z · EA · GW

Huh. I'm sorry. I hope that experience isn't representative of EA.

Comment by Zach Stein-Perlman (zsp) on Criticism of EA Criticism Contest · 2022-07-14T18:09:34.478Z · EA · GW

(+1)

Comment by Zach Stein-Perlman (zsp) on Criticism of EA Criticism Contest · 2022-07-14T17:56:37.412Z · EA · GW

After a PM conversation with Steve, and pending a more careful review of Zvi's post, I'll note:

  • I agree that Zvi probably meant something pro-market. (I mostly disagree that EA should be much more pro-market than it already is, but that's not the main point here.)
  • Insofar as Zvi is attempting to make the reader believe, without justification, that EA is insufficiently pro-market, that's intellectually lazy. But he's probably just mentioning disagreements to set up the rest of his post rather than to convince, in which case it's not. So I retract "intellectually lazy." (But it is frustrating to a reader who wants to know what Zvi really thinks and why, especially since this isn't the first time Zvi has criticized EA obliquely.)
Comment by Zach Stein-Perlman (zsp) on Criticism of EA Criticism Contest · 2022-07-14T16:03:30.585Z · EA · GW

Thanks for this post.

A few data points and reactions from my somewhat different experiences with EA:

  • I've known many EAs. Many have been vegan and many have not (I'm not). I've never seen anyone "treat [someone] as non-serious (or even evil)" based on their diet.
  • A significant minority achieves high status across EA contexts while loudly disagreeing with utilitarianism.
  • You claim that EA takes as given "Not living up to this list is morally bad. Also sort of like murder." Of course failing to save lives is sort of like murder, for sufficiently weak "sort of." But at the level of telling people what to do with their lives, I've always seen community leaders endorse things like personal wellbeing and non-total altruism (and not always just for instrumental, altruistic reasons). The rank-and-file and high-status alike talk (online and offline) about having fun. The vibe I get from the community is that EA is more of an exciting opportunity than a burdensome obligation. (Yes, that's probably an instrumentally valuable vibe for the community to have -- but that means that 'having fun is murder' is not endorsed by the community, not the other way around.)
  • [Retracted; I generally support noting disagreements even if you're not explaining them; see Zvi's reply] It feels intellectually lazy to "strongly disagree" with principles like "The best way to do good yourself is to act selflessly to do good" and then not explain why. To illustrate, here's my confused reading of your disagreement. Maybe you narrowly disagree that selflessness is the optimal psychological strategy for all humans. But of course EA doesn't believe that either. Maybe you think it does. Or maybe you have a deeper criticism about "The best way to do good yourself"... but I can't guess what that is.
  • Relatedly, you claim that you are somehow not allowed to say real critiques. "There are also things one is not socially allowed to question or consider, not in EA in particular but fully broadly. Some key considerations are things that cannot be said on the internet, and some general assumptions that cannot be questioned are importantly wrong but cannot be questioned." "Signals are strong [real criticism] is unwelcome and would not be rewarded." I just don't buy it. My experiences strongly suggest that the community goes out of its way to be open to good-faith criticism, in more than a pat-ourselves-on-the-back-for-being-open-to-criticism manner. I guess insofar as you have had different experiences that you decline to discuss explicitly, fine, you'll arrive at different beliefs. But your observations aren't really useful to the rest of us if you don't say them, including the meta-observation that you're supposedly not allowed to say certain things.

I think you gesture toward useful criticism. It would be useful for me if you actually made that criticism; you might change my mind about something! But this post doesn't seem written to make it easy for even epistemically virtuous EAs to change their minds, since even if you've correctly identified some good criticism, you don't share it.

Comment by Zach Stein-Perlman (zsp) on Punching Utilitarians in the Face · 2022-07-13T21:28:44.966Z · EA · GW

I think the crux is how the oracle makes predictions? (Assuming it's sufficiently better than random; if it's 50.01% accurate and the difference between the boxes is a factor of 1000 then of course you should just 2-box.) For example, if it just reads your DNA and predicts based on that, you should 1-box evidentially or 2-box causally. If it simulates you such that whichever choice you make, it would probably predict that you would make that choice, then you should 1-box. It's not obvious without more detail how "your good friend" makes their prediction.
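
To make the parenthetical concrete, here's a minimal sketch of the evidential expected-value arithmetic, assuming (my numbers, for illustration) the small box is worth 1 unit, the large box 1000 units, and the oracle is right 50.01% of the time:

```python
# Sketch: evidential expected value of one-boxing vs. two-boxing against a
# barely-better-than-random oracle. Payoffs and accuracy are assumed for illustration.
accuracy = 0.5001        # oracle accuracy, per the parenthetical above
small, large = 1, 1000   # "the difference between the boxes is a factor of 1000"

# One-boxing: you get the large box iff the oracle predicted you correctly.
ev_one_box = accuracy * large
# Two-boxing: you get the small box for sure, plus the large box iff the oracle was wrong.
ev_two_box = small + (1 - accuracy) * large

print(ev_one_box, ev_two_box)    # ~500.1 vs ~500.9
print(ev_two_box > ev_one_box)   # True: two-boxing wins even evidentially at this accuracy
```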

Comment by Zach Stein-Perlman (zsp) on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-13T14:11:16.465Z · EA · GW

“Abstract stairs” was my best guess too. It doesn’t work for me, and I don’t get the second circle.

Comment by Zach Stein-Perlman (zsp) on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-12T15:30:47.087Z · EA · GW

Feedback: I am confused by the logo.

Comment by Zach Stein-Perlman (zsp) on Some research questions that you may want to tackle · 2022-07-11T23:45:53.065Z · EA · GW

You asked for an analysis "even from a total utilitarian, longtermist perspective." From that perspective, I claim that preventing extinction clearly has astronomical (positive) expected value, since variance between possible futures is dominated by what the cosmic endowment is optimized for, and optimizing for utility is much more likely than optimizing for disutility. If you disagree, I'd be interested to hear why, here or on a call.

Comment by Zach Stein-Perlman (zsp) on Some research questions that you may want to tackle · 2022-07-11T23:40:15.843Z · EA · GW

Sure, want to change the numbers by a factor of, say, 10^12 to account for simulation? The long-term effects still dominate. (Maybe taking actions to influence our simulators is more effective than trying to cause improvements in the long-term of our universe, but that isn't an argument for doing naive short-term interventions.)

Comment by Zach Stein-Perlman (zsp) on Some research questions that you may want to tackle · 2022-07-09T05:15:52.371Z · EA · GW

I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular

I think this is wildly overdetermined in favor of longtermism. For example, I think at the current margins, a well-spent dollar has a ~10^-13 chance of making the future go much better, with a value probably more than 10^50 happy human lives (and with a much greater expected value -- arguably infinite, but that's another conversation). So the marginal longtermist dollar is worth much more than 10^37 happy lives in expectation. (That's way more than the number of fish that have ever lived, but for the sake of having a number I think we can safely upper-bound the direct effect of the marginal animal-welfare dollar at 10^0 happy lives.) Given utilitarianism, even if you nudge my numbers quite a bit, I think longtermism blows animal welfare out of the water.
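
A quick sketch of the arithmetic behind that paragraph, using the rough illustrative figures above (they are guesses, as stated, not precise estimates):

```python
# Back-of-the-envelope expected value of a marginal longtermist dollar,
# using the rough figures from the paragraph above.
p_future_goes_much_better = 1e-13   # chance a well-spent marginal dollar makes the future go much better
value_of_good_future = 1e50         # happy human lives, a rough lower bound on the value at stake

ev_longtermist_dollar = p_future_goes_much_better * value_of_good_future
print(f"{ev_longtermist_dollar:.0e} happy lives in expectation")   # 1e+37

# Upper-bounding the direct effect of a marginal animal-welfare dollar at ~10^0 happy lives
# gives a ratio of direct effects of ~10^37 under these assumptions.
ev_animal_welfare_dollar = 1e0
print(f"ratio: {ev_longtermist_dollar / ev_animal_welfare_dollar:.0e}")   # 1e+37
```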

Of course, I don't think a longtermist dollar is actually ~10^40 times more effective than an animal-welfare one, because of miscellaneous side effects of animal welfare spending on the long-term future. But I think those side effects dominate its direct effects. (I have heard an EA working on animal welfare say that they think the effects of their work are basically dominated by side effects on humans' attitudes.) And presumably the side effects aren't greater than the benefits of funding longtermist projects.

Comment by Zach Stein-Perlman (zsp) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-08T15:41:01.383Z · EA · GW

The text is now public. It's short and easily skimmable. The bill wouldn't do much directly but creates a committee that would create a report on catastrophic risk and what to do about it. This is plausibly an important part of the possible futures where the US government responds well to risks from emerging technology.

Comment by Zach Stein-Perlman (zsp) on More funding is really good · 2022-07-05T22:13:13.952Z · EA · GW

(Update: yup)

Comment by Zach Stein-Perlman (zsp) on Future Matters #3: digital sentience, AGI ruin, and forecasting track records · 2022-07-04T18:32:16.445Z · EA · GW

I read the first few paragraphs, and there are a few mistakes:

Robert Long’s Lots of links on LaMDA provides an excellent summary of the saga and the ensuing discussion. We concur with Nick Bostrom’s assessment: “With recent advances in AI (and much more to come before too long, presumably) it is astonishing how neglected this issue still is.”

This strongly suggests that Bostrom is commenting on LaMDA, but he's discussing "the ethics and political status of digital minds" in general.

Eliezer Yudkowsky’s AGI ruin: a list of lethalities has caused quite a stir. He recently announced that MIRI had pretty much given up on solving AI alignment, and in this (very long) post, he states his reasons for thinking that humanity is therefore doomed.

Yudkowsky did not announce this (and indeed it's false; see, e.g., Bensinger's comment), and the "therefore" in the above sentence makes no sense.

Comment by Zach Stein-Perlman (zsp) on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-04T16:01:45.719Z · EA · GW

Good post!

My personal take is that the numbers used are too low, and this matches my sense of the median AI Safety researcher's opinion. My personal rough guess would be 25% x-risk conditional on making AGI, and median AGI by 2040, which sharply increases the probability of death from AI to well above natural causes.

I agree that your risk of dying from misaligned AI in an extinction event in the next 30 years is much more than 3.7%. Actually, Carlsmith would too -- he more than doubled his credence in AI existential catastrophe by 2070 since sharing the report (see, e.g., the "author's note" at the end of the arxiv abstract).

(Edit: modulo mic's observation, but still.)

Comment by Zach Stein-Perlman (zsp) on Strategic Perspectives on Long-term AI Governance: Introduction · 2022-07-04T15:34:12.005Z · EA · GW

Sure, of course. I just don’t think that looks like adopting a particular perspective.

Comment by Zach Stein-Perlman (zsp) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T04:18:28.943Z · EA · GW

I'm curious why some people strong-downvoted this and the LessWrong linkpost. Feel free to PM me if relevant.

Comment by Zach Stein-Perlman (zsp) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T03:15:47.786Z · EA · GW

We should reserve judgment until seeing what the bill really does, and whether some version of it will successfully become law, but this is exciting.

Comment by Zach Stein-Perlman (zsp) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T03:10:27.606Z · EA · GW

The bill is sponsored by Republican Senator Rob Portman and cosponsored by Democratic Senator Gary Peters.

Comment by Zach Stein-Perlman (zsp) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T03:09:39.156Z · EA · GW

Here it is, although the text isn't there yet (I expect it'll be added there soon but I'm not an expert on congress.gov).

Comment by Zach Stein-Perlman (zsp) on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-03T16:31:12.354Z · EA · GW

I'm currently thinking about questions including "how big is AI as a political topic" and "what does the public think"; any recommended reading?