Posts

Technological developments that could increase risks from nuclear weapons: A shallow review 2023-02-09T15:41:54.858Z
[Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023) 2023-02-07T14:52:27.370Z
“My Model Of EA Burnout” (Logan Strohl) 2023-02-03T00:23:41.996Z
[TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023) 2023-01-20T20:37:02.924Z
Nearcast-based “deployment problem” analysis (Karnofsky, 2022) 2023-01-09T16:57:36.853Z
One Hundred Opinions on Nuclear War (Ladish, 2019) 2022-12-29T20:23:46.011Z
“I'm as approving of the EA community now as before the FTX collapse” (Duncan Sabien) 2022-12-15T21:12:08.038Z
The Pentagon claims China will likely have 1,500 nuclear warheads by 2035 2022-12-12T18:12:03.414Z
CERI Research Symposium Presentations (incl. Youtube links) 2022-09-24T03:45:04.609Z
Feedback I've been giving to junior x-risk researchers 2022-08-15T20:46:24.028Z
Spicy takes about AI policy (Clark, 2022) 2022-08-09T13:49:32.080Z
AI timelines by bio anchors: the debate in one place 2022-07-30T23:04:48.901Z
Odds of recovering values after collapse? 2022-07-24T18:20:04.515Z
Will Aldred's Shortform 2022-07-21T08:29:28.559Z
GPT-2 as step toward general intelligence (Alexander, 2019) 2022-07-18T16:14:10.508Z
Beware Isolated Demands for Rigor (Alexander, 2014) 2022-07-18T11:48:27.593Z
Existential Biorisk vs. GCBR 2022-07-15T21:16:26.311Z
[EAG talk] The likelihood and severity of a US-Russia nuclear exchange (Rodriguez, 2019) 2022-07-03T13:53:02.040Z
Nuclear risk research ideas: Summary & introduction 2022-04-08T11:17:09.884Z
Miscellaneous & Meta X-Risk Overview: CERI Summer Research Fellowship 2022-03-30T02:45:47.148Z
Nuclear Risk Overview: CERI Summer Research Fellowship 2022-03-27T15:51:47.016Z

Comments

Comment by Will Aldred on FLI open letter: Pause giant AI experiments · 2023-03-30T15:57:26.671Z · EA · GW

The Forbes article “Elon Musk's AI History May Be Behind His Call To Pause Development” discusses some interesting OpenAI history and offers an explanation of how this FLI open letter may have come to be.

(Note: I don’t believe this is the only explanation, or even the most likely one; if pushed, I’d assign it maybe 20% - with large uncertainty bars - of the counterfactual force behind the FLI letter coming into existence. Note also: I’ve signed the letter; I think it’s net positive.)

Some excerpts:

OpenAI was founded as a nonprofit in 2015, with Elon Musk as the public face of the organization. [...] OpenAI was co-founded by Sam Altman, who butted heads with Musk in 2018 when Musk decided he wasn’t happy with OpenAI’s progress. [...] Musk worried that OpenAI was running behind Google and reportedly told Altman he wanted to take over the company to accelerate development. But Altman and the board at OpenAI rejected the idea that Musk—already the head of Tesla, The Boring Company and SpaceX—would have control of yet another company.

“Musk, in turn, walked away from the company—and reneged on a massive planned donation. The fallout from that conflict, culminating in the announcement of Musk’s departure on Feb 20, 2018, [...],” Semafor reported last week.

When Musk left his stated reason was that AI technology being developed at Tesla created a conflict of interest. [...] And while the real reason Musk left OpenAI likely had more to do with the power struggle reported by Semafor, there’s almost certainly some truth to the fact that Tesla is working on powerful AI tech.

The fact that Musk is so far behind in the AI race needs to be kept in mind when you see him warn that this technology is untested. Musk has had no problem with deploying beta software in Tesla cars that essentially make everyone on the road a beta tester, whether they’ve signed up for it or not.

Rather than issuing a statement solely under his own name, it seems like Musk has tried to launder his concern about OpenAI through a nonprofit called the Future of Life Institute. But as Reuters points out, the Future of Life Institute is primarily funded by the Musk Foundation.

Of course, there’s also legitimate concern about these AI tools. [...]

Musk was perfectly happy with developing artificial intelligence tools at a breakneck speed when he was funding OpenAI. But now that he’s left OpenAI and has seen it become the frontrunner in a race for the most cutting edge tech to change the world, he wants everything to pause for six months.

Comment by Will Aldred on FLI open letter: Pause giant AI experiments · 2023-03-30T14:35:42.135Z · EA · GW

Scott Aaronson, a prominent quantum computing professor who's spent the last year working on alignment at OpenAI, has just written a response to this FLI open letter and to Yudkowsky's TIME piece: "If AI scaling is to be shut down, let it be for a coherent reason".

I don't agree with everything Scott has written here, but I found these parts interesting:

People might be surprised about the diversity of opinion about these issues within OpenAI, by how many there have discussed or even forcefully advocated slowing down.

...

Why six months? Why not six weeks or six years? [...] With the “why six months?” question, I confess that I was deeply confused, until I heard a dear friend and colleague in academic AI, one who’s long been skeptical of AI-doom scenarios, explain why he signed the open letter. He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we’re writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5? I realized that, while six months might not suffice to save human civilization, it’s just enough for the more immediate concern of getting papers into academic AI conferences.

...

Look: while I’ve spent multiple posts explaining how I part ways from the Orthodox Yudkowskyan position, I do find that position intellectually consistent, with conclusions that follow neatly from premises. The Orthodox, in particular, can straightforwardly answer all four of my questions above [...]

On the other hand, I'm deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse. I’d feel less confused if such people came out and argued explicitly: “yes, we should also have paused the rapid improvement of printing presses to avert Europe’s religious wars. Yes, we should’ve paused the scaling of radio transmitters to prevent the rise of Hitler. Yes, we should’ve paused the race for ever-faster home Internet to prevent the election of Donald Trump. And yes, we should’ve trusted our governments to manage these pauses, to foresee brand-new technologies’ likely harms and take appropriate actions to mitigate them.”

Comment by Will Aldred on MichaelA's Shortform · 2023-03-28T18:05:38.219Z · EA · GW

I’ve heard it argued that Singapore could be surprisingly important for reducing AI risk in part because China often copies Singaporean laws/policies.

Interesting!

(And for others who might be interested and who are based in Singapore, there's this Singapore AI Policy Career Guide.)

Comment by Will Aldred on Announcing the ERA Cambridge Summer Research Fellowship · 2023-03-16T20:14:17.497Z · EA · GW

I encourage those considering applying to note the following advice:

Don’t spend too long thinking about the pros and cons of applying to an opportunity (e.g., a job, grant, degree program, or internship). Assuming the initial application wouldn’t take you long, if it seems worth thinking hard about, you should probably just apply instead.

(From the post “Don’t think, just apply! (usually)”; I think this advice is especially appropriate for early career individuals, hence my signal boosting it here.)

Comment by Will Aldred on Suggestion: A workable romantic non-escalation policy for EA community builders · 2023-03-08T18:44:23.681Z · EA · GW

Attendees should focus on getting as much ea-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value.

Strongly agree. Moreover, I think it's worth us all keeping in mind that the only real purpose of the EA community is to do the most good. An EA community in which members view, for example, EA Globals as facilitating afterparties at which to find hookups is an EA community that is likely to spend more {time, money, attention} on EAGs and other events than would do the most good.


If the current resource level going toward EA community events does the most good,
I desire to believe that the current resource level going toward EA community events does the most good;
If less {time, money, attention} spent on EA community events does the most good,
I desire to believe that less {time, money, attention} spent on EA community events does the most good;
Let me not become attached to beliefs I may not want.

Comment by Will Aldred on Polyamory and dating in the EA community · 2023-02-14T12:40:32.570Z · EA · GW

Many thanks to all who contributed toward this post. I agree with many of your points, and I appreciate the roundedness and nuance you bring to the topic.

To add to your “If you’re considering poly” section, I’m excerpting below a Clearer Thinking podcast episode (timestamp: 54:03–73:12) which I think does a great job of discussing polyamory in a balanced way. (Spencer Greenberg is the host, and Sam Rosen is the guest – in the episode, Sam talks about his experience with being poly; I’ve bolded the parts which speak to me the most.)

SPENCER: So let's switch topics to polyamory.

...

SPENCER: I have to say, the only examples of polyamorous couples that have lasted a really long time (like five plus years) that I know of, have been the hierarchical form where they have a primary that they're very committed to, and then they have secondary partners. But that being said, I'm sure there are examples of the more flexible kind lasting a long time, I just, I'm not as aware of it.

...

SAM: I've actually found that when people try to get other people to become poly that aren't already poly that tends to — it's very hard to get someone who's not already comfortable with that dynamic to become comfortable with it. I don't know what's going on with that. But yeah, all of us were already poly and already pretty chill people and there's not much to fight about.

SPENCER: Okay, what about jealousy, though? Because that's the natural question is like, “Okay, you're spending the night with your girlfriend and your wife feels like seeing you that night.” You can imagine there's a lot of opportunities for jealousy to flare up. And I think to a lot of people, just the idea that their partner might be having sexual relations with another person might make them insanely jealous, just that concept by itself. So what are your thoughts on jealousy?

SAM: So I think that in the same way that if you're like, in a room with an annoying noise, or bad smell long enough, you don't smell it anymore. I think jealousy has a similar thing where you — if you are poly for long enough, you just kind of get used to that feeling, and it doesn't even feel bad. I don't even really feel jealousy like I used to anymore just because I've been poly for so long.

SPENCER: So at the beginning, did you feel significant jealousy?

SAM: Yeah, I felt a lot of jealousy at the beginning.

SPENCER: And so why did you keep pushing through that? Why did you continue being poly?

SAM: I just felt like the benefits of having fun, new partners outweighed the costs of jealousy and it was a simple cost-benefit analysis.

SPENCER: I agree with you. I don't think it's [love is] zero-sum. I think someone can genuinely, deeply love two people and it doesn't necessarily — loving one does not necessarily interfere with loving the other just like loving one sibling doesn't make you love the other sibling less.

SAM: And to push back against some polyamory rhetoric, a lot of poly people say it's infinite, it's not zero-sum at all. Like I don't think that I could love (romantically) four people at the same time. Like, I think that would just be — I don't think I would really deeply feel the same way about them because I could just not have the emotional energy. Like, maybe if we all lived together, I could see them all the time then I could do that. But there's something about — I don't have enough emotional energy in the day to think about all four people in a very positive way. There's something that feels like it's not truly non-zero-sum.

SPENCER: Right? And clearly, time is zero-sum. But you only have so much time. So the more partners you have you essentially are taking away time from another partner at some point, right?

SAM: Yeah, absolutely. And that's, I've found that having two partners is optimal for me, like a wife and a girlfriend. When I start having more I find that the relationships start degrading in quality because I don't pay enough attention to each individual partner. And that just is a fact about my time and how I'm able to divvy up my affection.

SPENCER: Well, you know, another factor I think that comes in with the idea of polyamory is stability. I think it's probably true (I'm curious if you agree) that polyamory, all else being equal, might be less stable than monogamy because you have a situation where, you know, there's just more parties involved, there's more people that could get upset about things, there's more likelihood of shifting dynamics.

SAM: There's just more moving parts that could get a monkey wrench into them.

SPENCER: Exactly. And also more possibility of emotional flare-ups because one person is like, “I don't get enough time and the other person's getting more time,” or jealousy, or your secondary suddenly wants to be your primary, right? And then it's like, well, what is that dynamic like?

SAM: Yeah, I think polyamory is a bit high-risk/high-reward in that sense that like, I think they're slightly less stable, but I think they’re kind of more fun. So I think it is true that it's…there's more risks of like flare-ups, as you say. And I think if you don't have the skill of handling interpersonal conflicts well, you just shouldn't be poly. I think that you should know yourself and think, “Am I the sort of person that can comfortably handle/communicate my needs without it being a shouting match?”

SPENCER: Yeah, it seems like really clear, honest communication is just absolutely essential if you're going to navigate the complexity of multiple people's emotions simultaneously, including your own. What other traits would you say are really important if you're going to try polyamory?

SAM: I think innate low jealousy is probably really, really helpful. Even though I got over my jealousy, I think if you just start out kind of low, it's probably easier.

SPENCER: Well, I would just add that I think some people are just naturally more monogamous and that some are more naturally polyamorous. Like, I know people that once they have a partner, they actually just seem to have no attraction to anyone else, and the idea of being with anyone else is just odious to them. Whereas other people, it seems like when they're with one partner, they still actually feel a lot of attraction to others. And you know, if they're ethical, they're not going to cheat, but they still have those feelings. And then I actually think there might be a third type, where it's something like, when they're in a monogamous relationship, they're just attracted to that one person, but as soon as they're in a non-monogamous relationship, they’re actually attracted to multiple people, so is there something like a switch that can flip based on what the rules in the relationship are?

SAM: Yeah, I have two thoughts. One is, I don't think polyamory will work for everyone.

SPENCER: What percentage of people do you think would be happiest in a polyamorous situation?

SAM: My guess is 10% of people.

SPENCER: Okay, yeah.

SAM: It's a pretty small number. Now, obviously, I could change my mind on this. But I don't think that — if you don't have good communication techniques, and have already low jealousy and things like that — that you can do it without it being a disaster for everyone. And also people get into polyamory under duress sometimes where a person's like, “I want to break up with you,” and the other person's like, “Well, let’s be poly instead?” And then that's a kind of a tragic, unhappy situation because you're like…almost being…you're not happy with the arrangement, you're kind of just agreeing to it.

SPENCER: Right, right. And I think that, you know, people can certainly get really badly hurt if they're pushed into a polyamorous situation that they don't feel good about. And one has to be really careful about that.

Comment by Will Aldred on Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge] · 2023-01-24T18:09:13.062Z · EA · GW

Hey (I just met you), I appreciate this post, both the content and (this is crazy) the outstanding title :)

Comment by Will Aldred on David Krueger on AI Alignment in Academia and Coordination · 2023-01-07T22:11:12.948Z · EA · GW

I don't think alignment is a problem that can be solved. I think we can do better and better. But to have it be existentially safe, the bar seems really, really high and I don't think we're going to get there. So we're going to need to have some ability to coordinate and say let's not pursue this development path or let's not deploy these kinds of systems right now.

Holden makes a similar point in "Nearcast-based 'deployment problem' analysis" (2022):

I don’t like the framing of “solving” “the” alignment problem. I picture something like “Taking as many measures as we can (see previous post) to make catastrophic misalignment as unlikely as we can for the specific systems we’re deploying in the specific contexts we’re deploying them in, then using those systems as part of an ongoing effort to further improve alignment measures that can be applied to more-capable systems.” In other words, I don’t think there is a single point where the alignment problem is “solved”; instead I think we will face a number of “alignment problems” for systems with different capabilities. (And I think there could be some systems that are very easy to align, but just not very powerful.) So I tend to talk about whether we have “systems that are both aligned and transformative” rather than whether the “alignment problem is solved.”

Comment by Will Aldred on How to create curriculum for self-study towards AI alignment work? · 2023-01-07T22:05:01.356Z · EA · GW
  1. Ngo's AGI safety fundamentals curriculum
  2. Karnofsky's 'Getting up to speed on AI alignment' guidance and reading list
Comment by Will Aldred on [Fiction] Improved Governance on the Critical Path to AI Alignment by 2045. · 2023-01-06T18:40:09.619Z · EA · GW

How we might make big improvements to decisionmaking via mechanisms like futarchy

On this point, see 'Issues with Futarchy' (Vaintrob, 2021). Vaintrob writes the following, which I take to be her bottom line: "The EA community should not spend resources on interventions that try to promote futarchy."

Comment by Will Aldred on Is it better to be a wild rat or a factory farmed cow? A systematic method for comparing animal welfare. · 2022-12-29T10:22:27.228Z · EA · GW

(I stumbled onto this post while exploring the existing literature adjacent to Bob Fischer's/Rethink Priorities' The Moral Weight Project Sequence.)

I found this post very interesting. Here are some pros and cons I've noted down on your factors, scores, and metric criteria scores:

Pros

  • Clearly enumerated strengths and weaknesses according to desiderata
  • Compatible with expressing uncertainty (e.g. via ranges)
  • Simple and single-axis
  • Potentially also compatible with rating human welfare

Cons

  • Leans towards promoting ‘ease of measurement’, which might miss important but hard-to-measure things
  • Likely to be sensitive to weightings which are not very robustly grounded
  • Unclear how to account for indirect and long-term effects
  • Largely incompatible with rating welfare of artificial beings
Comment by Will Aldred on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-12-28T18:56:57.217Z · EA · GW

See also the following posts, published a few months after this one, which discuss AGI race dynamics (in the context of a fictional AI lab named Magma):

Comment by Will Aldred on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-12-28T18:56:26.896Z · EA · GW

Thanks for this very illuminating post.

One thing:

I am concerned that at some point in the next few decades, well-meaning and smart people who work on AGI research and development, alignment and governance will become convinced they are in an existential race with an unsafe and misuse-prone opponent [emphasis added].

Most people who've thought about AI risk, I think, would agree that most of the risk comes not from misuse, but from accidents (i.e., not realizing the prepotent AI one is deploying is misaligned).[1] Therefore, being convinced the opponent is misuse-prone is actually not necessary, I don't think, to believe one is in an existential race. All that's necessary is to believe there is an opponent at all.

  1. ^

    I'd define a prepotent AI system (or cooperating collection of systems) as one that cannot be controlled by humanity, and which is at least as powerful as humanity as a whole with respect to shaping the world. (By this definition, such an AI system need not be superintelligent, or even generally intelligent or economically transformative. It may have powerful capabilities in a narrow domain that enable prepotence, such as technological autonomy, replication speed, or social manipulation.)

Comment by Will Aldred on Will Aldred's Shortform · 2022-12-26T18:01:50.588Z · EA · GW

Looking back on leaving academia for EA-aligned research, here are two things I'm grateful for:

  1. Being allowed to say why I believe something.
  2. Being allowed to hold contradictory beliefs (i.e., think probabilistically).

In EA research, I can write: 'Mortimer Snodgrass, last author on the Godchilla paper (Gopher et al., 2021), told me "[x, y, z]".'

In academia, I had to find a previous paper to cite for any claim I made in my paper, even if I believed the claim because I'd heard it elsewhere. (Or, rather, I did the aforementioned for my supervisor's papers - I used to be the research assistant who cherry-picked citations off Google Scholar.)

In EA research, I can write, 'I estimate that the model was produced in May 2021 (90% confidence interval: March–July 2021)', or, 'I'm about 70% confident in this claim', and even, 'This paper is more likely than not to contain an important error.'

In academia, I had to argue for a position, without conceding any ground. I had to be all-in on whatever I was claiming; I couldn't give evidence and considerations for and against. (If I did raise a counterargument, it would be as setup for a counter-counterargument.)

That's it. No further point to be made. I'm just grateful for my epistemic freedom nowadays.

Comment by Will Aldred on [deleted post] 2022-12-26T14:43:36.011Z

This perspective sounds reasonable to me. Thanks.

Comment by Will Aldred on [deleted post] 2022-12-25T14:11:20.381Z

(Note that I've addressed other parts of your comment in a separate response.)

Moreover I don't feel like I really understand your motivations for writing this essay.

I’ve sat with this part of your comment for a few minutes, and, to be honest, it makes me feel a bit… unsafe?

I am aware that some posts include a “Why did I write this?” section (see, e.g., this one), or similar, and that this is very helpful because it makes clear the decision relevance of the post (among maybe other things).

But I’m fairly sure that most posts don’t have a section like this, and that most authors don’t explicitly address their motivations for writing their post.

Like, I started putting pen to paper (or, rather, fingers to keyboard) on something I’d been thinking about lately, and that’s how the post came into existence. I think that a decent fraction of forum posts get written in roughly this manner?

(Some of my other motivation for writing this post was Cunningham's Law-related, which I think is also a pretty regular motivation?)

I personally find it motivating to publish things I’ve written: the feeling of creating something and having it be permanent somewhere helps get me to actually sit down and put pen to paper in the first place. (Permanent, at least, within the bounds of my foreseeable lifetime.) This is a behavior I want to reinforce; I believe it’s valuable.

But now your comment is making me second-guess this learning by writing habit that I thought I wanted to cultivate.

(If your reason for questioning my motivations is something along the lines of, “I think it’s net-negative that you’ve published this post, because it’s not productive or decision relevant, and so I don’t see why you wrote it in the first place. Moreover, it’s drawing attention away from other posts on this forum that are productive and decision relevant”, then I think I (and most other authors?) would have been happy for you to simply down-vote. (Note that I do think the other points in your comment on my post are valuable, and I appreciate you making them - I’m talking here about your questioning my motivations for writing the post at all.))

As things stand, I feel like I’m being made to reexamine and defend the existence of my post, and this feels like… an unwarranted burden?

I mean, getting into the specifics of the post, I think when I started writing I envisaged it being more about applause lights in general and less about diversity in particular, but the way the post ended up is just, kind of, how it happened to write itself.

Am I missing something? I currently feel genuinely unsure about publishing posts in future. (For instance, whether I should evaluate drafts once I’m done writing, and only publish if they meet a certain bar; whether I should move away from the sit-and-start-writing process that led to this post’s creation, and toward only writing posts that have clear theory of change/decision relevance from the get-go.)

Comment by Will Aldred on [deleted post] 2022-12-25T12:02:49.194Z

Direct response:

Many thanks for your comment, Peter.

surely there's way more that can be done to fix this other than "distort the bar" to let the presumably less good people in?

Reading this comment makes me realize that perhaps a significant fraction of the awkwardness and not-facing-the-issue that Alice was met with was due to people in the room realizing, upon hearing Alice’s thoughts, that it was too late to promote diversity, and that this made them feel icky. And that all that could be done was to do better next time around.

(It probably wasn’t clear in the post, but the context was that all the applications were in, and they’d all been evaluated and ranked already. At that point, it does seem that there’s not much - if anything - that can be done other than distorting the bar?)

For what it’s worth, I believe Alice and co. are working on things for their next internship round like targeted outreach to underrepresented groups at the bottom-of-the-pipeline level (to help more of these people get into x-risk reduction and/or EA, e.g., by participating in seminar programmes), and trying to especially encourage these folks to apply to internships/fellowships (to help more of these people move up the EA-aligned research pipeline).

I think this is the core problem with regard to diversity that we should work to address. Why are your best applicants male and white? Surely this isn't a fact about x-risk as a field - that there's something about being male and white that makes you better at addressing x-risk.

I agree that this is a problem to be worked on. I also notice that this discussion might be prone to talking-past-one-another on account of setting different targets. (For instance, I’m not sure whether I fully agree with you or only partially agree with you.)

As a toy example to illustrate my different targets point, let’s consider just technical AI safety. Now, I don’t know the exact numbers, but within this toy example let’s say that, of the undergraduate population as a whole that studies relevant subjects (like computer science): 80% are male; 65% are white. To me, this would imply that trying to go much above 20% female and 35% non-white working in technical AI safety is going to be difficult, and that there’ll be increasingly diminishing returns, just by the nature of the statistics at play here, to the resources (e.g., effort, time) put towards trying to achieve >20% and >35%. Like, getting to 50% and 50% would be extremely costly, I think, and so 50% would not be an appropriate target.

Therefore, I’m saying - as an independent impression - that it is a fact about technical AI safety as a field that we should expect most of the best researchers to be male and white (around 80% and 65%, respectively, within my example). There’s the separate problem of promoting diversity within the relevant populations (e.g., CS undergrads) that AI safety is drawing from, but I don't think that problem falls within AI safety field-builders’ purview.

Comment by Will Aldred on [deleted post] 2022-12-25T10:57:30.292Z

Meta-level response:

I notice that it would be helpful if there were some way of seeing whether it's your first point,

IMO your thinking about diversity here really lacks nuance, basically misses the main problem, and leaves me feeling troubled

or your second point,

Note that, at least in the United States, doing this is usually illegal

that’s garnering most of the upvotes and agreement votes.

Because the second point is a statement of fact that, while being important in general, isn't so important to me. Whereas the first point could potentially majorly update my all-things-considered belief on the topic at hand, and cause me to change my actions.

 

(My guess, however, is that this doesn’t happen frequently enough for there to be, for example:

  • a new Forum norm like making separate points as separate comments
  • a new Forum feature like being able to add multiple points in a comment at which others can agree/disagree vote)
Comment by Will Aldred on The ‘Old AI’: Lessons for AI governance from early electricity regulation · 2022-12-19T17:03:50.364Z · EA · GW

See also Helen Toner's 80k podcast episode, in which she talks about the electricity-AI analogy (timestamp: 23:40–30:50).

Comment by Will Aldred on The Pentagon claims China will likely have 1,500 nuclear warheads by 2035 · 2022-12-12T18:32:10.689Z · EA · GW

Also, here's a meme I enjoyed:

Comment by Will Aldred on The Pentagon claims China will likely have 1,500 nuclear warheads by 2035 · 2022-12-12T18:12:49.730Z · EA · GW

People who found this post interesting may also want to check out Michael Aird's database of nuclear risk estimates and/or the Metaculus tournaments on nuclear risk (there's a short-range one and a long-range one).

Comment by Will Aldred on The Pentagon claims China will likely have 1,500 nuclear warheads by 2035 · 2022-12-12T18:12:32.296Z · EA · GW

Incidentally, while I am a fan of the Bulletin of the Atomic Scientists' Nuclear Notebook, I'm not a huge fan of their flagship product, the Doomsday Clock.

Mainly because, whilst it has the appearance of being quantitative - the Doomsday Clock is currently at 100 seconds to midnight - it doesn't translate well into probabilities. At 100 seconds to midnight, how high, actually, is the risk of destruction? What about at 60 seconds to midnight? Is the scale linear? For more on this, I recommend Christian Ruhl's post, "Building a Better Doomsday Clock".

Comment by Will Aldred on Some comments on recent FTX-related events · 2022-11-12T18:40:18.649Z · EA · GW

The FTX Future Fund launched in Feb 2022, and Open Phil were hiring for a program officer for their new Global Health and Wellbeing program in Feb 2022 (see here).

For context, all FTX funds go/went to longtermist causes; Open Phil currently has two grantmaking programs (see here): one in Longtermism, and the other being the Global Health and Wellbeing program, which launched - I assume - around Feb '22.

So my guess, though I'm not certain, is that the launches of FTX Future Fund and Open Phil's Global Health and Wellbeing program were linked, and that Open Phil did increase its neartermist:longtermist funding ratio when FTX funds became available.

Comment by Will Aldred on Some comments on recent FTX-related events · 2022-11-11T14:42:22.954Z · EA · GW

(It's interesting to note that, at present, my above comment is on -1 agreement karma after 50 votes. This suggests that the question of rebalancing the neartermist:longtermist funding ratio is genuinely controversial, as opposed to there being a community consensus either way.)

Comment by Will Aldred on Some comments on recent FTX-related events · 2022-11-11T10:01:37.240Z · EA · GW

Thanks for the post, I appreciate the clarity it brings.

Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.

Would it not make sense for Open Phil to shift some of its neartermist/global health funds to longtermist causes?

Although any neartermist:longtermist funds ratio is, in my opinion, fairly arbitrary, this ratio has increased significantly following the FTX event. Thus, it seems to me that Open Phil should maybe consider acting to rebalance it.

(I'd be curious to hear a solid counterargument.)

Comment by Will Aldred on Paper summary: The Epistemic Challenge to Longtermism (Christian Tarsney) · 2022-10-12T01:42:42.147Z · EA · GW

See also Michael Aird's comments on this Tarsney (2020) paper. His main points are:

  • 'Tarsney's model updates me towards thinking reducing non-extinction existential risks should be a little less of a priority than I previously thought.' (link to full comment)
  • 'Tarsney seems to me to understate the likelihood that accounting for non-human animals would substantially affect the case for longtermism.' (link)
  • 'The paper ignores 2 factors that could strengthen the case for longtermism - namely, possible increases in how efficiently resources are used and in what extremes of experiences can be reached.' (link)
  • 'Tarsney writes "resources committed at earlier time should have greater impact, all else being equal". I think that this is misleading and an oversimplification. See Crucial questions about optimal timing of work and donations and other posts tagged Timing of Philanthropy.' (link)
  • 'I think it'd be interesting to run a sensitivity analysis on Tarsney's model(s), and to think about the value of information we'd get from further investigation of: 
    • how likely the future is to resemble Tarsney's cubic growth model vs his steady model
    • whether there are other models that are substantially likely, whether the model structures should be changed
    • what the most reasonable distribution for each parameter is.' (link)
Comment by Will Aldred on Roodman's Thoughts on Biological Anchors · 2022-09-15T19:33:24.626Z · EA · GW

Roodman's proposed "restriction that the various frameworks agree" makes no sense.

I'm with you. I think Roodman must disagree with the idea of giving probabilities to different (and necessarily conflicting) models of the world, but to me this seems like an odd position to hold. I might also be missing something.

Comment by Will Aldred on 'Artificial Intelligence Governance under Change' (PhD dissertation) · 2022-09-15T19:16:38.234Z · EA · GW

Belated congrats on completing your PhD! I'm looking forward to reading the sections you've highlighted.

Comment by Will Aldred on Alex Lawsen On Forecasting AI Progress · 2022-09-06T14:49:39.991Z · EA · GW

We discuss [...] how to develop your own inside views about AI Alignment.

See also Neel Nanda's (2022):

Comment by Will Aldred on Neuroscience of time perception? · 2022-09-04T22:30:14.117Z · EA · GW

OP, you might want to check out 'Research Summary: The Subjective Experience of Time' (Schukraft, 2020).

Comment by Will Aldred on Climate Change & Longtermism: new book-length report · 2022-08-27T21:29:58.439Z · EA · GW

I notice a two-karma system has been implemented in at least one EA Forum post before; see the comments section of this "Fanatical EAs should support very weird projects" post.

Comment by Will Aldred on Introduction to Fermi estimates · 2022-08-26T14:52:26.495Z · EA · GW

Love the post. For more examples – some of them EA-oriented – of Fermi estimates / back-of-the-envelope calculations (BOTECs), see Botec Horseman's Tweets.

(Note: Botec Horseman is neither myself nor Nuño.)

Comment by Will Aldred on Could realistic depictions of catastrophic AI risks effectively reduce said risks? · 2022-08-17T20:37:20.971Z · EA · GW

One data point here (I'm unsure how much weight to give it, probably not very much) is the 1983 movie The Day After, which is about the aftermath of nuclear war.

He [President Reagan] wrote in his diary that the film [The Day After] was "very effective and left me greatly depressed", and that it changed his mind on the prevailing policy on a "nuclear war". The film was also screened for the Joint Chiefs of Staff. A government advisor who attended the screening, a friend of Meyer's, told him: "If you wanted to draw blood, you did it. Those guys sat there like they were turned to stone." In 1987, Reagan and Soviet leader Mikhail Gorbachev signed the Intermediate-Range Nuclear Forces Treaty, which resulted in the banning and reducing of their nuclear arsenal. In Reagan's memoirs, he drew a direct line from the film to the signing.

(Wikipedia; emphasis added)

Comment by Will Aldred on A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines. · 2022-08-16T14:57:55.203Z · EA · GW

Thanks for writing this post; I've added it to my collection "AI timelines by bio anchors: the debate in one place".

Comment by Will Aldred on Radio Bostrom: Audio narrations of papers by Nick Bostrom · 2022-08-10T13:46:22.951Z · EA · GW

Tell us which papers should feature in the "Introduction to Nick Bostrom" series, and in what order. We're especially keen for suggestions on what the first three papers should be.

If the series is to be 3 papers long, I'd suggest:

  1. The Vulnerable World Hypothesis
  2. Are You Living in a Computer Simulation?
  3. The Transhumanist FAQ

If the series is to be a dozen papers long, I'd suggest:

  1. What is a Singleton?
  2. The Future of Human Evolution
  3. The Future of Humanity
  4. The Vulnerable World Hypothesis
  5. The Transhumanist FAQ
  6. Are You Living in a Computer Simulation?
  7. Astronomical Waste: The Opportunity Cost of Delayed Technological Development
  8. Where Are They? Why I hope the search for extraterrestrial life finds nothing
  9. A Primer on the Doomsday argument
  10. The Reversal Test: Eliminating Status Quo Bias in Applied Ethics
  11. Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?
  12. Sharing the World with Digital Minds
Comment by Will Aldred on What's the likelihood of irrecoverable civilizational collapse if 90% of the population dies? · 2022-08-09T14:51:42.883Z · EA · GW

(DM'ed you)

Comment by Will Aldred on Risks from atomically precise manufacturing - Problem profile · 2022-08-09T14:14:08.338Z · EA · GW

Widespread access to atomically precise manufacturing could lead to widespread ability to unilaterally produce things as destructive as nuclear weapons.

See below an excerpt from the atomically precise manufacturing (APM) section of Michael Aird's shallow review of tech developments that could increase risks from nuclear weapons (link):

The concern related to nuclear risk is that APM could potentially make proliferation and/or huge nuclear arsenal sizes much more likely. Specifically:

  • APM could make it much harder to monitor/control who has nuclear weapons, including even non-state groups or perhaps individuals.
  • APM could make it much more likely that non-state groups or individuals would not only attain one or a few but rather many nuclear weapons.
  • APM could make it so that nuclear-armed states can more cheaply, easily, and quickly build hundreds, thousands, tens of thousands, or even millions of nuclear weapons.
    • The enhanced ease and speed of doing this could undermine arms race stability.
      • This might also mean that even just the perception of APM being on the horizon could undermine strategic stability, even before APM has arrived.
    • And nuclear conflicts involving huge numbers of warheads are much more likely to cause an existential catastrophe, or otherwise increase existential risk, than nuclear conflicts involving the tens, hundreds, or thousands of nuclear weapons each nuclear-armed state currently has.
    • A counterpoint is that at least some nuclear-armed states already have the physical capacity to make many more nuclear warheads than they do (given enough time to build up nuclear manufacturing infrastructure), but having many more warheads doesn’t appear to be what these states are aiming for. This is some evidence that nuclear weapon production being easier to make might not result in huge arsenal sizes. But:
      • This is only some evidence.
      • And some nuclear-armed states (e.g., North Korea, Pakistan, and maybe India) may be developing nuclear weapons about as fast as they reasonably can, given their economies. As such, perhaps having access to APM would be likely to substantially affect at least their arsenal sizes.

(Note that we haven’t spent much time thinking about these points, nor seen much prior discussion of them, so this should all be taken as tentative and speculative.)

Comment by Will Aldred on What's the likelihood of irrecoverable civilizational collapse if 90% of the population dies? · 2022-08-07T20:34:03.865Z · EA · GW

This question feels similar in spirit to the one I asked a couple weeks ago, "Odds of recovering values after collapse?", so OP you might be interested in checking out that question and the responses to it.

Comment by Will Aldred on Odds of recovering values after collapse? · 2022-07-24T18:21:46.914Z · EA · GW

On Loki's Wagers: for an amusing example, see Yann LeCun's objection to AGI.

Comment by Will Aldred on Odds of recovering values after collapse? · 2022-07-24T18:21:21.659Z · EA · GW

My response to the question:

  • worse values - 50/100
  • similar values - 20/100
  • better values - 30/100
Comment by Will Aldred on Will Aldred's Shortform · 2022-07-22T15:28:29.542Z · EA · GW

We could spend all longtermist EA money, now.

(This is a sort-of sequel to my previous shortform.)

The EA funding pool is large, but not infinite. This statement is nothing to write home about, but I've noticed quite a few EAs I talk to view longtermist/x-risk EA funding as effectively infinite, the notion being that we're severely bottlenecked by good funding opportunities.

I think this might be erroneous.

Here are some areas that could plausibly absorb all EA funding, right now:

  • Biorisk
    • Better sequencing
    • Better surveillance
    • Developing and deploying PPE
    • Large-scale philanthropic response to a pandemic
  • AI risk
    • Policy spending (especially in the US)
    • AI chips
      • either scaling up chip production, or buying up top-of-the-range chips
    • Backing the lab(s) that we might want to get to TAI/AGI/HLMI/PASTA first

(Note: I'm definitely not saying we should fund these things, but I am pointing out that there are large funding opportunities out there which potentially meet the funding bar. For what it's worth, my true thinking is something closer to: "We should reserve most of our funding for shaping TAI come crunch time, and/or once we have better strategic clarity."

Note also: Perhaps some, or all, of these don't actually work, and perhaps there are many more examples I'm missing - I only spent ~3 mins brainstorming the above. I'm also pretty sure this wasn't a totally original brainstorm, and that I was remembering these examples from something on a similar topic I'd read somewhere, probably here on the Forum, though I can't recall which post it was.)

Comment by Will Aldred on [deleted post] 2022-07-21T22:38:40.070Z

Will's favourite excerpts, continued:

An LSD trip might seem to extend for days when in fact it was over in an afternoon.

;P

Comment by Will Aldred on Paper summary: Staking our future: deontic long-termism and the non-identity problem (Andreas Mogensen) · 2022-07-21T15:37:16.686Z · EA · GW

Hi GPI, love these paper summaries. Just an FYI that "Mogensen" appears to be spelled incorrectly ("Morgensen") on your website alongside this Staking our future summary: https://globalprioritiesinstitute.org/paper-summaries/

Comment by Will Aldred on Will Aldred's Shortform · 2022-07-21T08:29:28.695Z · EA · GW

TAI makes everything else less important.

One of my CERI fellows asked me to elaborate on a claim I made that was along the lines of,* "If AI timelines are shorter, then this makes (direct) nuclear risk work less important because the time during which nuclear weapons can wipe us out is shorter."

There's a general point here, I think, which isn't limited to nuclear risk. Namely, AI timelines being shorter not only makes AI risk more important, but makes everything else less important. Because the time during which the other thing (whether that be an asteroid, engineered pandemic, nuclear war, nanotech-caused grey goo scenario, etc.) matters as a human-triggered x-risk is shortened.

To give the nuclear risk example:^

If TAI is 50 years away, and per-year risk of nuclear conflict is 0.5%, then risk of nuclear conflict before TAI is 1-(0.995^50) = 22%

If TAI is 15 years away, and per-year risk of nuclear conflict is 0.5%, then risk of nuclear conflict before TAI is 1-(0.995^15) = 7%
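
Below is a minimal Python sketch of the same arithmetic, assuming (per the caveats at the end of this shortform) a constant, mutually independent per-year risk; the 0.5% figure is just the illustrative number used above:

    # Cumulative probability of at least one nuclear conflict before TAI,
    # given a constant, independent per-year risk.
    def risk_before_tai(per_year_risk, years_to_tai):
        return 1 - (1 - per_year_risk) ** years_to_tai

    for years in (50, 15):
        print(f"TAI in {years} years -> {risk_before_tai(0.005, years):.0%}")
    # TAI in 50 years -> 22%
    # TAI in 15 years -> 7%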

This does rely on the assumption that we'll be playing a different ball game after TAI/AGI/HLMI arrives (if not, then there's no particular reason to view TAI or similar as a cut-off point), but to me this different ball game assumption seems fair (see, e.g., Muehlhauser, 2019).

 

*My background thinking behind my claim here has been inspired by conversations with Michael Aird, though I'm not certain he'd agree with everything I've written in this shortform.

^A couple of not-that-important caveats:

  • "Before TAI" refers to the default arrival time of TAI if nuclear conflict does not happen.
  • The simple calculations I've performed assume mutual independence between nuclear-risk-in-given-year-x and nuclear-risk-in-given-year-y.
Comment by Will Aldred on [deleted post] 2022-07-14T12:24:02.226Z

P.S. I'm personally confused by the "Hail Mary" terminology...

... Wikipedia: "A Hail Mary pass is a very long forward pass in American football, typically made in desperation, with an exceptionally small chance of achieving a completion. Due to the difficulty of a completion with this pass, it makes reference to the Catholic 'Hail Mary' prayer for supernatural help."

However, Bostrom's Hail Mary approach was intended to have a high probability of success for achieving a small win (where small win = short pass, in football speak), i.e., the opposite of what Wikipedia describes.

Comment by Will Aldred on [deleted post] 2022-07-11T17:39:08.586Z

Notable exception(s) to the premise, "accelerated scientific and technological progress would be great": AI capabilities research. Also, synthetic biology and nanotechnology.

Nevertheless, though these are important exceptions, I feel that if one refutes cognitive enhancement based on these exceptions, then one has missed the point of Bostrom's core argument.

Comment by Will Aldred on Fanatical EAs should support very weird projects · 2022-06-30T15:14:02.588Z · EA · GW

There's a bit of a history of estimating how low the probabilities are that we can ignore.

See also "Exceeding Expectations: Stochastic Dominance as a General Decision Theory" (Tarsney, 2020); West (2021) summarises this Tarsney paper, which is pretty technical. A key sentence from West:

Tarsney argues that we should use an alternative decision criterion called stochastic dominance which agrees with EV in non-Pascallian situations, but, when combined with the above argument about uncertainty, disagrees with EV in Pascallian ones. 

Comment by Will Aldred on Four reasons I find AI safety emotionally compelling · 2022-06-29T22:30:28.380Z · EA · GW

I think I mainly agree with the other comments (from Devin Kalish and Miranda Zhang), but on net I'm still glad that this post exists. Many thanks for writing it :)

Specifically, I think positive/x-hope (as opposed to negative/x-risk) styled framings can be valuable. Because:

  1. There's some literature in behavioral psychology and in risk studies which says, roughly, that a significant fraction of people will kind of shut down and ignore the message when told about a threat. See, e.g., normalcy bias and ostrich effect.
  2. Mental contrasting has been shown to work (and the study replicates, as far as I can tell). Essentially, the research behind mental contrasting tells us that people tend to be strongly motivated by working toward a desired future.[1]
  3. Anecdotally, I've found that some people do just resonate strongly with "inspirational visions" and the idea of helping build something really cool.
  1. ^

    Technically, mental contrasting only works if the perceived chance of success is high. I'm not sure what percentage "high" in this context maps to, but there is an issue here for folks who have a non-low P(doom) via misaligned AI... To counter this, there is the option of resorting to the Dark Arts, self-deception in particular, but that discussion is beyond the scope of this comment. See the Dark Arts tag on LessWrong if interested.

Comment by Will Aldred on Limits to Legibility · 2022-06-29T21:32:31.062Z · EA · GW

I enjoyed reading this post, and I think I agree with your assessment:

It is pretty clear that some of the main cruxes of current disagreements about AI alignment are beyond the limits of legible reasoning. (The current limits, anyway.) 

(In addition to the Christiano-Yudkowsky example you give, one could also point to the Hanson-Yudkowsky AI-Foom Debate of 2008.)

In addition to "Epistemic Legibility" and "A Sketch of Good Communication," which you mention, I'd recommend "Public beliefs vs. Private beliefs" (Tyre, 2022) to others who enjoyed this post - Tyre explores a somewhat related theme.

Comment by Will Aldred on Let's not have a separate "community building" camp · 2022-06-29T14:32:09.679Z · EA · GW

I appreciate this post's high info-to-length ratio, and I'd be excited about more EAs taking actions along the lines of your corollaries at the end.

On another note, I find it interesting that for the amount of time community builders should be spending skilling up for direct work / doing direct work / skilling up in general:

  • You say roughly 20%
  • Emma Williamson says >50% (in her post from yesterday)