Cold Takes Audio is Holden Karnofsky reading posts from his new-ish blog. I'd highly recommend it.

michaela on A list of EA-related podcasts
Nonlinear Library has machine-read (but still pretty good) versions of a large and increasing number of posts from the EA Forum, LessWrong, and the Alignment Forum. See https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library. This is probably the podcast I've listened to most often since it came out, and will probably remain so for the indefinite future.

nunosempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?
For what it's worth, I don't disagree with you, though I do think that the steady state is a lower bound of value, not an upper bound.

jared_m on Be a Stoic and build better democracies: an Aussie-as take on x-risks (review essay)
The prominent Aussie-American economist Justin Wolfers at the University of Michigan has been promoting Leigh’s book this week.
Given Wolfers’ broad following in the U.S., he may be introducing more economists and others to the idea of existential risks here: https://mobile.twitter.com/JustinWolfers/status/1465074869750702096
Also thanks, Matt, for your write-up!

stefan_schubert on A case for nonviolent protest
> In response to your aside - I totally agree with the reputational risks, which are very significant. I didn't specify, but I always assumed the support would be behind-the-scenes rather than public.
Secretly funding organisations of this type could also have reputational risks - if it was revealed, then it might backfire and fuel conspiracy theories.
In general, I think transparency is a good heuristic in this context.
More generally, I think effective altruism should be cooperative, and I think that secretly funding organisations that are perceived as using dubious, uncooperative tactics pushes against that.

samuel-shadrach-1 on Wikipedia editing is important, tractable, and neglected
Sci-Hub is probably at least 1% as impactful as Wikipedia, and shouldn't take more than $1M to save forever.

elliotjdavies on Notes on the risks and benefits of kidney donation
> Who were the donors for this event going to be?
I was mostly thinking friends and family, but I was hoping the novelty factor could spread it to local communities.
> I don't know how legal "donate in anticipation of a kidney" is, either.
Wow, yeah, I have a feeling you'd get your name down in case law either way.

henrystanley on A case for nonviolent protest
Thanks for the thoughtful reply.
(I spotted that YouGov graph yesterday; agree that it's pretty compelling evidence for XR increasing concern about the environment.)

linch on Linch's Shortform
What are the best arguments for/against the hypothesis that (with ML) slightly superhuman unaligned systems can't recursively self-improve without solving large chunks of the alignment problem?
Like naively, the primary way that we make stronger ML agents is via training a new agent, and I expect this to be true up to the weakly superhuman regime (conditional upon us still doing ML).
Here's the toy example I'm thinking of, at the risk of anthropomorphizing too much:

Suppose I'm Clippy von Neumann, an ML-trained agent marginally smarter than all humans, but nowhere near stratospheric. I want to turn the universe into paperclips, and I'm worried that those pesky humans will get in my way (e.g. by creating a stronger AGI, which will probably have different goals because of the orthogonality thesis). I have several tools at my disposal:
I'm not sure where I'm going with this argument. It doesn't naively seem like AI risk is noticeably higher or lower if recursive self-improvement doesn't happen. We can still lose the lightcone either gradually, or via a specific AGI (or coalition of AGIs) getting a DSA (decisive strategic advantage) via "boring" means like mad science, taking over nukes, etc. But naively this looks like a pretty good argument against recursive self-improvement (again, conditional upon ML and only slightly superhuman systems), so I'd be interested in seeing whether there are good writeups or arguments against this position.

gidonkadosh on The Explanatory Obstacle of EA
Just thinking out loud: diving deeper into each misconception and providing concrete examples (or even "simulations" for practice) might be a good idea for an EA pitching workshop.