Predicting Polygenic Selection for IQ 2022-03-28T18:00:32.630Z
Reducing Nuclear Risk Through Improved US-China Relations 2022-03-21T11:50:39.165Z


Comment by Ryan Beck on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-23T13:27:32.258Z · EA · GW

I always interpreted the 10% as a goal, not a requirement for EA. That's a pretty high portion for a lot of people. I worry that making that sound like a cutoff makes EA seem even more inaccessible.

The way I had interpreted the community message was more like "an EA is someone that thinks about where their giving would be most effective or spends time working on the world's most pressing problems."

Comment by Ryan Beck on The value of content density · 2022-07-20T17:55:28.480Z · EA · GW

Thanks for writing this. As a new-ish user of the forum, it's been frustrating trying to find previous posts that address questions I have or things I want to learn more about, only to find sprawling or multi-part posts with half-hour or longer read times that may or may not address the specific thing I'm interested in.

Also, you mentioned jargon, and I think there's room for a lot of improvement there; it seems to me there's more jargon than is justified, and it made the forum daunting for me. This previous post has some good recommendations, and in my opinion it would be valuable for more people to try to simplify their language where possible.

Comment by Ryan Beck on Forecasting Newsletter: June 2022 · 2022-07-12T13:22:54.022Z · EA · GW

Great post as usual.

It looks like your Putin's health link goes to the wrong forecast.

Comment by Ryan Beck on Preventing a US-China war as a policy priority · 2022-06-23T02:34:02.704Z · EA · GW

I've found this short article useful in explaining the case for it. Basically it says that a guarantee of defense could embolden Taiwan to more aggressively pursue independence which could provoke China, while committing to not interfere could embolden China to invade. The US benefits from better relations with both countries if it walks a line between them and it may be better for peace between them if Taiwan has to tread carefully and China expects a high chance of the US fighting off an invasion of Taiwan.

Comment by Ryan Beck on Preventing a US-China war as a policy priority · 2022-06-22T19:13:32.039Z · EA · GW

Thanks for posting this, I'm glad to see more discussion of the issue and you've laid it out very nicely.

In the interest of thinking seriously about this potential deadly conflict, could you explain why you lean toward abandoning Taiwanese independence if war appears likely? Aside from principle-based stances about protecting potential allies and the right of countries to continue governing themselves, I think my main worry is that giving in to bullying seems like it would incentivize future bullying. If the US and other nations declare that they no longer care about Taiwan, what stops superpowers in the future from using military aggression to stake a claim to some territory they had previously held at some point in the past few centuries?

On a related note, this same kind of approach would have suggested Ukraine give in to Russian demands and possibly even offer up the Donbas, which would likely have saved lives in 2022, but is it reasonable to expect that Russia would have been satisfied with that negotiation 5 or 10 years down the road?

The argument for abandoning Taiwan makes sense: ~25 million people's independence may not be worth the chance of billions being killed in a nuclear exchange. But my perception of China and Russia is that there's no set of demands where you can give them what they want at the moment and then they're satisfied; it seems more likely that new points of contention keep cropping up over time whether you give in to their demands or not.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-06-22T18:47:15.086Z · EA · GW

Your comment is 3 months old, but somehow I missed it back when it was posted and am just now seeing it, so I just wanted to say these are all good points, particularly about cooperation on other issues like your climate example!

Comment by Ryan Beck on Who's hiring? (May-September 2022) · 2022-06-10T14:26:38.500Z · EA · GW

Fantastic! Thank you!

Comment by Ryan Beck on Announcing a contest: EA Criticism and Red Teaming · 2022-06-07T14:39:48.921Z · EA · GW

Makes sense, thanks!

Comment by Ryan Beck on Announcing a contest: EA Criticism and Red Teaming · 2022-06-07T11:25:04.104Z · EA · GW

It's possible I missed it but I didn't see anything stating whether multiple submissions from one author are allowed, I assume they are though?

Comment by Ryan Beck on Who's hiring? (May-September 2022) · 2022-06-02T12:20:20.040Z · EA · GW

Very cool, thank you!

Comment by Ryan Beck on Who's hiring? (May-September 2022) · 2022-06-02T02:12:19.515Z · EA · GW

Is there a way to sort answers by newest? I'm not seeing that option. It would be useful for finding new answers I haven't seen yet.

Comment by Ryan Beck on Who wants to be hired? (May-September 2022) · 2022-05-27T20:46:41.169Z · EA · GW

Update July 19, 2022 - I've accepted an employment offer and am currently not seeking a new role

Location: Council Bluffs, Iowa

Remote: Yes

Willing to relocate: No





  • Cause Areas
    • Open to just about anything. I lean a little more toward global health and wellbeing but I think longtermist focuses are important too and would be happy to contribute to either.
  • Availability
    • I'd most likely be available within about two months of receiving an offer.
  • Looking for
    • I'm primarily looking for full time work, and I think I would enjoy doing anything that fits well with my skills, though I am looking mainly for something different from engineering.
    • I think I'd be a good fit for and enjoy roles that involve research, writing, and/or forecasting.
    • I'd also enjoy roles that involve coding or building tools, though I don't have a lot of formal experience with those things so I might not be as good of a fit there.
  • Experience with EA
    • I've been aware of and interested in EA for a few years, though I've only started looking into it more closely in the past few months. I'd describe myself as someone who was a lurker on the periphery of EA and is now trying to get more involved.

If anyone has any feedback or suggestions for me I'd appreciate it as well. Feel free to message me or submit feedback anonymously at this link.

Edited on June 1, 2022 to add the notes section.

Comment by Ryan Beck on [deleted post] 2022-05-03T05:40:18.970Z

I'm not sure how your first point relates to what I was saying in this post; but, I'll take a guess.

Sorry, what I said wasn't very clear. Attempting to rephrase, I was thinking more along the lines of what the possible future for AI might look like if there were no EA interventions in the AI space. I haven't seen much discussion of the possible downsides there (for example, prioritizing alignment slowing down AI research, delaying both AI advancement and the good things it would bring). But this was a less-than-half-baked idea; thinking about it some more, I'm having trouble coming up with scenarios where that could produce a lower expected utility.

It doesn't matter what outcome you assign zero value to as long as the relative values are the same since if a utility function is an affine function of another utility function then they produce equivalent decisions.

Thanks, I follow this now and see what you mean.
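Spelling out the affine-invariance point quoted above (this is standard expected-utility reasoning, not something specific to this thread): if one utility function is a positive affine transformation of another, linearity of expectation means the two rank every pair of lotteries identically, so the choice of which outcome gets zero utility doesn't change any decisions.

```latex
\[
V(x) = a\,U(x) + b,\quad a > 0
\;\Longrightarrow\;
\mathbb{E}[V(X)] = a\,\mathbb{E}[U(X)] + b
\;\Longrightarrow\;
\mathbb{E}[V(X)] > \mathbb{E}[V(Y)] \iff \mathbb{E}[U(X)] > \mathbb{E}[U(Y)].
\]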

Comment by Ryan Beck on [deleted post] 2022-05-03T02:33:44.371Z

Did your outcomes 2 and 3 get mixed up at some point? I feel like the evaluations don't align with the initial descriptions of those, but maybe I'm misunderstanding.

Thanks for writing this though; it's something I've been thinking a little about as I try to understand longtermism better. It makes sense to be risk-averse with existential risk, but at the same time I have a hard time understanding some of the more extreme takes. My wild guess would be that AI has a significantly higher chance of improving the well-being of humanity than of causing extinction. Like I said, care is warranted with existential risk, but at the same time slowing AI development delays your positive outcomes 2 and 3, and I haven't seen much discussion about the downsides of delaying.

Also, I'm not sure about outcome 1 having zero utility. Maybe that's standard notation, but it seems unintuitive to me, like it kind of buries the downsides of extinction risk. To me it would seem more natural as a negative utility, relative to the positive utility currently existing in the world.

Comment by Ryan Beck on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-27T14:50:01.922Z · EA · GW

It's a common misconception that those who want to mitigate AI risk think there's a high chance AI wipes out humanity this century. But opinions vary and proponents of mitigating AI risk may still think the likelihood is low. Crowd forecasts have placed the probability of a catastrophe caused by AI as around 5% this century, and extinction caused by AI as around 2.5% this century. But even these low probabilities are worth trying to reduce when what's at stake is millions or billions of lives. How willing would you be to take a pill at random from a pile of 100 if you knew 5 were poison? And the risk is higher for timeframes beyond this century.
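The arithmetic behind the pill analogy and the stakes above can be made explicit in a short sketch (the world population figure is my own round-number assumption, not from the comment):

```python
# Rough expected-value arithmetic for the AI-risk framing above.
# The crowd forecasts quoted put AI-caused extinction this century at ~2.5%;
# the world population figure below is an assumed round number.

p_poison = 5 / 100            # pill analogy: 5 poison pills out of 100
p_extinction = 0.025          # ~2.5% crowd forecast of AI-caused extinction
population = 8_000_000_000    # assumed approximate world population

expected_lives_lost = p_extinction * population

print(f"Chance of a poison pill: {p_poison:.0%}")
print(f"Expected lives lost at 2.5% extinction risk: {expected_lives_lost:,.0f}")
```

Even at a low probability, the expected loss is in the hundreds of millions of lives, which is the intuition the pill analogy is meant to pump.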

I think the above could be improved with forecasts of extinction risk from prominent AI safety proponents like Yudkowsky and Christiano if they've made them but I'm not aware of whether they have or not.

Comment by Ryan Beck on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T11:13:42.681Z · EA · GW

That's a good point, I agree. None of my suggestions really fit very well, it's hard to think of a descriptive name that could be easily used conversationally.

Comment by Ryan Beck on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-19T16:50:37.038Z · EA · GW

It seems like prefixing with "not" still runs into defining based on disagreement, where I would guess people who lean that way would rather be named for what they're prioritizing as opposed to what they aren't. I came up with a few (probably bad) ideas along that vein:

  • Immediatists (apparently not a made up word according to Merriam-Webster)
  • Contemporary altruists
  • Effective immediately

I'm relatively new so take my opinion with a big grain of salt. Maybe "not longtermist" is fine with most.

Comment by Ryan Beck on Impactful Forecasting Prize Results and Reflections · 2022-03-31T18:34:24.543Z · EA · GW

These are good points and helpful, thanks! I agree I wasn't clear about viewing the scenarios exclusively in the initial comment; I think I made that a little clearer in the follow-up.

when I read 80% to reach saturation at 40% predictive power I read this as "capping out at around 40%" which would only leave a maximum of 20% for scenarios with much greater than 40%?

Ah, I think I see how that's confusing. My use of the term "saturation" probably confuses things too much. My understanding is that saturation is the likely maximum that could be explained with current approaches, so my forecast was an 80% chance we get to the 40% "saturation" level. But I think there's a decent chance our technology and understanding advance so that more than the saturation can be explained, and I gave a 30% chance that we reach 80% predictive power.

That's a good point about iterated embryo selection, I totally neglected that. My initial thought is it would probably overlap a lot with the scenarios I used, but I should have given that more thought and discussed it in my comment.

Comment by Ryan Beck on Impactful Forecasting Prize Results and Reflections · 2022-03-29T19:18:00.628Z · EA · GW

No problem!

Also if you're interested in elaborating about why my scenarios were unintuitive I'd appreciate the feedback, but if not no worries!

Comment by Ryan Beck on Impactful Forecasting Prize Results and Reflections · 2022-03-29T17:12:37.530Z · EA · GW

This was a cool contest, thanks for running it! In my view there's a lot of value in doing this. Doing a deep dive into polygenic selection for IQ was something I had wanted to do for quite a while and your contest motivated me to finally sit down and actually do it and to write it up in a way that would be potentially useful to others.

I think your initial criteria of how much a writeup changed your minds may have played a role in fewer than expected entries as well. Your forecasts on the set of questions seemed very reasonable and my own forecasts were pretty similar on the ones I had forecasted, so I didn't feel that I had much to contribute in terms of the contest criteria for most of them.

Hopefully that's helpful feedback to you or anyone else looking to run a contest like this in the future!

Comment by Ryan Beck on Predicting Polygenic Selection for IQ · 2022-03-28T20:05:25.287Z · EA · GW

That's really interesting, thanks! I wonder why India is so supportive of it in comparison to other countries.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T13:39:04.338Z · EA · GW

Even if the ~300 new DF-41 silos discovered last year are each armed with only 3 warheads (the missile can carry ~10 max), and no other silos are built/discovered, that's still 900 warheads on top of the ~400 already in service.

I'm not well-versed in this area but reading through the Chinese nuclear notebook from November 2021 they seem kind of skeptical of claims like this and point out that China could also be intending the silos to be a "shell game". Quoting from the notebook:

And in November 2021, the Pentagon’s annual report to Congress projects that China might have 700 deliverable warheads by 2027, and possibly as many as 1,000 by 2030 (US Defense Department 2021, 90).

Such increases would require the deployment of a significant number of additional launchers, including MIRV-equipped missiles. It seems likely that the new projection assumes that China plans to deploy large numbers of MIRV’ed missiles in the new missile silo fields that are currently under construction. But there are several unknown factors. First, how many of the new silos will be loaded? China might build more silos than missiles to create a “shell game” that would make it harder for an adversary to target the missiles. Second, how many of the missiles will be MIRV’ed, and with how many warheads? Many non-official sources attribute very high numbers of warheads to MIRVed missiles (for example, 10 warheads per DF-41), but the actual number will likely be lower to maximize the range of the missile (perhaps three to five each, perhaps less). This is because we believe that the main purpose of the massive silo construction program is to safeguard China’s retaliatory capability against a surprise first-strike. And the main purpose of the MIRV program is probably to ensure penetration of US missile defenses, rather than to maximize the warhead loading of the Chinese missile force. As the United States strengthens its offensive forces and missile defenses, China will likely further modify its nuclear posture to ensure the credibility of its retaliatory strike force, including deploying hypersonic glide vehicles.

Would you disagree with that assessment?

I know a lot of ways to reduce China-US nuclear risk even without non-starters to the pro-democracy crowd (e.g. giving up defence commitments to certain US allies). There seems to be some major civilizational inadequacy in this area; i.e. obvious ways to have a major reduction on the risk that just nobody's bothered to implement. I don't think economic tensions/trade wars are very relevant to nuclear risk compared to more important factors in the grand scheme of things to be frank.

I agree that the trade war issue is probably low impact, but I focused on it because it has few downsides and potential upsides for nuclear risk. What ways to reduce China-US nuclear risk do you suggest? From what I've seen so far (which is admittedly very little) it seems like there are very few feasible options to reduce nuclear risk with China, and most available options involve a lot of unknowns with regard to implementation and effectiveness and potentially have significant downsides.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T03:00:41.338Z · EA · GW

Yeah, definitely on the same page then! I agree with what you said there, with the possible exception or caveat that I'm skeptical of improvements on the Taiwan issue. If you find or know of any persuasive abyss-staring arguments on this topic (or write them yourself), I'd appreciate it if you shared them with me; I'd be happy to be wrong in my skepticism and would like to learn more about any promising options.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T02:38:37.306Z · EA · GW

To be clear I'm not arguing that people shouldn't think about it or try to solve it. I'm definitely in favor of more discussion on that topic and I'd love to read some high effort analysis from an EA perspective.

If I'm understanding correctly the main point you're making is that I probably shouldn't have said this:

There is little room for improvement here...

Which in that case that's a fair critique. I'm not well-informed enough to know the options here and their advantages and risks in great detail, so my perception that there's not much room for improvement could be way off base.

I'd summarize my position as having the perception that the Taiwan issue is a hard question that I'm not equipped to solve, and I'm skeptical that there are significant improvements available there, so instead I focused on a topic that I view as low-hanging fruit. I was probably wrong to characterize the Taiwan issue as futile or unimprovable; I should instead have characterized it as a highly complex issue that I'm not equipped to do justice to and that I perceive as having substantial downsides to any shift in policy.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T01:24:37.511Z · EA · GW

This is a good point, I completely agree that the trade war is of small importance relative to things like relations with Taiwan. My reason for focusing on the trade war though is because trade deescalation would have very few downsides and would probably be a substantial positive all on its own before even considering the potential positive effects it could have on relations with China and possibly nuclear risk.

To me the same can't be said for the Taiwan issue. The optimal policy here is far from clear to me. Strategic ambiguity is our intentional policy, and I'm not sure clarifying our stance would be preferable to that. Committing to defend Taiwan could allow Taiwan to do more provocative things, which could lead to war. Declaring we will not defend Taiwan could empower China to invade. I agree it's a significant issue that should be carefully considered, but it's also an issue that I'm sure international relations experts have spilled huge amounts of ink over so I'm not sure if there are any clearly superior policy improvements available in this area.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-22T12:22:46.862Z · EA · GW

That's a good question, I've thought about this some before and while it's kind of messy I think the general gist of my thoughts is something like this (framed from a US perspective but I think it generalizes to most countries):

Tariffs should be avoided or minimized wherever possible because they likely cost US citizens much more than they benefit them. However, tariffs and sanctions can be important tools when a country does something very offensive, particularly when punitive measures are applied in cooperation with allies. Tariffs and sanctions should be targeted toward the offensive behavior and scale with the importance of the offense.

So my rough framework isn't that we should always avoid tariffs and sanctions, but that they should be limited, targeted to serve a purpose, and be in conjunction with our allies where possible. I think sanctions on China over the treatment of Uyghurs are justified and from what I've heard these have been targeted at the Xinjiang region and at Chinese entities involved.

Similarly, the Russian invasion warrants severe consequences, and sanctions are more effective here because they've been imposed in conjunction with allies. If China were to invade Taiwan or threaten to do so a similar response would be justified.

The big difference to me with the trade war was that it was based on a misguided attempt to fix our trade imbalance, which my impression is that most economists don't really see as a problem. The idea also seemed to be to use tariffs as a bargaining chip to negotiate better trade practices such as IP protection. But these tariffs were applied unilaterally, don't appear to be targeted at all, and never seemed likely to accomplish those goals. And in the meantime they've made things more expensive for Americans and have probably damaged relations with China, with nothing to show for it.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-21T22:36:40.309Z · EA · GW

That makes sense, I agree it's better to have more direct sources.

Comment by Ryan Beck on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-21T19:31:53.169Z · EA · GW

I don't know this area at all, but here is data from one review paper I found.

I didn't have access to your link but I found another version of it here.

To be honest I'm not familiar with the direct evidence either so I'm mostly relying on secondhand impressions and general descriptions of tariff burdens falling on consumers. I searched around briefly just now and found this paper (also cited in the paper you linked as Amiti et al. (2020b)) which reports:

Using another year of data including significant escalations in the trade war, we find that U.S. tariffs continue to be almost entirely borne by U.S. firms and consumers.

However it's not clear to me what the relationship is between tariff burden and the welfare loss estimates you mentioned in your comment. It seems to me like they could be measuring different things.