EA forum content might be declining in quality. Here are some possible mechanisms. 2022-09-24T22:24:42.199Z
Most problems fall within a 100x tractability range (under certain assumptions) 2022-05-04T00:06:58.744Z
How dath ilan coordinates around solving AI alignment 2022-04-14T01:53:44.839Z
The case for infant outreach 2022-04-01T04:25:54.116Z
Can we simulate human evolution to create a somewhat aligned AGI? 2022-03-29T01:23:06.970Z
Effectiveness is a Conjunction of Multipliers 2022-03-25T18:44:21.638Z
Penn EA Residency Takeaways 2021-11-12T09:34:09.904Z
"Hinge of History" Refuted 2021-04-01T07:00:03.864Z
Thomas Kwa's Shortform 2020-09-23T19:25:09.159Z


Comment by Thomas Kwa (tkwa) on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-30T18:03:21.234Z · EA · GW

"most prestigious" seems like unfair wording, prestigious != people that CEA most wants to be at an EAG

Comment by Thomas Kwa (tkwa) on Estimating the Current and Future Number of AI Safety Researchers · 2022-09-30T03:24:09.641Z · EA · GW

I think Alignment Forum double-counts researchers as most of them are not independent, especially if you count MATS separately (which I think had about 6 mentors and 31 fellows this summer). Looking at the top posts this year:

Paul Christiano works at ARC. Yudkowsky works at MIRI. janus is from Conjecture, Krakovna is at DeepMind, I don't know about nostalgebraist, Nanda is independent, Larsen works at MATS, Ajeya is at OpenPhil. So 4-5 of the top authors are double-counted.

Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T21:38:37.723Z · EA · GW

Poll: EA Forum content is declining in quality (specifically, the content that appears on the frontpage is lower quality than frontpage content from a year ago).

(Use agree/disagree voting to vote on this proposition. This is not a perfect poll but it gives some indication)

Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T21:15:20.972Z · EA · GW

Edit: I forgot to add that OP could have phrased this differently, saying that people with productive things to say (which I assume is what they meant by "better takes") would be busier doing productive work and have less time to post here. I don't necessarily buy this, but let's roll with it. Instead, they chose to focus on EA orgs in particular.

The causal reason I worded it that way is that I wrote this list down very quickly while in an office with people who work at EA orgs and would write higher-quality posts than average, so that mechanism was salient, even though it's not the only way people end up with better things to do.

I also want to point out that "people who work at EA orgs" doesn't imply infinite conformity. It just means they fit in at some role at some organization that is trying to maximize good and/or is funded by OpenPhil/FTX (who fund lots of things, including lots of criticism). I frequently hear minority opinions like these:

  • Biosecurity is more pressing than alignment due to tractability
  • Chickens are not conscious and can't suffer
  • The best way to do alignment research is to develop a videogame as a testbed for multi-agent coordination problems
  • Alignment research is not as good as people think due to s-risk from near misses
  • Instead of trying to find AI safety talent at elite universities, we should go to remote villages in India
Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T14:37:05.579Z · EA · GW

"Quality of person" sounds bad to me too. I also find it weird that someone already gave the same feedback on the shortform and the OP didn't change it.

Thanks for pointing this out. I just edited the wording.

Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T13:11:14.209Z · EA · GW

I agree that this list is "lazy", and I'd be excited about someone doing a better analysis.

Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T12:36:40.279Z · EA · GW

Of the 15 people other than me who commented on the shortform, I only remember ever meeting 4. I would guess that for shortforms in general most of the attention comes from the feed.

Comment by Thomas Kwa (tkwa) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-24T22:25:32.776Z · EA · GW

Pablo made a survey for the first 8 points, and people seem to agree most with 1 (newer EAs have worse takes on average) and 5 (meta/community stuff gets more attention), with mixed opinions about the rest.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-09-24T22:09:04.028Z · EA · GW

I wouldn't be quick to dismiss (3-5) and (7) as factors we should pay attention to. These sorts of memetic pressures are present in many communities, and yet communities vary dramatically in quality. This is because things like (3-5) and (7) can be modulated by other facts about the community:

  • How intrinsically susceptible are people to clickbait?
  • Have they been taught things like politics is the mind-killer and the dangers of platforms where controversial ideas outcompete broadly good ones?
  • What is the variance in how busy people are?
  • To what degree do people feel like they can weigh in on meta? To what degree can they weigh in on cause areas that are not their own?
  • Are the people on EA Forum mostly trying for impact, or to feel like they're part of a community (including instrumentally towards impact)?

So even if they cannot be solely responsible for changes, they could have been necessary to produce any declines in quality we've observed, and they could be important for the future.

Comment by Thomas Kwa (tkwa) on EA Forum feature suggestion thread · 2022-09-24T21:31:28.108Z · EA · GW

Promoting shortforms to top-level posts, preserving replies. I wanted to do that with this, because reposting it as a top-level post wouldn't preserve existing discussion.

Comment by Thomas Kwa (tkwa) on The Next EA Global Should Have Safe Air · 2022-09-21T17:54:31.330Z · EA · GW

Having to wear masks would reduce the value of EAG by >20% for me, mostly due to making 1-1s worse.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-09-16T18:34:42.286Z · EA · GW


Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-09-15T21:06:35.169Z · EA · GW

EA forum content might be declining in quality. Here are some possible mechanisms:

  1. Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones
  2. Newer EAs are too junior to have good takes yet; it's just that the growth rate has increased, so there's a higher proportion of them.
  3. People who have better thoughts get hired at EA orgs and are too busy to post. There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.
  4. Controversial content, rather than good content, gets the most engagement.
  5. Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
  6. Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest, and 158 for the Cause Exploration Prizes, which combined is more top-level posts than the entire forum in any month except August 2022.
  7. EA Forum is turning into a place primarily optimized for people to feel welcome and talk about EA, rather than impact.
  8. All of this is exacerbated as the most careful and rational thinkers flee somewhere else, expecting that they won't get good-quality engagement on EA Forum.
Comment by Thomas Kwa (tkwa) on What are some bottlenecks in AI safety? · 2022-09-03T19:00:32.590Z · EA · GW
  • Mentorship time of senior researchers
  • Exercises, a textbook, seminars, etc. to upskill junior researchers without using so much of senior researchers' time
  • A shared ontology for researchers, so they can communicate more easily
  • Percentage of ML community that takes safety seriously (one attempt)
Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-09-01T20:46:48.917Z · EA · GW

I think there are currently too few infosec people and people trying to become billionaires.

  • Infosec: this seems really helpful for AI safety and biosecurity in a lot of worlds, and I'm guessing it's just much less sexy / popular than technical research. Maybe I'm wrong about the number of people here, but from attendance at an EAGxSF event it didn't seem like we would be saturated.
  • Entrepreneurship: I think the basic argument for making tens of billions of dollars still holds. Just because many longtermist orgs are well-funded now doesn't mean they will be in the future (crypto and other risk), and there might be ways to spend hundreds of billions of dollars on other things. My understanding is that even at the crypto peak there were <50 EAs trying to become billionaires, and there are even fewer now, which seems like a mistake.

I've thought about both of these paths myself, and I think they're not quite as good as technical alignment research for me, but I can't rule out that I'm just being a coward.

Comment by Thomas Kwa (tkwa) on The top X-factor EA neglects: destabilization of the United States · 2022-08-31T23:50:20.469Z · EA · GW

The most recent empirical research on civil conflicts suggests the United States has a 4% annual risk of falling into a civil conflict.

I think this is misleading, because...

Barbara Walter, author of How Civil Wars Start, makes an empirical case that the United States is vulnerable to a civil war. Her co-authored research found a 4% annual risk of civil conflict in anocracies with ethnic mobilization. Walter makes the point that the United States is on the bubble of an anocracy using the Center for Systemic Peace’s polity scale.[6]

Being at +5 on a -10 to +10 scale does not mean that the -5 to +5 category is an appropriate reference class. Also, this is very different from the Metaculus community median of 5% for "civil war" before 2031, defined as below.

  • The Insurrection Act is invoked.
  • While the Insurrection Act is invoked, there are at least 500 deaths in a 6 month period as a result of armed conflicts between US residents and a branch of the US military, national guard, or in conflicts between/among such branches.
  • All of these deaths occur in any US state(s) (including DC).
Comment by Thomas Kwa (tkwa) on Preventing an AI-related catastrophe - Problem profile · 2022-08-29T21:57:24.470Z · EA · GW

Last I heard Nate Soares (at MIRI) has an all-things-considered probability around 80%, and Evan Hubinger recently gave ~80% too. Nate's reasoning is here, and he would probably also endorse this list of challenges.

I think you don't really have to have any crazy beliefs to have probabilities above 50%, just

  • higher confidence in the core arguments being correct, such that you think there are concrete problems that probably need to be solved to avoid AI takeover
  • a prior that is not overwhelmingly low, despite some previously feared catastrophes, like overpopulation and nuclear war, having been avoided. The world is allowed to kill you.
  • observation that not much progress has been made on the problem so far, and belief that this will not massively speed up as we get closer to AGI

Believing there are multiple independent core problems we don't have traction on, or that some problems are likely to take serial time or multiple attempts that we don't have, can drive this probability higher.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-08-29T20:38:59.495Z · EA · GW

A lot of EAs I know have had strong intuitions towards scope sensitivity, but I also remember having strong intuitions towards moral obligation, e.g. I remember being slightly angry at Michael Phelps' first retirement, thinking I would never do this and that top athletes should have a duty to maximize their excellence over their career. Curious how common this is.

Comment by Thomas Kwa (tkwa) on AI Safety For Dummies (Like Me) · 2022-08-24T21:40:16.332Z · EA · GW

I downvoted this because I noticed a few errors in the technical areas, but I'm potentially excited about posts about AI safety written at this level of accessibility.

Comment by Thomas Kwa (tkwa) on EAs underestimate uncertainty in cause prioritisation · 2022-08-24T21:36:00.138Z · EA · GW

Agree that this methodology of point estimates can be overconfident in what the top causes are, but I'm not sure if that's their methodology or if they're using expected values where they should. Probably someone from 80k should clarify, if 80k still believes in their ranking enough to think anyone should use it?

This means we need to take more seriously the idea that the true top causes are different to those suggested by 80K's model.

Also agree with this sentence.

My issue is that the summary claims "probabilities derived from belief which aren't based on empirical evidence [...] means that the optimal distribution of career focuses for engaged EAs should be less concentrated amongst a small number of "top" cause areas." This is a claim that we should hold more uncertainty than 80k's cause prioritization does.

When someone has a model, you can't always say we should be less confident than their model without knowing their methodology, even if their model is "probabilities derived from belief which aren't based on empirical evidence". Otherwise you can build a model where their model is right 80% of the time, and things are different in some random way 20% of the time, and then someone else takes your model and does the same thing, and this continues infinitely until your beliefs are just the uniform distribution over everything. So I maintain that the summary should mention something about using point estimates inappropriately, or missing some kinds of uncertainty; otherwise it's saying something that's not true in general.
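To make the regress concrete, here is a minimal sketch (hypothetical numbers): repeatedly applying "their model is right 80% of the time, otherwise things are random in some way" drives any belief distribution toward uniform, since uniform is the fixed point of that update.

```python
import numpy as np

# Minimal sketch of the regress (hypothetical numbers): repeatedly apply
# "their model is right 80% of the time, otherwise things are random".
# The only fixed point of p -> 0.8*p + 0.2*uniform is uniform itself.
p = np.array([0.7, 0.2, 0.05, 0.05])      # some initially confident beliefs
uniform = np.full_like(p, 1 / len(p))

for _ in range(50):
    p = 0.8 * p + 0.2 * uniform

print(np.round(p, 3))                      # -> [0.25 0.25 0.25 0.25]
```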

Comment by Thomas Kwa (tkwa) on EAs underestimate uncertainty in cause prioritisation · 2022-08-23T20:29:59.990Z · EA · GW

A useful thought experiment is to imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement’s cause priorities would be across these 100 different timelines?

This is a useful exercise. I think that in many of these timelines, EA fails to take AI risk seriously (in our timeline, this only happened in the last few years) and this is a big loss. Also probably in a lot of timelines, the relative influence of rationality, transhumanism, philosophy, philanthropy, policy, etc. as well as existing popular movements like animal rights, social justice, etc. is pretty different. This would be good to the extent these movements bring in more knowledge, good epistemics, operational competence, and bad to the extent they either (a) bring in bad epistemics, or (b) cause EAs to fail to maximize due to preconceptions.

My model is something like this: to rank animal welfare as important, you have to have enough either utilitarian philosophers or animal rights activists to get "factory farming might be a moral atrocity" into the EA information bubble, and then it's up to the epistemics of decision-makers and individuals making career decisions. A successful movement should be able to compensate for some founder effects, cultural biases, etc. just by thinking well enough (to the extent that these challenges are epistemic challenges rather than values differences between people).

I do feel a bit weird about saying "where effective altruism emerged" as it sort of implies communities called "effective altruism" are the important ones, whereas I think the ones that focus on doing good and have large epistemic advantages over the rest of civilization are the important ones.

Comment by Thomas Kwa (tkwa) on EAs underestimate uncertainty in cause prioritisation · 2022-08-23T19:36:51.735Z · EA · GW

I mostly agree with this post, but I take issue with the summary:

EA cause prioritisation frameworks use probabilities derived from belief which aren't based on empirical evidence. This means EA cause prioritisation is highly uncertain, which means that the optimal distribution of career focuses for engaged EAs should be less concentrated amongst a small number of "top" cause areas.

As a Bayesian, you have to assign some subjective probabilities to things, and sometimes there just isn't empirical evidence. To argue that e.g. 80k doesn't have enough uncertainty (even if you have reasons to believe that high uncertainty is warranted in general), it's necessary to argue that their methodology is not only subject to irrationality, it's subject to bias in the direction you argue (overconfidence rather than underconfidence in top causes).

E.g. if their numbers for I, T, and 1/N are based on point estimates for each number rather than expected values, then you underrate causes that have a small chance to be big / highly tractable. (I don't know whether 80k actually has this bias.)
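As a toy illustration of that bias (hypothetical numbers, not 80k's actual figures): for a cause with a small chance of being enormous, a median-style point estimate misses the tail that dominates the expected value.

```python
import statistics

# Hypothetical cause: 90% chance its scale is 1, 10% chance it is 1000.
outcomes = [1] * 9 + [1000]

point_estimate = statistics.median(outcomes)   # "typical" case: 1
expected_value = statistics.mean(outcomes)     # dominated by the tail: 100.9

print(point_estimate, expected_value)
```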

The main body of the post does address this briefly, but I would want more evidence, and I think the summary does not have valid standalone reasoning.

To take the concern that “some of them could easily be wrong by a couple of points” literally, this could mean that factory farming could easily be on par with AI, or land use reform could easily be more important than biosecurity.

Minor point, but land use reform wouldn't be more important (in the sense of higher scale) than biosecurity, since they differ by 3 points (30x) in overall score but 6 points (1000x) in scale.
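For reference, the arithmetic here assumes each point on 80k's scale is a factor of about 10^0.5 (~3.16x) — my reading of the figures in this comment, not something taken from 80k's documentation:

```python
# Assuming one point on the scale is a factor of 10**0.5 (~3.16x):
point_factor = 10 ** 0.5

print(round(point_factor ** 3, 1))   # 3-point gap in overall score -> ~31.6x
print(round(point_factor ** 6, 1))   # 6-point gap in scale -> 1000.0x
```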

Comment by Thomas Kwa (tkwa) on Are you allocated optimally in your own estimation? · 2022-08-22T20:59:55.157Z · EA · GW

Strong upvote for asking people if they're doing the best thing they could be doing, even if the funding angle is a bit specific.

Comment by Thomas Kwa (tkwa) on Crypto markets, EA funding and optics · 2022-08-19T20:59:03.885Z · EA · GW

I'm guessing this isn't a huge deal, they just have to stop saying either (a) false things about customer deposits being FDIC-insured, or (b) true statements about customer deposits being FDIC-insured without specifying which bank, both of which the FDIC seems to prohibit.

Comment by Thomas Kwa (tkwa) on Should longtermists focus more on climate resilience? · 2022-08-08T21:47:23.625Z · EA · GW

I thought about this for ~4 hours. My current position is that a lot of these claims seem dubious (I doubt many of them would stand up to Fermi estimates), but several people should be working in political stabilization efforts, and it makes sense for at least one of them to be thinking about climate, whether or not this is framed as "climate resilience". The positive components of the vibe of this post reminded me of SBF's goals, putting the world in a broadly better place to deal with x-risks.

In particular, I'm skeptical of the pathway from (1) climate change -> (2) global extremism and instability -> (3) lethal autonomous weapon development -> (4) AI x-risk.

First note that this pathway involves three separate causal steps, which is pretty indirect. Looking at each step individually:

  • (1) -> (2): I think experts are mixed on whether resource shortages cause war of the type that can lead to (3). War is a failure of bargaining, so anything that increases war must either shift the game theory or cause decision-makers to become more irrational, not just shrink the pool of available resources. Quoting from the 80k podcast episode with economist / political scientist Chris Blattman:

Rob Wiblin: Yeah. Some other drivers of war that I hear people talk about that you’re skeptical of include climate change and water scarcity. Can you talk about why it is that you’re skeptical of this idea of water wars?

Chris Blattman: So I think scarce water, any scarce resource, is something which we’re going to compete over. If there’s a little bit, we’ll compete over it. If there’s a lot of it, we’ll still probably find a way to compete over it. And the competition is still going to be costly. So we’re always going to strenuously compete. It’ll be hostile, it’ll be bitter, but it shouldn’t be violent. And the fact that water becomes more scarce — like any resource that becomes more scarce — doesn’t take away from the fact that it’s still costly to fight over it. There’s always room for that deal. The fact that our water is shrinking in some places, we have to be skeptical. So what is actually causing this? And then empirically, I think when people take a good look at this and they actually look at all these counterfactual cases where there’s water and war didn’t break out, we just don’t see that water scarcity is a persistent driver of war.

Chris Blattman: The same is a little bit true of climate change. The theory is sort of the same. How things getting hotter or colder affects interpersonal violence is pretty clear, but why it should affect sustained yearslong warfare is far less clear. That said, unlike water wars, the empirical evidence is a little bit stronger that something’s going on. But to me, it’s just then a bit of a puzzle that still needs to be sorted out. Because once again, the fact that we’re getting jostled by unexpected temperature shocks, unexpected weather events, it’s not clear why that should lead to sustained political competition through violence, rather than finding some bargain solution.

  • (2) -> (3): It's not clear to me that global extremism and instability cause markedly greater investment into lethal autonomous weapons. The US has been using Predator drones constantly since 1995, independently of several shifts in extremism, just because they're effective; it's not clear why this would change for more autonomous weapons. More of the variance in autonomous weapon development seems to be from how much attention/funding goes to autonomous weapons as a percentage of world military budgets rather than the overall militarization of the world. As for terrorism, I doubt most terrorist groups have the capacity to develop cutting-edge autonomous weapon technology.
  • (3) -> (4): You write "In the context of AI alignment, often a distinction is drawn between misuse (bad intentions to begin with) and misalignment (good intentions gone awry). However, I believe the greatest risk is the combination of both: a malicious intention to kill an enemy population (misuse), which then slightly misinterprets that mission and perhaps takes it one step further (misalignment into x-risk possibilities)." Given that we currently can't steer a sufficiently advanced AI system at anything, plus there are sufficient economic pressures to develop goal-directed AGI for other reasons, I disagree that this is the greatest risk.

Each of the links in the chain is reasonable, but the full story seems altogether too long to be a major driver of x-risk. If you have 70% credence in the sign of each step independently, the credence in the 3-step argument goes down to 34%. Maybe you have a lower confidence than the wording implies though.
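The arithmetic behind that figure, as a quick sketch:

```python
# Three independent steps, with 70% credence in the sign of each:
steps = [0.7, 0.7, 0.7]

credence = 1.0
for p in steps:
    credence *= p

print(round(credence, 2))   # -> 0.34
```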

Comment by Thomas Kwa (tkwa) on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T23:48:37.132Z · EA · GW

Also, I think that things that are extremely infohazardy shouldn't be thought of too strongly bc without the info revelation they will likely remain very unlikely

I don't think this reasoning works in general. A highly dangerous technology could become obvious in 2035, but we could still want actors to not know about it until as late as possible. Or the probability of a leak over the next 10 years could be high, yet it could still be worth trying to maintain secrecy.

Comment by Thomas Kwa (tkwa) on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T19:57:25.448Z · EA · GW

I argued that orders of magnitude difference in tractability are rare here.

Comment by Thomas Kwa (tkwa) on EA in the mainstream media: if you're not at the table, you're on the menu · 2022-08-01T01:00:49.946Z · EA · GW

I think it's valuable to point out a problem. The fact is that the majority of media articles about EA are negative (and often inaccurate), and this has been the case for years. Inasmuch as this is a problem, all existing efforts to solve it have failed! Listing upcoming efforts seems like more of a nice addition than a mandatory component.

Comment by Thomas Kwa (tkwa) on Open Philanthropy's AI grants · 2022-07-30T19:49:23.807Z · EA · GW

It seems like normality is violated in the first graph; have you tried taking a log transform or something?

Comment by Thomas Kwa (tkwa) on Closing the Feedback Loop on AI Safety Research. · 2022-07-30T03:59:01.342Z · EA · GW

I'm not excited about this particular idea, but finding some way to iterate on alignment solutions is a hugely important problem.

Comment by Thomas Kwa (tkwa) on Remuneration In Effective Altruism · 2022-07-29T01:12:23.045Z · EA · GW

If someone says "look, I'll do the work, and I will be excellent, but you have to pay me $150k a year or I walk" I would doubt that they were that serious about helping other people. They'd sound more like your classic corporate lawyer than an effective effective altruist.

Adding to my other comment, there are several reasons I might choose a different job if I were paid <<150k, even as someone who is basically dedicated to maximizing my impact.

  • My bargain with the EA machine lets selfish parts of me enjoy a comfortable lifestyle in exchange for giving EA work my all.
  • Salaries between EA orgs should be a signal of value in order to align incentives. If EA org A is paying less than org B, but I add more value at org A, this is a wrong incentive that could be fixed at little cost.
  • There are time-money tradeoffs like nice apartments and meal delivery that make my productivity substantially higher with more money.
  • Having financial security is really good for my mental health and ability to take risks; in the extreme case, poverty mindset is a huge hit to both.
  • Underpaying people might be a bad omen. The organization might be confusing sacrifice with impact, be constrained by external optics, unable to make trades between other resources, or might have trouble getting funding because large funders don't think they're promising.
  • Being paid, say, 15% of what I could probably make in industry just feels insulting. This is not ideal, but pay is tied up with status in our society, and taking pay cuts especially so.
  • An organization that cuts my pay might be exhibiting distrust and expecting me to spend the money poorly; this is also negative signal.
Comment by Thomas Kwa (tkwa) on Remuneration In Effective Altruism · 2022-07-29T00:41:07.372Z · EA · GW

I agree that this is more accurate.

I think what I was going for is: say someone is day trading and making tens of millions of $/year. It would be pretty unreasonable to expect them to donate 98%, especially because time-money tradeoffs mean they can probably donate more if donating only 90%.

This is not necessarily equivalent to a situation where someone is producing tens of millions of research value per year, but it's similar in a few respects:

  • Keeping all the value for themself isn't on the table for an altruist
  • Barring optics, taxes, etc. the impact calculation is similar
  • Pay provides incentives and a signal of value in both cases
  • Deviating from the optimum is deadweight loss

I don't think salary norms in these circumstances should be identical, but there's a sense in which having completely unrelated salary norms for each case bothers me. It's a wrong price signal, like a $1000 bottle of wine that tastes identical to $20 wine, or a Soviet supermarket filled with empty shelves due to price controls.

By this measure, I'm probably "essentially donating" only around 94%, though it does get closer to 99% if you count equity from possible startups.

Comment by Thomas Kwa (tkwa) on Neartermists should consider AGI timelines in their spending decisions · 2022-07-27T05:59:54.860Z · EA · GW

Surely most neartermist funders think that the probability that we get transformative AGI this century is low enough that it doesn't have a big impact on calculations like the ones you describe?

This is not obvious to me; I'd guess that for a lot of people, AGI and global health live in separate magisteria, or they're not working on alignment due to perceived tractability.

This could be tested with a survey.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-07-26T04:05:49.539Z · EA · GW

What's the right way to interact with people whose time is extremely valuable, equivalent to $10,000-$1M per hour of OpenPhil's last dollar? How afraid should we be of taking up their time? Some thoughts:

  • Sometimes people conflate time cost with status, and the resulting shyness/fear can prevent you from meeting someone-- this seems unwarranted because introducing yourself only takes like 20 seconds.
  • The time cost should nevertheless not be ignored; how prepared you are for a 1-hour meeting might be the largest source of variance in the impact you produce/destroy in a week.
  • At a $24k/hour valuation, a one-hour meeting costs $24k, but you might only need 2 Slack messages, which take maybe one minute (≈ $400) to respond to.
  • Being net-negative in immediate impact by taking up more mentor/manager time than the impact you create is not necessarily bad, because you build skills towards being much more valuable.
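A rough sketch of the arithmetic in the third bullet (illustrative numbers only):

```python
# Comparing ways to use someone's time at a $24,000/hour valuation.
rate_per_hour = 24_000

meeting_cost = rate_per_hour * 1.0       # one-hour meeting: $24,000
slack_cost = rate_per_hour / 60 * 1.0    # ~1 minute of Slack replies: $400

print(f"${meeting_cost:,.0f} vs ${slack_cost:,.0f}")   # -> $24,000 vs $400
```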
Comment by Thomas Kwa (tkwa) on Remuneration In Effective Altruism · 2022-07-26T01:42:05.667Z · EA · GW

Thanks for writing this.

I feel much more important and valued at an EA position if I'm paid a high salary, and sad when I'm not, because the difference only converts to 1-2% of my impact. So I'm glad to see someone write about the framing that this should essentially be treated as donating 98-99%. Being underpaid would be especially annoying to the extent it prevents beneficial time-money tradeoffs.

Note I currently feel adequately paid.

Comment by Thomas Kwa (tkwa) on Criticism of EA Criticism Contest · 2022-07-25T23:49:36.154Z · EA · GW

My guess is they are perceived as heretical.

Comment by Thomas Kwa (tkwa) on It's OK not to go into AI (for students) · 2022-07-25T23:42:35.244Z · EA · GW

Thanks for the good reply.

This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way — and I don't think we want to make any particular thing a default "justify if not X".

I'm unsure how much I want AI safety to be the default, there are a lot of factors pushing in both directions. But I think one should have a reason why one isn't doing each of the top ~10 things one could, and for a lot of people AI safety (not necessarily technical research) should be on this list.

When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.

My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.

Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.

If someone finds EA strategy in [global health] unconvincing, do they need to justify why they aren't writing up their arguments?

This was thought-provoking for me. I think existing posts of similar types were hugely impactful. If money were a bottleneck for AI safety and I thought money currently spent on global health should be reallocated to AI safety, writing up some document on this would be among the best things I could be doing. I suppose in general it also depends on one's writing skill.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2022-07-25T19:25:21.247Z · EA · GW

Epistemic status: showerthought

If I'm capable of running an AI safety reading group at my school, and I learn that someone else is doing it, I might be jealous that my impact is "being taken".

If I want to maximize total impact, I don't endorse this feeling. But what feeling does make sense from an impact maximization perspective? Based on Shapley values, you should

  • update downwards on the impact they get (because they're replaceable)
  • update downwards on the impact you get, if you thought this was your comparative advantage (because you're replaceable).
  • want to find a new task/niche where you're less replaceable.

I claim that something like this is the good form of impact jealousy. (Of course, you should also be happy the work is happening).
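As an illustrative sketch (with hypothetical numbers: either organizer alone could run the reading group, producing value 1), a brute-force Shapley computation shows how the credit splits when you're replaceable:

```python
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution over all join orderings."""
    vals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            vals[p] += v(frozenset(coalition)) - before
    return {p: vals[p] / len(perms) for p in players}

# Either organizer alone produces the full value 1; together, still 1.
v = lambda s: 1.0 if s else 0.0
print(shapley(["me", "them"], v))  # {'me': 0.5, 'them': 0.5}
```

With n interchangeable people the credit for each falls to 1/n, which is the quantitative version of "update downwards on the impact you get, because you're replaceable."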

Comment by Thomas Kwa (tkwa) on It's OK not to go into AI (for students) · 2022-07-15T23:13:51.582Z · EA · GW

I agree with the statement "It's OK for some people not to go into AI" but strongly disagree with the statement "It's OK not to go into AI, even if you don't have a good reason". One can list increasingly unreasonable statements like:

  1. AI safety is an important problem.
  2. Every EA should either work on AI safety or have a good reason why they're not.
  3. EAs should introduce themselves with what cause area they're working on, and their reason for not working on AI if applicable.
  4. Literally everyone should work on AI safety; there are no excuses not to.

I want to remind people of (1) and defend something between (2) and (3).

Our goal as world optimizers is to find the best thing we could possibly be doing, subject to various constraints like non-consequentialist goals and limited information. This means that for every notable action worth considering, we should have a good reason why we're not doing it. And (2) is just a special case of this, since working on alignment (technical or otherwise) is definitely a notable action. Because there are good reasons to work on AI safety, you need to have a better reason not to.

  • Having 100x more traction on the problem of making Malawi (0.25% of the world population) a developed country is not a good enough reason, because under most reasonable moral views, preventing human extinction is >100x better than raising the standard of living of 0.25% of the world population.
    • Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.
  • Wanting to expand what EA thinks is possible is not a sufficient reason, because you also have to argue that the expected value of this is higher than investing into causes we already know about.
    • Holden Karnofsky makes the case against "cause X" here: AI risk is already really large in scale; he essentially says "this century we're going to figure out what kind of civilization is going to tile the entire galaxy", and it's hard to find something larger in scale than that; x-risks are also neglected. It's hard for tractability differences to overwhelm large differences in scale/neglectedness.
  • Not having thought about the arguments is not a good enough reason. Reality is the judge of your actions and takes no excuses.
  • Majoring in something other than math/CS is not a good enough reason, because your current skills or interest areas don't completely determine your career comparative advantage
  • Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.
    • Also, if working on alignment is your comparative advantage, it might make sense to work on it even if the arguments have a 10% chance of being right.
  • Some potentially sufficient reasons:
    • "I tried 3 different kinds of AI safety research and was worse than useless at all of them, and have various reasons not to do longtermist community-building either"
    • "I have 100x more traction on biorisk and think biorisk is 20x smaller than AI risk"
    • "I have 100x more traction on making the entire continent of Africa as developed as the USA, plus person-affecting views, plus my AI timelines are long enough that I can make a difference before AGI happens"
    • "I think suffering-focused ethics are correct, so I would rather prevent suffering now than have a small chance of preventing human extinction"
    • "I can become literally the best X in the world, or a mediocre AI safety community-builder. I think the former is higher-impact."
    • "I have a really good story for why the arguments for AI risk are wrong and have spent the last month finding the strongest version of my counterarguments; this will direct lots of resources to preventing various moral atrocities in worlds where I am right"

edit: after thinking about it more I don't endorse the below paragraph

I also want to defend (3) to some extent. Introducing yourself with your target cause area and reasons for working on it seems like a pretty natural and good thing. In particular it forces you to have a good reason for doing what you're doing. But there are other benefits too: it's an obvious conversation starter, and when half the people at the EA event are working on AI safety it just carries a lot of information.

Comment by Thomas Kwa (tkwa) on Recommendations for non-technical books on AI? · 2022-07-12T23:29:44.775Z · EA · GW

What's your goal?

Comment by Thomas Kwa (tkwa) on Doom Circles · 2022-07-09T07:12:35.284Z · EA · GW

Having done a few doom circles I think they're much better when 

  • there are <6 people in the circle, so everyone can feel attended to
  • I have high trust in everyone, which might come from knowing them well or having been vulnerable in other ways
  • everyone is in the right mood for doom circles; vague insecurities or pressure create the wrong mood

Comment by Thomas Kwa (tkwa) on Fanatical EAs should support very weird projects · 2022-07-09T07:04:39.736Z · EA · GW

There are ways to deal with Pascal's Mugger with leverage penalties, which IIRC deal with some problems but are not totally satisfying in extremes.

Comment by Thomas Kwa (tkwa) on Penn EA Residency Takeaways · 2022-06-11T04:04:37.296Z · EA · GW

And why Quinn M wasn't tapped, or myself. Is there a view formed that people in their later 20s out working aren't a super good fit for university work? Is there a view formed that top universities are culturally particular, and that people who weren't at top universities would screw it up? Things like this seem plausible to me, but I'm shooting in the dark.

My view is some of (1) and not much of (2), and people who think more about university groups might have more concerns. The point at which we made the mistake was probably not even thinking about it, and it's plausible that if we had, we would have connected them to EA Philly.

Comment by Thomas Kwa (tkwa) on Penn EA Residency Takeaways · 2022-06-10T23:11:24.847Z · EA · GW

Thanks for posting this comment.

TL;DR: There is basically no Penn EA group right now. However, I don't think this is as severe a failure as it sounds because all the potential organizers might be doing higher-impact things.

Basically, the first couple months of Penn EA found ~6 engaged people with any time at all to help run a group, not including me and Sydney. My impression is that one or two of them drifted away from EA as a philosophy, stopped having time, or something. Three of them dropped out of Penn (undergrad or graduate); of these, Ashley and Akash are working on movement-scale talent search and Tamera is skilling up towards direct work. They thought about their decisions pretty carefully from an impact maximization perspective, and I think they are creating substantially more impact than they could have at Penn. The other potential organizer, Brandon, is a full-time student who was pretty new to EA and didn't have capacity to run a group on his own.

My best guess is that under the optimal allocation of people, 0-1 of the 3 who dropped out are still at Penn doing community-building, and 2-3 of them correctly dropped out.

Brain drain towards direct work and bigger meta work is an established pattern with university groups. The Stanford EA executive board had 9 people in early 2021, and for a while in late 2021 Stanford EA was basically being run by 1-2 people and in danger of falling apart, because basically the entire board graduated, dropped out, or spent all of their spare time doing part-time EA things that were not running Stanford EA. I think that most of them made good decisions. To be fair, a little of this was for bad reasons: when everyone else is moving to Berkeley/the UK, it's fun to be there and so on. Also, even if everyone had an absolute disadvantage at running Stanford EA, people should have coordinated so that whoever had the least comparative disadvantage kept succession from failing. (Although I know a lot of Stanford EA people, some of this is rumor/speculation, so don't take my word as definitive.)

This was all exacerbated by three shifts in movement-building philosophy over the last few months.

  • The first is from "EA group community-building" to "global talent search". There is now less emphasis on having a university group at every top-20 university in the US, and more emphasis on (a) search for top talent in neglected countries like India and Brazil (either at top universities there or through other means), and (b) cause-area-specific discussion groups at the very top universities like MIT. Due to all the longtermist resources now, this is happening the most with AI safety reading groups, some following Cambridge's AGISF syllabus, some more advanced that are filled with aspiring researchers. But it's also happening with e.g. altpro. I think this shift is on net good but depends heavily on implementation-- there can be old-style groups that cause a lot of impact on important problems because they happen to target talent well and are large enough to specialize into cause areas, or whatever reason.
  • The second shift is from intro/advanced fellowships to retreats/workshops. Workshops have much higher fidelity, plus concentrate people from multiple universities in a space so they can have high-context conversations and connect with each other. There's a tradeoff here, but my view on that is outside the scope of this comment. I think it's potentially OK that Penn didn't have fellowships for this reason, though it's pretty bad that they didn't have weekly dinners.
  • The third shift is towards greater ambition and larger action spaces. This is why most of the organizers dropped out. I think Penn had a greater proportion of people who were willing to drop out than most university groups, and this has diminishing returns when it is as high as 75%.

Lessons for new university groups

If there's an update for new university groups, it's that building capacity to recruit talented people at moderate efficiency is easy, but succession fails by default and is even more likely to fail when you don't put lots of effort into it. The community is moving towards a place where it might no longer be necessary to develop university groups in the same way, and succession is somewhat less of a perennial issue, thanks to workshops and efforts like GCP. However, there's value in university groups existing at all, e.g. as a way for people to hear about EA in the first place. There's also a lot of value in building groups with their own culture (this gives you intellectual diversity and information value), though this is easy to screw up either by reducing efficiency or targeting the wrong things.

The top considerations I have for whether to do community-building at university groups, for people who are willing to drop out, are basically "what's your counterfactual? (if movement-scale CB, does this have higher fidelity and growth rate?)", and "are you irreplaceable in this university CB role?" and "if you build the university group, will it actually produce a lot of impact (the classic source of impact is career changes of highly talented people towards the best thing they could be doing)?". There are sub-considerations which I can expand on, but I haven't spent much time thinking about university group strategy, so I'm pretty uncertain.


I think the mistake was that Sydney, Ashley, and I basically stopped thinking about Penn when we left. We all have pretty limited capacity to take on side projects, but one of us should have (a) tried to have weekly calls with the remaining Penn organizers, and (b) connected them with GCP. I think the median outcome of this is that not much happened at Penn due to sheer lack of organizer hours, but there was still lots of expected value there for relatively little investment.

Comment by Thomas Kwa (tkwa) on Terminate deliberation based on resilience, not certainty · 2022-06-06T18:01:28.988Z · EA · GW

Succinctly, beliefs should behave like a martingale, and the third and fourth graphs are probably not a martingale. It's possible to update based on your expected evidence and still get graphs like in 3 or 4, but this means you're in an actually unlikely world.

That said, I think it's good to keep track of emotional updates as well as logical Bayesian ones, and those can behave however.
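A minimal sketch of the martingale property in a toy setup (hypothetical numbers: a coin that is either fair or heads-with-probability-0.8, with equal prior): the expected posterior after the next flip equals the current posterior exactly, so on average you should not expect your belief to drift.

```python
# One Bayesian update step for H = "coin is biased (p=0.8)" vs "coin is fair (p=0.5)".
def expected_next_posterior(q):
    """Expectation of the posterior on H after one more flip, given current posterior q."""
    p_heads = q * 0.8 + (1 - q) * 0.5        # predictive probability of heads
    q_heads = q * 0.8 / p_heads              # posterior after seeing heads
    q_tails = q * 0.2 / (1 - p_heads)        # posterior after seeing tails
    return p_heads * q_heads + (1 - p_heads) * q_tails

for q in [0.1, 0.5, 0.9]:
    assert abs(expected_next_posterior(q) - q) < 1e-9  # martingale: E[q_next | q] = q
```

Graphs 3 and 4 violate this: a belief trajectory with a predictable drift means the updates weren't proper Bayesian updates, or you landed in a genuinely unlikely world.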

Comment by Thomas Kwa (tkwa) on Potatoes: A Critical Review · 2022-05-27T18:21:53.253Z · EA · GW

(epistemic status: possibly dumb question by someone learning causal inference)
Shouldn't you test only combinations of controls that are good conditioning strategies for a plausible causal DAG?

Comment by Thomas Kwa (tkwa) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T22:56:12.875Z · EA · GW

Sure, here's the ELI12:

Suppose that there are two billionaires, April and Autumn. Originally they were funding AMF because they thought working on AI alignment would be 0.01% likely to work and solving alignment would be as good as saving 10 billion lives, which is an expected value of 1 million lives, lower than you could get by funding AMF.

After being in the EA community a while they switched to funding alignment research for different reasons.

  • April updated upwards on tractability. She thinks research on AI alignment is 10% likely to work, and solving alignment is as good as saving 10 billion lives.
  • Autumn now buys longtermist moral arguments. Autumn thinks research on AI alignment is 0.01% likely to work, and solving alignment is as good as saving 10 trillion lives.

Both of them assign the same expected utility to alignment-- 1 billion lives. As such they will make the same decisions. So even though April made an epistemic update and Autumn a moral update, we cannot distinguish them from behavior alone.

This extends to a general principle: actions are driven by a combination of your values and subjective probabilities, and any given action is consistent with many different combinations of utility function and probability distribution.
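The April/Autumn arithmetic can be sketched directly (numbers taken from the example above):

```python
# Expected value (in lives saved) of funding alignment under each worldview.
def ev(p_success, value_if_solved):
    return p_success * value_if_solved

april = ev(0.10, 10e9)      # updated on tractability: 10% x 10 billion lives
autumn = ev(0.0001, 10e12)  # updated on values: 0.01% x 10 trillion lives

# Both land on ~1 billion lives, so their funding behavior is identical.
assert abs(april - 1e9) < 1 and abs(autumn - 1e9) < 1
```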

As a second example, suppose Bart is an investor who makes risk-averse decisions (say, invests in bonds rather than stocks). He might do this for two reasons:

  1. He would get a lot of disutility from losing money (maybe it's his retirement fund)
  2. He irrationally believes the probability of losing money is higher than it actually is (maybe he is biased because he grew up during a financial crash).

These different combinations of probability and utility inform the same risk-averse behavior. In fact, probability and utility are so interchangeable that professional traders-- just about the most calibrated, rational people with regard to probability of losing money, and who are only risk-averse for reason (1) -- often model financial products as if losing money is more likely than it actually is, because it makes the math easier.

Comment by Thomas Kwa (tkwa) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T01:50:26.762Z · EA · GW

Maybe related is that even for ideal expected utility maximizers, values and subjective probabilities are impossible to disentangle by observing behavior. So it's not always easy to tell what changes are value drift vs epistemic updates.

Comment by Thomas Kwa (tkwa) on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-24T01:40:47.477Z · EA · GW

But if a random variable is 0 with probability measure 1 and is undefined with probability measure 0, we can't just say it's identical to the zero random variable or that it has expected value zero (I think, happy to be corrected with a link to a math source).

The definition of expected value (taking the sample space to be [0,1] with uniform measure) is E[X] = ∫₀¹ X(ω) dω. If the set of discontinuities of a function has measure zero, then it is still Riemann integrable. So the integral exists despite X not being identical to the zero random variable, and its value is zero. In the general case you have to use measure theory, but I don't think it's needed here.

Also, there's no reason our intuitions about the goodness of the infinite sequence of bets have to match the expected value.

Comment by Thomas Kwa (tkwa) on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-23T17:45:05.318Z · EA · GW

I don't have a confident opinion about the implications to longtermism, but from a purely mathematical perspective, this is an example of the following fact: the EV of the limit of an infinite sequence of policies (say yes to all bets; EV=0) doesn't necessarily equal the limit of the EVs of each policy (no, yes no, yes yes no, ...; EV goes to infinity).

In fact, either or both quantities need not converge. Suppose that bet 1 is worth -$1, bet 2 is worth +$2, bet k is worth (-1)^k · $k, and you must either accept all bets or reject all bets. The EV of rejecting all bets is zero. The limit of the EV of accepting the first k bets is undefined. The EV of accepting all bets depends on the distribution of outcomes of each bet and might also diverge.
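Concretely, taking bet k to be worth (-1)^k · $k (an assumed form consistent with the -$1, +$2 pattern), the EVs of the policies "accept the first k bets" oscillate forever:

```python
# EV of "accept the first k bets" when bet k is worth (-1)^k * $k.
partial_sums = []
total = 0
for k in range(1, 9):
    total += (-1) ** k * k
    partial_sums.append(total)

print(partial_sums)  # [-1, 1, -2, 2, -3, 3, -4, 4] -- oscillates, no limit
```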

The intuition I get from this is that infinity is actually pretty weird. The idea that if you accept 1 bet, you should accept infinite identical bets should not necessarily be taken as an axiom.