Defining Meta Existential Risk 2019-07-09T18:16:34.736Z · score: 12 (10 votes)
An Argument to Prioritize "Tithing to Catalyze a Paradigm Shift and Negate Meta Existential Risk" 2019-03-15T15:47:25.201Z · score: 6 (7 votes)
How can I internalize my most impactful negative externalities? 2019-01-17T05:38:13.236Z · score: 9 (6 votes)
Memetic Tribes and Culture War 2.0 (Article Discussion) 2018-09-23T23:05:11.425Z · score: 11 (11 votes)
What is the Most Helpful Categorical Breakdown of Normative Ethics? 2018-08-15T20:24:31.312Z · score: 4 (8 votes)
Current Estimates for Likelihood of X-Risk? 2018-08-06T18:05:54.763Z · score: 23 (22 votes)
An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" 2018-04-03T21:59:43.951Z · score: 10 (14 votes)


Comment by rhys_lindmark on Why I Am Not a Technocrat · 2019-08-20T13:51:40.957Z · score: 10 (5 votes) · EA · GW

Link to an ongoing Twitter discussion with Rob Wiblin, Vitalik Buterin, etc. here:

Comment by rhys_lindmark on Philanthropy Vouchers as Systemic Change · 2019-07-11T17:38:01.207Z · score: 5 (4 votes) · EA · GW

I like this style of thinking. A couple quick notes:

1. Various U.S. presidential candidates have proposals for "democracy dollars", which are similar to philanthropy vouchers, but scoped to political giving. AFAICT, they have a different macro goal as well: to decentralize campaign financing. See and

2. I agree that non-politics can be systemic. See this post that expands on your idea of "what if everyone tithed 10%?"

3. It would be interesting to see philanthropic vouchers tested in the EA community. Kind of like a reverse EA Funds/donor lottery, where an EA donor gives lots of EAs vouchers (money) and then the EAs donate it.

Comment by rhys_lindmark on An Argument to Prioritize "Tithing to Catalyze a Paradigm Shift and Negate Meta Existential Risk" · 2019-03-15T18:39:39.814Z · score: 3 (2 votes) · EA · GW

Woof! Thanks for noting this Stefan! As you say, cause neutrality is used in the exact opposite way (to denote that we select causes based on impartial estimates of impact, not that we are neutral about where another person gives their money/time). I've edited my post slightly to reflect this. Thanks!

Comment by rhys_lindmark on How can I internalize my most impactful negative externalities? · 2019-01-22T22:30:11.595Z · score: 1 (1 votes) · EA · GW

Boom, thanks! Dig the push back here. I generally agree with Scott Alexander's comment at the bottom: "I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA."

(Though I also believe there are some "macro systemic" reasons for believing that offsetting is a crucial piece to moving more folks to an EA-based non-accumulation mindset. More detailed explanation of this later!)

Comment by rhys_lindmark on How can I internalize my most impactful negative externalities? · 2019-01-22T22:28:24.311Z · score: 1 (1 votes) · EA · GW

Awesome resource, thanks for the link! (Also, I had never heard of Pigouvian taxes before—thanks!)

Given your list, I'd group the "categories" of externalities into:

  • Environment (driving, emitting carbon, agriculture, municipal waste)
  • Public health (driving, obesity, alcohol, smoking, antibiotic use, gun ownership)
  • Financial (debt)

And, if I understand it correctly, it's tough for me to offset some of these. This is because:

  • Luckily, I just happen to not do many of them (e.g. driving, obesity, alcohol, smoking, debt).
  • But even if I did, it's not clear to me how to offset. Given your research in this area, could you help me answer this question: if I (or people in the developed world generally) were to offset the externalities of our actions, what should we offset? The first clear answer is paying to offset our carbon emissions. What would be "#2", and how would we "pay" to offset it? (e.g. If I were obese, who would I pay to offset that?)
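For the carbon case (the one clear answer above), the arithmetic of offsetting is just estimated emissions times a price per ton. A minimal sketch; the per-capita emissions figure and offset price below are illustrative assumptions, not sourced estimates:

```python
def offset_cost(tons_co2: float, price_per_ton: float) -> float:
    """Cost to offset a carbon footprint: emissions times offset price."""
    return tons_co2 * price_per_ton

# Assumed illustrative figures: ~16 tons CO2/year for a typical
# U.S. resident, $10/ton offset price.
annual_cost = offset_cost(16.0, 10.0)
print(annual_cost)  # 160.0
```

The hard part for the other externality categories is exactly what's missing here: there's no agreed "price per unit" or counterparty to pay.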


Comment by rhys_lindmark on Current Estimates for Likelihood of X-Risk? · 2018-08-12T18:40:59.111Z · score: 2 (2 votes) · EA · GW

Perfect, thanks! I agree with most of your points (and just writing them here for my own understanding/others):

  • Uncertainty is hard (long time scales, humans are adaptable, risks are systemically interdependent so we get zero or double counting)
  • Probabilities have incentives (e.g. Stern's discounting incentive)
  • Probabilities get simplified (0-10% can turn into 5% or 0% or 10%)

I'll ping you as I get closer to an editable draft of my book, so we can ensure I'm painting an appropriate picture. Thanks again!

Comment by rhys_lindmark on Current Estimates for Likelihood of X-Risk? · 2018-08-08T01:15:05.046Z · score: 2 (2 votes) · EA · GW

Hey Simon! Thanks for writing up this paper. The final 1/3 is exactly what I was looking for!

Could you give us a bit more texture on why you think it's "best not to put this kind of number on risks"?

Comment by rhys_lindmark on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2018-04-04T17:10:59.815Z · score: 3 (3 votes) · EA · GW

Thanks! Here are my other favorite bear/skeptical/reasonable takes:


Comment by rhys_lindmark on Which five books would you recommend to an 18 year old? · 2017-09-06T18:31:11.161Z · score: 2 (2 votes) · EA · GW

Love this exercise (I read a non-fiction book a week, so I think about this a lot!). I'd definitely put an EA book in the top 5, but I think we get more differentiated advantage by adding non-EA books too. My list:

  1. On Direction and Measuring Your Impact—Doing Good Better
  2. On Past-Facing Pattern Matching from History—Sapiens
  3. On Future-Facing Tech Trends—Machine, Platform, Crowd
  4. On Prioritization and Process—Running Lean
  5. On Communication—An Everyone Culture

Honorable Mentions:

  1. Influence/Hooked/Thinking Fast and Slow (on behavioral psychology)
  2. World After Capital/Homo Deus/The Inevitable (more macro trends)
  3. Designing Your Life (process)
  4. Nonviolent Communication (communication)
Comment by rhys_lindmark on Open Thread #38 · 2017-08-24T16:19:19.098Z · score: 1 (1 votes) · EA · GW

I'm interested in quantifying the impact of blockchain and cryptocurrency from a ITN perspective. My instinct is that the technology could be powerful from a "root cause incentive" perspective, from a "breaking game theory" perspective, and from a "change how money works" perspective. I'll have a more full post about this soon, but here's some of my initial thoughts on the subject:


I'd be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It's clearly not neglected!)

Comment by rhys_lindmark on Open Thread #38 · 2017-08-24T15:59:18.168Z · score: 0 (0 votes) · EA · GW

Great question. Gnosis and Augur are building decentralized prediction markets on the Ethereum blockchain. Their goal is to "match the global liquidity pool to the global knowledge pool."

I've asked them how they're thinking about hedgehogs to form a collective fox-y model (and then segmenting the data by hedgehog type).

But yeah, I think they will allow you to do what you want above: "Questions of the form: if intervention Y occurs what is the expected magnitude of outcome Z."
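One way a conditional market could answer "if intervention Y occurs, what is the expected magnitude of outcome Z": treat the market prices of outcome buckets (conditional on Y) as probabilities and take the expectation. A sketch under that assumption; the bucket values and prices are hypothetical, not quotes from any real market:

```python
def expected_magnitude(bucket_prices: dict[float, float]) -> float:
    """Treat market prices of conditional outcome buckets as probabilities
    and return the probability-weighted outcome magnitude."""
    total = sum(bucket_prices.values())
    # Normalize in case prices don't sum exactly to 1.
    return sum(value * price / total for value, price in bucket_prices.items())

# Hypothetical market on "if intervention Y occurs, outcome Z is...":
# maps outcome value -> market price of that bucket.
prices = {0.0: 0.2, 10.0: 0.5, 50.0: 0.3}
print(expected_magnitude(prices))  # 20.0
```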

Comment by rhys_lindmark on Open Thread #38 · 2017-08-24T15:50:43.074Z · score: 2 (2 votes) · EA · GW

I'm super into this! I'd be happy to check out your rough sketch. A couple thoughts:

  1. I think we should not bucket all of our time into a general time bucket. In fact, some of our time needs to be "fun creative working time". e.g. Sometimes I work on EA things, and sometimes I make music. "Designing an EA board game" could be part of that "fun bucket".
  2. A game like Pandemic could be a good starting point for designing the game (or to work with them on designing it). Essentially, use Pandemic as the MVP game for this, then expand to other cause areas (or to EA as a whole). Also, see 80,000 Hours' most recent podcast on pandemics (the concept, not the board game :)
  3. Here's my favorite piece on game design (by Magic the Gathering's head designer)
  4. My instinct is that this should be a collaborative game (or, as William MacAskill would say, a "shared aims community").
Comment by rhys_lindmark on Open Thread #38 · 2017-08-24T15:40:41.370Z · score: 2 (2 votes) · EA · GW

Nice link! I think there's worthwhile research to be done here to get a more textured ITN.

On Impact—Here's a small example of x-risk (nuclear threat coming from inside the White House):

On Neglectedness—Thus far it seems highly neglected, at least at a system-level. is one of the only projects I know in the space (but the founder is not contributing much time to it)

On Tractability—I have no clue. Many of these "bottom up"/individual-level solution spaces seem difficult and organic (though we would pattern match from the spread of the EA movement).
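To make the ITN framing above concrete, here's a toy scoring sketch. The multiplicative combination of 0-10 scores and the example numbers are my own assumptions for illustration, not an official ITN methodology:

```python
def itn_score(impact: float, tractability: float, neglectedness: float) -> float:
    """Toy ITN prioritization: multiply 0-10 scores on each dimension,
    so a zero on any dimension zeroes out the cause."""
    return impact * tractability * neglectedness

# Hypothetical scores matching the rough read above: decent impact,
# high neglectedness, unclear (here: low) tractability.
score = itn_score(impact=7, tractability=3, neglectedness=8)
print(score)  # 168
```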

  1. There's a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I'm tempted by pushing an EA mindset "outcome-izing/RCT-ing" the efforts in the space. So even if it doesn't score highly on Neglectedness, we could attempt to move the solutions towards more cost-effective/consequentialist solutions.
  2. This is highly related to the movement that Tristan Harris (who was at EAGlobal) is pushing.
  3. I feel like we need to differentiate between the "political-level" and the "community-level".
  4. I'm tempted to think about this from the "communities connect with communities" perspective. i.e. The EA community is the "starting node/community" and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by n-dimensional space seen here).
  5. Another version of this could be "scale the CFAR community".
  6. I think this could be related to Land Use Reform and how we construct empathetic communities with a variety of people. (Again, see Nicky Case.)
Comment by rhys_lindmark on Local Group Support Overview: CEA, EAF and LEAN · 2017-08-24T15:20:51.911Z · score: 0 (0 votes) · EA · GW

Awesome. Thanks Richenda—I'm looking into Secular Student Alliance now!

Comment by rhys_lindmark on How should we assess very uncertain and non-testable stuff? · 2017-08-23T18:18:10.971Z · score: 1 (1 votes) · EA · GW

Yep yep, happy to! A couple things come to mind:

  1. We could track the "stage" of a given problem/cause area, in a similar way that startups are tracked by Seed, Series A, etc. In other words, EA prioritization would be categorized w.r.t. stages/gates. I'm not sure if there's an agreed on "stage terminology" in the EA community yet. (I know GiveWell's Incubation Grants and EAGrants are examples of recent "early stage" investment.) Here would be some example stages:

Stage 1) Medium dive into the problem area to determine ITN.
Stage 2) Experiment with MVP solutions to the problem.
Stage 3) Move up the hierarchy of evidence for those solutions—RCTs, etc.
Stage 4) For top solutions with robust cost-effectiveness data, begin to scale.

(You could create something like a "Lean Canvas for EA Impact" that could map the prioritized derisking of these stages.)

  2. From the "future macro trends" perspective, I feel like there could be more overlap between EA and VC models that are designed to predict the future. I'm imagining this like the current co-evolving work environment with "profit-focused AI" (DeepMind, etc.) and "EA-focused AI" (OpenAI, etc.). In this area, both groups are helping each other pursue their goals. We could imagine a similar system, but for any given macro trend. i.e. That macro trend is viewed from a profit perspective and an impact/EA perspective.

In other words, this is a way for the EA community to say "The VC world has [x technological trend] high on their prioritization list. How should we take part from an EA perspective?" (And vice versa.)

(fwiw, I see two main ways the EA community interacts in this space—pursuing projects that either a) leverage or b) counteract the negative externalities of new technologies. Using VR for animal empathy is an example of leverage. AI alignment is an example of counteracting a negative externality.)

Do those examples help give a bit of specificity for how the EA + VC communities could co-evolve in "future uncertainty prediction"?

Comment by rhys_lindmark on How should we assess very uncertain and non-testable stuff? · 2017-08-23T17:37:10.648Z · score: 0 (0 votes) · EA · GW

This isn't a unique thought, but I just want to make sure the EA community knows about Gnosis and Augur, decentralized prediction markets built on Ethereum.

Comment by rhys_lindmark on How should we assess very uncertain and non-testable stuff? · 2017-08-18T16:38:31.436Z · score: 2 (4 votes) · EA · GW

I definitely agree that information on these topics is ripe for aggregation/curation.

My instinct is to look to the VC/startup community for some insight here, specifically around uncertainty (they're in the business of "predicting/quantifying/derisking uncertain futures/projects"). Two quick examples:

I would expect an "EA-focused uncertainty model" to include gates that map a specific project through time given models of macro future trends.

Comment by rhys_lindmark on Local Group Support Overview: CEA, EAF and LEAN · 2017-08-18T16:01:06.620Z · score: 1 (3 votes) · EA · GW

Thanks for aggregating this information, Richenda! One quick bucket of thoughts around EA groups + universities:

  1. How are LEAN/CEA/EAF thinking about university chapters? Have they been an effective way of building a local community? Are there any university-focused plans going forwards?
  2. Are there other movements trying a university-focused strategy? Could we partner with/learn from them? I'm thinking about something like Blockchain Education Network.

Thanks Richenda!

Comment by rhys_lindmark on Update on Effective Altruism Funds · 2017-07-04T19:45:07.008Z · score: 0 (0 votes) · EA · GW

One note on this: blockchain-based DAOs (decentralized autonomous organizations) are a good way to decentralize a giving body (like EA Funds). Rhodri Davies has been doing good work in this space (on AI-led DAOs for effective altruism). See my recent overview of EA + Blockchain: