Update on the Simon Institute: Year One 2022-04-07T09:45:40.316Z
The role of tribes in achieving lasting impact and how to create them 2021-09-29T20:48:08.972Z
Introducing the Simon Institute for Longterm Governance (SI) 2021-03-29T18:10:12.483Z
How to make people appreciate asynchronous written communication more? 2021-03-10T19:53:24.668Z
Is pursuing EA entrepreneurship becoming more costly to the individual? 2021-03-10T19:35:59.482Z
konrad's Shortform 2021-03-10T19:11:07.902Z
Call for feedback and input on longterm policy book proposal 2020-07-07T15:20:00.854Z
Tactical models to improve institutional decision-making 2019-01-24T01:24:42.329Z
Book Summary: "Messages: The Communication Skills Book" Part I & II 2018-11-09T08:47:39.172Z
Better models for EA development: a network of communities, not a global community 2018-09-18T09:33:24.673Z
Local Community Building Funnel and Activities - EA Geneva 2018-08-09T08:22:13.543Z
Why you should consider going to EA Global 2017-05-09T14:31:18.798Z
Meetup : Geneva, Ethnobar 2015-04-14T14:12:23.365Z


Comment by konrad on Update on the Simon Institute: Year One · 2022-09-21T19:58:21.179Z · EA · GW

Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth, but it has consistently not been the best use of time. I hope you understand. Your effort has been thoroughly appreciated and continues to inform our communications with the EA community.

We have now secured around two years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we're up to. However, it is written for a UN audience and is non-exhaustive, so you may understandably remain on the fence.

Comment by konrad on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-29T06:57:37.046Z · EA · GW

Maybe a helpful reframe that avoids some of the complications of "interesting vs important" by being a bit more concrete is "pushing the knowledge frontier vs applied work"?

Many of us get into EA because we're excited about crucial-considerations-type things, and too many get stuck there, because you can currently think about them ~forever while contributing practically nothing to securing posterity. Most problems I see beyond AGI safety aren't bottlenecked by new intellectual insights (though sometimes those can still help). And even AGI safety might turn out, in practice, to come down to a leadership and governance problem.

Comment by konrad on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-05-31T21:51:12.731Z · EA · GW

This sounds great. It feels like a more EA-accessible reframing of the core value proposition of Nora's and my post on tribes.

Comment by konrad on Issues with centralised grantmaking · 2022-04-13T09:27:37.052Z · EA · GW

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up-front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

Comment by konrad on Update on the Simon Institute: Year One · 2022-04-08T22:36:16.612Z · EA · GW

Yes, happily!

Comment by konrad on Update on the Simon Institute: Year One · 2022-04-08T08:44:53.941Z · EA · GW

Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one potential such grantmaker. If you know of others we could talk to, that would be great.

The amount isn't trivial at ~600k. Max's salary also guarantees my financial stability beyond the ~6 months of runway I have. It's what has allowed us to make mid-term plans and me to quit my CBG.

Comment by konrad on Are there highly leveraged donation opportunities to prevent wars and dictatorships? · 2022-03-08T07:04:29.918Z · EA · GW

The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you're interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, this is with a focus on the UN and related institutions but if growth is sustainable for SI, we think it would be sensible to expand to EU policy engagement. 

You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.

Comment by konrad on Ideas from network science about EA community building · 2022-02-18T09:58:06.514Z · EA · GW

In my model, strong ties are the ones that need the most work because they have the highest payoff. I would suggest they generate weak ties even more efficiently than focusing on creating weak ties directly.

This hinges on the assumption that the strong-tie groups are sufficiently diverse to avoid insularity. That seems to be the case on sufficiently long timescales (e.g. 1+ years), as most strong-tie groups that are very homogeneous eventually fall apart if they're actually trying to do something and not just congratulate one another. That hopefully applies to any EA group.

That's why I'm excited that, especially in the past year, the CBG program seems to be funding more teams in various locations, instead of just individuals. And I think those CB teams would do best to build more teams who start projects. The CB teams then provide services and infrastructure to keep exchange between all teams going.

This suggests I would run fewer EAGx events (because EAGs likely cover most of that need if CEA scales further) and more local "charity entrepreneurship"-type things.

Comment by konrad on Objections to Value-Alignment between Effective Altruists · 2021-12-29T15:37:26.045Z · EA · GW

EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don't understand our values, nor are we very sure how to understand them much better, reliably. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.

Comment by konrad on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T16:52:24.238Z · EA · GW

Disclaimer: I have disagreeable tendencies; I'm working on it, but I'm biased. I think you're getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides, because they could be extremely powerful when working together.

I want to add some **speculations** on these roles in the context of the level at which we're trying to achieve something: individual or collective.

When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true from most of the literature on group decision-making and policy processes, e.g. Adaptive Rationality, Garbage Cans, and the Policy Process | Emerald Insight).

This means that the EA network should want people who are confident enough in their own world views to explore them properly, who are happy to generate new ideas through epistemic trespassing, and to explore outside of the Overton window etc. Unless your social environment productively reframes what is currently perceived as "failure", overconfidence seems basically required to keep going as a disagreeable.

By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful members of the two archetypes build up separate communities in which they are high status and which extremize one another.

To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.

Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.

If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you're convinced enough of those beliefs, they often confer power on you in and of themselves.

Incrementally assessing baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet, they also reinforce existing narratives and create consensus about what the future could be like.

Consensus about the median outcome can make it harder to break out of existing dynamics because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. odds of success of major change are low).

In a world where ground truth doesn't matter much, the power of disagreeables is to create a mob that isn't anchored in reality but that achieves the coordination to break out of local realities.

Unfortunately, for those of us whose capabilities are insufficient to achieve our aims (changing not just our local social reality but the human condition), creating a cult just isn't helpful. None of us has sufficient data or compute to do it alone.

To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won't always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.

It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival. At an individual level, that's the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail and yet survive comfortably enough to be able to feed their learnings back into the collective.

This is of course not just a question of norms, but also one of infrastructure and psychology.

Comment by konrad on Suffering-Focused Ethics (SFE) FAQ · 2021-10-29T09:19:44.832Z · EA · GW

Thank you (and an anonymous contributor) very much for this!

you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation

If that's what's causing downvotes in and of itself, I would want to caution people against it - that's how we end up in a bubble.

What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?

E.g. in his book on SFE, Vinding regularly cites people's subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition but instead of simply sharing his and the perspective of others, he immediately jumps to normative conclusions. I take issue with that, see below.

 Do you mean between "practically SFE" people and people who are neither "practically SFE" nor SFE?

Between "SFE(-ish) people" and "non-SFE people", indeed.

What do you mean [by "as a result of this deconfusion ..."]?

I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we're still the most capable species known, so we might be able to help currently unknown moral patients (either far away from us in space or in time).

So in the end, you'll want to push humanity's development as robustly as possible to maximize the chances of future good/minimize the chances of future harm. It then seems a question of empirics, or rather epistemics, not ethics, which projects to give which amount of resources to. 

In practice, we almost never face decisions where we would be sufficiently certain about the possible results for our choices to be dominated by our ethics. We need collective authoring of decisions and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don't see a need to appeal to normative theories.

Does that make sense?

Comment by konrad on Suffering-Focused Ethics (SFE) FAQ · 2021-10-20T10:37:50.484Z · EA · GW

Intrigued by which part of my comment it is that seems to be dividing reactions. Feel free to PM me with a low effort explanation. If you want to make it anonymous, drop it here.

Comment by konrad on Suffering-Focused Ethics (SFE) FAQ · 2021-10-18T06:17:17.936Z · EA · GW

Strong upvote. Most people I have encountered who identify with SFE seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino's or Vinding's work) tend to make normative claims but mostly support them with interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.

Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between "practically SFE" people and others, provided all of them subscribe to some form of longtermism or suspect that there's other life in the universe.

Comment by konrad on Feedback on Meta Policy Research Questions to Help Improve the X/GCR Field’s Policy Engagement · 2021-10-12T09:31:14.513Z · EA · GW

Thanks for starting this discussion! I have essentially the same comment as David, just a different body of literature: policy process studies.

We reviewed the field in the context of our Computational Policy Process Studies paper (section 1.1). From that, I recommend Paul Cairney's work, e.g. Understanding public policy (2019), and Weible & Sabatier’s Theories of the Policy Process (2018).

Section 4 of the Computational Policy Process Studies paper contains research directions we think are promising and that can be investigated with other methods, too. The paper was accepted by Complexity and is currently undergoing revisions - the reviewers liked our summary and thrust, but the maths is too basic for the audience, so we're expanding the model. Section 1 of our Long-term Institutional Fit working paper (an update is in the works, too) also ends with concrete questions we'd like answered.

Comment by konrad on If you had a large amount of money (at least $1M) to spend on philanthropy, how would you spend it? · 2021-09-21T15:32:36.085Z · EA · GW

Dear Khorton, I just wanted to say thank you for this vote of confidence - it is very motivating to see civil servants who think we're on to something.

Comment by konrad on konrad's Shortform · 2021-07-20T07:58:41.120Z · EA · GW

Our World in Data has published two great posts this year highlighting how the often-proposed dichotomy between economic growth and sustainability is false.

In The economies that are home to the poorest billions of people need to grow if we want global poverty to decline substantially, Max Roser points out that given our current wealth,

the average income in the world is int.-$16 per day

That is far below what we'd think of as the poverty line in developed countries. This means that mere redistribution of what we have is insufficient - we'd all end up poor and unable to continue developing much further because we'd be too occupied with mere survival. In How much economic growth is necessary to reduce global poverty substantially?, he writes:

I found that $30 per day is, very approximately, the level below which people are considered poor in high-income countries.

And in the section Is it possible to achieve both, a reduction of humanity’s negative impact on the environment and a reduction of global poverty?, he adds:

As you will see in our writing there are several important cases in which an increased consumption of specific products gets into unavoidable conflict with important environmental goals; in such cases we aim to emphasize that we all as individuals, but also entire societies, should strongly consider to reduce the consumption of these products – and thereby reduce the use of specific resources and forgo some economic growth – to achieve these environmental goals. We believe a clear understanding of which specific reductions in production and consumption are necessary to reduce our impact on the environment is a much more forceful approach to reducing environmental harm than an unspecific opposition to economic growth in general.

So for discussions on how to approach individual "consumption" or policymaking around it, we could start a list of specific products to avoid. Would somebody be up for compiling this? It would be a resource I'd link to quite regularly. You can apparently just extract them from the 13 links Max Roser put just above the paragraph cited above. It would make for a great, short and crisp EA Forum post, too.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-07-01T15:12:04.731Z · EA · GW

To avoid spamming more comments, one final share: our resource repository is starting to take shape. Two recent additions that might be of use to others:

In the works: a brief guide to decision-making on wicked problems, an analysis of 28 policymaker interviews on "decision-making under uncertainty and information overload" and a summary of our first working paper.

We have set up an RSS feed for the blog (or just subscribe to the ~quarterly newsletter).

And last but not least, we now have fiscal sponsorship for tax-deductible donations from the US, UK and the Netherlands via EA Funds and a lot of room for more funding.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-06-24T22:43:14.956Z · EA · GW

We have published a few additional blog posts of interest:

Comment by konrad on Effective charities for improving institutional decision making and improving global coordination · 2021-05-24T09:15:08.789Z · EA · GW

Disclaimer: I am a co-founder.

The Simon Institute for Longterm Governance. We help international civil servants understand individual and group decision-making processes to foster the metacognition and tool-use required for tackling wicked problems like global catastrophic risks and the representation of future generations.

We have a well-researched approach and direct access to senior levels in most international organizations. Given that we just launched, we have no sense of our effectiveness yet but hope to provide a guesstimate by 2023.

You can donate to us here.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-05-02T16:44:44.757Z · EA · GW

Hi! We uploaded drafts for two pieces last week: 

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:53:34.835Z · EA · GW

It’s all somewhat mixed up - highly targeted advocacy is a great way to build up capacity, because you get to identify close allies, can do small-scale testing without too much risk, join more exclusive networks because you’re directly endorsed by “other trusted actor X”, etc.

Our targeted advocacy will remain general for now - as in “the long-term future matters much more than we are currently accounting for” and “global catastrophic threats are grossly neglected”. With increasing experience and clout, it will likely become more concrete. 

Until then, we think advocating for specific recommendations at the process level, i.e. offering decision-making support, is a great middle way that preserves option value. We are about something very tangible, have more of a pre-existing knowledge base to work with, do not run into conflicts of interest and can incrementally narrow down the most promising pathways for more longtermist advocacy.

Regarding public advocacy: given that we interact mostly with international civil servants, there aren’t any voting constituencies to mobilize. If we take 'public advocacy' to include outreach to a larger set of actors - NGOs, think tanks, diplomatic missions, staff unions and academics - then yes, we have considered targeted media campaigns. That could be impactful in reframing issues/solutions and redirecting attention once we’re confident about context-appropriate messaging.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:53:06.936Z · EA · GW

Yup, the portfolio approach makes a lot of sense to us. Also, as always, thanks for the summary and links!

A big question is how to define “extremely nearby”. Within the next 5 years, SI should be in a position to directly take meaningful action. Ironically, given SI’s starting point, making short-term action the main goal seems like it could make it less likely to attain the necessary capacity. There’s just no sustainable way in which a new actor can act urgently, as they first have to “stand the test of time” in the eyes of the established ones. 

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:52:51.737Z · EA · GW

Yeah, public attention can also be a carrot, not just a stick. But it’s a carrot that grows legs and will run its own way, possibly making it harder when you want to change course upon new learnings.

Our current take here is something like “public advocacy doesn’t create windows of opportunity, it creates windows of implementation”. When public pressure mounts, policymakers want to do something to signal they are trying. And they will often do whatever looks best in that moment. It would only be good to pressure once proposals are worked out and just need to be “pushed through”.

To influence agendas, it seems better, at least mid-term, to pursue insider strategies. However, if all you have is one shot, then you might as well try public advocacy for reprioritization and hope it vaguely goes into the right direction. But if you think there’s time for more targeted and incremental progress, then the best option probably is to become a trusted policy actor in your network of choice.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:52:15.865Z · EA · GW

Thanks a lot for the compliments! Really nice to read.

The metrics are fuzzy, as we have yet to establish the baselines. We will do that by the end of September 2021 via our first pilots, so that we then have one year of data to collect for impact analysis.

The board has full power over the decision of whether to continue SI’s existence. In Ralph Hertwig’s words, their role is to figure out whether we “are visionary, entirely naïve, or full of cognitive biases”. For now, we are unsure ourselves. What exactly happens next will depend on the details of the conclusion of the board.

Comment by konrad on New Top EA Causes for 2021? · 2021-04-01T14:32:01.601Z · EA · GW

I prefer the lower pitch "wob-wob-wob" and thus would like to make a bid to simply rename Robert Wiblin to "the Wob". Maybe Naming What We Can could pick this up?

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:46:30.659Z · EA · GW

Hi Khorton, thanks for the pointer - we will make sure to update. Is there something you'd be particularly keen on reading? We're happy to share drafts - just drop me an email.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:44:15.762Z · EA · GW

4. Two of our forthcoming working papers deal with “the evidence underlying policy change” and “strategies for effective longtermist advocacy”. A common conclusion that could deserve more scrutiny is the relative effectiveness of insider vs outsider strategies (insiders directly work within policy networks and outsiders publicly advocate for policy change). Insider strategies seem more promising. What is well-validated, especially in the US, is that the budget size of advocacy campaigns does not correlate with their success. However, an advocate’s number of network connections and their knowledge of institutions do correlate with their performance. These findings are also consistent with this systematic review on policy engagement for academics.  

As it’s not our top priority, we’re happy to share what we’ve got with somebody who has the capacity to pick this up. To do so, get in touch with Max (

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:43:30.595Z · EA · GW

3. I sympathize strongly with the feeling of urgency but it seems risky to act on it, as long as the longtermist community doesn’t have fully elaborated policy designs on the table that can simply be lobbied into adoption and implementation. 

Given that the design of policies or institutional improvements requires a lot of case-specific knowledge, we see this as another reason to privilege high-bandwidth engagement. In such settings, it’s also possible to become policy-entrepreneurs who can create windows of opportunity, instead of needing to wait for them. 

Whenever there are large-scale windows of opportunity (e.g. a global pandemic causing significant budget shifts), we’d only be confident in attempting to seize them in a rushed manner if (a) the designs are already on the table and just need to be adopted/implemented or if (b) we were in the position to work in direct collaboration with the policymakers. Of course, SI leverages COVID-19 in its messaging but that’s to make its general case, for now.

If an existential catastrophe is happening very soon, SI is not in a position to do much beyond supporting coordination and networking of key actors (which we’re doing). Being overly alarmist would quickly burn the credibility we have only begun to consolidate. Other actors are in positions with higher leverage and we hope to be able to support them indirectly. Overall, we see most of SI’s impact potential 5-20 years down the line - with one potential milestone being the reassessment of the 2030 UN Agenda.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:42:25.575Z · EA · GW

2. You’re right. We’re assuming that policy analysis is being done by more and more organizations in increasing quantities. Highly targeted advocacy is well within the scope of what we mean by “building capacity locally”. There are some things one can propose to advance discussions (see e.g. Toby Ord’s recent Guardian piece). The devil is in the details of these proposals, however. Translating recommendations into concrete policy change isn’t straightforward and highly contextual (see e.g. missteps with LAWS). As advocacy campaigns can easily take on a life of their own, it seems highest leverage to privilege in-depth engagement at this point in time. 

Toby’s Guardian article is an interesting edge case, as it could be seen as “advocacy campaign”-ish. But given its non-sensationalist nature and fit with the UK’s moves towards a national health security agency - in which a bunch of EAs seem to be involved anyway - that’s a well-coordinated multilevel strategy that seems unlikely to catch on fire.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:42:05.686Z · EA · GW

1. Quick definitions first, an explanation below. “Policy engagement”: interacting with policy actors to advance specific objectives; “start locally”: experimenting with actions and recommendations in ways that remain within the scope of organizational influence; “organizational capacity”: the capability to test, iterate and react to external events in order to preserve course.

Achieving policy change requires the organizational capacity to sustain engagement for indefinite amounts of time: organizations need (a) sufficient standing within, or strong connections to, the relevant networks in order to be listened to, and (b) the funding to hire staff with the appropriate experience to react to what arises.

For example, we wrote this announcement because input from the EA community is of high quality and worth engaging with. If, instead, we had written a big online newspaper announcement for international Geneva and beyond, the reactions would likely have been more overwhelming and interactions more likely to harm SI’s standing than here. This illustrates one way in which SI currently lacks the “capacity” to react to big events in its direct environment and thus needs to build up first.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:39:17.023Z · EA · GW

I really liked this comment. I will split up my answer into separate comments to make the discussion easier to follow. Thanks also for sharing Hard-to-reverse decisions destroy option value - I hadn't read it, and it seems under-appreciated.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-20T19:45:11.168Z · EA · GW

Thanks, I do this wherever possible. Strong upvote for the practical usefulness of the comment.

There are cases, though, where the core problem is not the ability to record but a lack of appreciation for the value of making things explicit and documenting them as such. Then I can record all I want one-sidedly; it won't shape my environment in the way I want it to.

That's why I'm asking about the appreciation aspect in particular. I think a lot of the gains from attitudes that are common in EA are simply lost in many other circles because people don't have the same commitment to growth.

This is especially the case when you alone can't do much but need a whole group to buy into this attitude. That's also why I'm less interested in meetings that are clearly limited to 1-1 exchange. There are settings where you need to asynchronously update multiple people, and explicit communication would be much better, yet people seem to have a clear preference for 1-1 calls etc.

I'm also not talking about situations where you can impose your norms - but rather about situations where you have to figure out carefully how to go meta while avoiding triggering any individual's defensiveness to then level up the group as a whole.

Essentially, I guess, I'm interested in case studies for what pieces are missing in people's models that this seems so hard for many groups outside of EA. The answers here have already given some insight into it.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-18T17:26:02.689Z · EA · GW

it also allows people to qualify and clarify thinking as they go, resulting in what feels like a smooth evolution of thinking as opposed to the seemingly discontinuous and inelegant show of changing your mind after being corrected or learning new information via asynchronous communication.

This gets exactly to the core of the potential I see: groups get stuck in a local equilibrium where progress happens and everybody is content, but the payoff from going meta and improving self-knowledge and transparency would compound over time - and that seems easier to achieve in written form, exactly because people can't ignore their kinks. And that seems harmful at first (because vulnerability does), but in many environments it could easily lead to very productive dynamics: everybody can help one another become the best possible version of themselves and insure each other more easily.


Comment by konrad on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-11T08:13:23.327Z · EA · GW

Thanks for the feedback! I gave it another pass. Is there anything concrete that threw you off or still does? I'd appreciate pointers as I had other people look at it before.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-11T07:22:10.033Z · EA · GW

Yeah, agreed that your conclusion applies to the majority of interactions from a 1-off perspective.

But I observe a decent number of cases where it would be good to have literal documentation of statements, takeaways etc., because otherwise you'll have to have many more phone calls.

I'm especially thinking of co-working and other mutually agreed upon mid- to long-term coordination scenarios. In order to do collective world-modelling better, i.e. to find cruxes, prioritize, research, introspect, etc., it seems good to have more bandwidth AND more memory. But people routinely choose bandwidth over memory, without having considered the trade-off. 

I suspect that this is an unconscious choice and often leads to quite suboptimal outcomes for groups, as they become reliant on human superconnectors and those people's memory. As a local community-builder, this is what I am. And I can't trust my memory, so I outsource most of it in ways that are legible mostly to me, as it would be too costly to make it legible for others too.

It is these superconnectors who have a disproportionate effect on the common knowledge and overall culture of a group. If the culture is being developed purposefully, you'd want really good documentation of it to remind, improve and onboard. 

Instead, most groups seem to have to rely on leadership and oral communication to coordinate. In part this might be because the pay-off of good documentation and building a culture that uses it is so long-term, that few are currently willing to pay for it?

I am essentially wondering about the causal relationship here: are we (a) not paying for more resource-intensive coordination systems because we consciously aren't convinced of the value/possibility of it or are we (b) not convinced of the value/possibility of more resource-intensive coordination systems because we haven't actually tried all that much yet?

I suspect that we're in the scenario of "not actually having tried enough" because of a) general culture and norms around communication that discourage trying and b) only having had the necessary level of tech adoption to even make this a possibility for <20 years.

Communities of people with mostly technical backgrounds seem to fare massively better on the existence of asynchronous and formal coordination mechanisms than most other groups (e.g. GitLab's remote culture). Is this because these people are a specific kind of person? Is it because they've been trying harder/for longer? How easily transferable is their culture? What does it take to make it more popular? Or do we believe this attempt is doomed to fail? If so, why?

And if we agree that this seems valuable to popularize, then why is it so hard to mobilize the necessary resources to make it happen more? Is it just general inertia or is there more?

I suspect that any single individual making your observation is right for any single instance, but at the collective level and across time, I would be surprised if the calculus holds.

Comment by konrad on konrad's Shortform · 2021-03-10T20:08:30.563Z · EA · GW

Evaluating the UN based on news from the Security Council is like evaluating the US government based on news from Hollywood.

The SC is a circus, but the UN fosters lots of multilateral progress through meetings you don't hear about because everybody's scared of showing that they just want world peace in a world where realism reigns. 

Hollywood shows American superheroes fighting evil, while the government tries to operationalize the coordination of 300 million people. Sure, Hollywood memes might foster popular American-dream narratives and the government fails regularly, but it's doing surprisingly well, even when a Hollywood-esque clown was its face for four years.

UN soft power has given us beautiful multilateralism, the same interconnectedness that lets a virus spread faster than any before it, even more virulent ones. Take a moment to appreciate it when you brush your teeth: our toothbrushes, too, are the product of a crazy amount of coordination that no national-security-focused perspective would have enabled.

The national security perspective is a potentially harmful self-fulfilling prophecy and the people meeting in Geneva to undermine it or those running governments as a career are much closer to superheroes than most others. 

Of course, the UN is highly dysfunctional in many ways, but there's also a lot of room for improvement. Of course, there's Musk et al., but they are crazy outliers, unreachable for most.

I think, if more EAs focused on international governance and diplomacy, the UN system would already be substantially better off. From what I've seen so far, what it's lacking most is memetic leadership and EA+LW have done a great job at reifying values through memes that would actually catch on and could be well-maintained in the cosmopolitan UN context.

Comment by konrad on konrad's Shortform · 2021-03-10T20:01:13.745Z · EA · GW

Pre- vs post-Cuban-Missile-Crisis Kennedy quotes, illustrating an all-too-common development pattern I have observed in people who dabble in world-improvement: they start out extremely determined to do Good and end up simply reminding themselves of our humanity, in a somewhat desperate way, holding onto the last straw of hope they could find.


> We dare not tempt them with weakness. For only when our arms are sufficient beyond doubt can we be certain beyond doubt that they will never be employed.


> weapons acquired for the purpose of making sure we never need to use them is essential to keeping the peace. But surely the acquisition of such idle stockpiles--which can only destroy and never create--is not the only, much less the most efficient, means of assuring peace


> So let us begin anew--remembering on both sides that civility is not a sign of weakness, and sincerity is always subject to proof. Let us never negotiate out of fear. But let us never fear to negotiate.


> For, in the final analysis, our most basic common link is that we all inhabit this small planet. We all breathe the same air. We all cherish our children's future. And we are all mortal.

This is essentially what I see in many parts of international organization circles: lots of people who were very serious about improving the world in the past but are now mostly reassuring each other that they are good people instead of busting their asses to make sure they *actually* try.

Unfortunately, this seems mainly due to the innate human need for safety and acceptance. Needy humans produce bad culture. Even though we're living in abundance, most of us do not perceive the world as such.

My two-step plan: 

1. Engineer my local bubble such that I am constantly reminded of living in abundance. 

2. Figure out interventions at increasingly larger scales to get more and more people to do the same, more and more easily.

Quotes from:

President John F. Kennedy's 1961 Inaugural Address: …


Comment by konrad on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-10T19:37:43.221Z · EA · GW

[Epistemic status: patchwork of unsystematically tracked personal impressions since first encountering EA in 2014, noted down over the course of a work day]

So here's an attempt at partially explaining, from a historical perspective, why it might be getting more difficult to fulfil the necessary conditions to start independent EA projects without burning your "EA" career capital (and why that might have been different in the early days).

This perspective seems important if true because it would imply making more of an effort to update common EA career advice and culture.  

It is only a partial answer, and I am sharing this because I would appreciate other perspectives.


With the increasing establishment of any field, such as EA, the likelihood of success of new projects decreases while the gravity of failure for first-time founders increases. If you aren't yet part of one of the core groups that provide a safety net for their entrepreneurs, I would build career capital via safer paths first.

Trust-based networks scale badly

Networks provide access to their resources in a mostly trust-based manner. Resources are often allocated in somewhat formalized ways but still heavily reliant on personal references.

Verification of alignment is always costly

The ways in which a trust-based network can grow are limited: either through explicit entry criteria or through recommender-systems. Designing explicit entry criteria that work is hard, so our civilization mostly relies on opaque arbitration systems running on human brains.

EA organizations and thought leaders do an extraordinary job at documenting their thinking. But no matter how well you are aligned, verification will always remain fairly costly.

Core groups are increasingly hard to access

As an EA founder, you have to make verification of your alignment as cheap as possible. That's costly for a resource-strapped project in its starting phase. Thus, you want to get maximally relevant feedback - ideally from the gatekeepers of the network you want to join.

But if you are on the other side of things, somebody who has a lot of "EA capital", and get asked for feedback from someone you barely know, you are unlikely to engage. You don't have enough time to properly vet them and their project. Being even associated with it could suggest you endorse it - and you have better things to do than to create a nuanced write-up explaining your engagement and lessons learned from it.

As a result, important EA people will not engage with a cool new independent project unless the founder has sufficient "EA capital" prior to going "off track".

Losses look worse relative to safe bets

Even with relevant feedback it is difficult to build up a successful project. And if you don't have much status, few will recognize a worthy attempt in case of failure - and least likely the busy important people.

Unless you are already highly skilled or have a lot of resources to bring your project to fruition, you're likely to fail at a stage that doesn't provide much information about your skill-level. In that case, any failure provides at least a little bit of information about you not being good enough (assuming success is not pure luck).

As there are more and more smart youngsters, it gets more and more difficult not to be burnt by failure simply because failure looks worse relative to everyone else who is playing it safe or has succeeded.

EA used to be different because it was young

In the early days of EA, a handful of smart youngsters could start a bunch of projects because everyone knew everyone else. There was a lot of excitement about people exploring. The early entrepreneurs received status just by starting things - even if they weren't immediately successful.

Today, just starting a new project gets you less and less "EA capital". This is because the network has grown and the space has been carved out. Growth of the network means more unknown people, and thus core groups are becoming more prudent about whom to extend their trust to.

This is not a bad thing; it's a sign of maturity. I just worry that it hasn't been made explicit often enough:

You're more likely not to get credit for trying nowadays because it's harder to interact with the key nodes of the network, as they now have to protect themselves more vigilantly from incurring significant opportunity and reputational costs.


If this were the dominant dynamic, entrepreneurship would not be worth going into for most people who currently self-identify as EAs. Especially the younger ones, let's say under 30. Even more so if being part of EA is already considered weird in your support circle. Not having a "proper" job might cost you too much social capital to have a larger impact later on in life (when you'd likely have most of your impact).

Given EA demographics, there are only a few highly skilled, wealthy, or well-supported people who can afford to resist these incentive structures. Most are better off continuing on the beaten career paths until they have accumulated enough capital to not lose status when taking risky bets - no matter how well calculated.

Comment by konrad on konrad's Shortform · 2021-03-10T19:11:08.295Z · EA · GW

I am somewhat disappointed by Yuval Noah Harari's Lessons from a year of Covid (…). He says many great things in the article but furthers a weird misconception of political decision-making.

Two quotes to illuminate what bothers me:

> One reason for the gap between scientific success and political failure is that scientists co-operated globally, whereas politicians [...] have failed to form an international alliance against the virus and to agree on a global plan.


> a global anti-plague system already exists in [...] the World Health Organization and several other institutions. [...] We need to give this system some political clout and a lot more money, so that it won’t be entirely dependent on the whims of self-serving politicians.

I agree with his framing of this being a political failure, but I think he's drawing a false dichotomy that perpetuates the problem. Scientists have an easy time collaborating on COVID-related matters because it's mainly **natural** scientists, who all more or less agree on something like scientific realism. Scientists utterly fail in political arenas all the time. There is no way around politics, given this planet's diverse societies, so claiming that politicians are self-serving is a little bit of a dick move.

At their core, scientists are as self-serving as politicians. It's just that the political environment compounds minor discrepancies into giant gaping cleavages, in the face of which any single individual feels fear and the need to save their own ass first. To compare political failure to scientific success, we should look not at the individuals but at their environment, unless we have reason to believe that selection effects direct a super disproportionate number of sociopaths into politics. That doesn't seem very plausible.

We need to foster a culture that evaluates policy-makers and politicians based on their decision-making processes, not outcomes. Essentially, a culture closer to the ideal of science: what matters most are your methods and even negative results can be great learning. One key problem here is a problem scientists routinely avoid: public relations. When you're being judged on outcomes, massive uncertainty incentivizes you to do the most defensible thing - which usually doesn't correlate with upside potential.

The trick we need to achieve: make the most defensible thing not the least risky but the most methodological. Everybody can learn to sit with uncertainty and communicate it. It doesn't matter if the broader public is demanding certainty. What matters is the immediate environment of policy-makers and politicians - their teams and institutions. Each and every one of us can impact that incentive landscape. We can vote, we can advocate, we can become policy actors ourselves. 

But whatever you do, the first step is to not criticize the people who are usually really trying their best under many many constraints. Going into science is, in some ways, the route of least resistance for anybody who values truth. If you have the capacity: play in hard mode, go into international policy-making and incrementally change the face of it such that it becomes an easy mode for future generations.

[crossposted from my twitter]

Comment by konrad on Local Community Building Funnel and Activities - EA Geneva · 2021-02-22T16:59:46.212Z · EA · GW

Hi Markus, only just saw this, sorry! 

Might still be helpful: you can find somewhat more extensive answers in our annual reports.

In short:

We have quite good engagement data now since starting a Zulip chat server, which allows better tracking of activity. We have stopped running individual workshops and replaced them with a standardized intro seminar series and a personalized fellowship program.

The core group of heavily-involved individuals is still growing: >30 people now, which is more than double what we had at the time of the previous comment. With 10-20 core members a year leaving, we have to have 20-30 new core members each year to keep up the growth, which is within the bounds of my growth projections for the most engaged member segment (people who join our fellowship). 

I suspect growth will start stagnating in 0.5-2 years and deepening engagement will remain the main focus of staff, as it has been more and more so for the past ~1.5 years. This is because the community size is becoming more self-sustaining, due to a critical mass of people who provide organic outreach and retention. 

Comment by konrad on Make a $10 donation into $35 · 2020-12-04T13:51:47.733Z · EA · GW

Just got an email saying 230k is still left, so it's worth pushing this further.

Comment by konrad on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-27T11:41:08.499Z · EA · GW

"I have a dream," said Harry's voice, "that one day sentient beings will be judged by the patterns of their minds, and not their color or their shape or the stuff they're made of, or who their parents were. Because if we can get along with crystal things someday, how silly would it be not to get along with Muggleborns, who are shaped like us, and think like us, as alike to us as peas in a pod? The crystal things wouldn't even be able to tell the difference. How impossible is it to imagine that the hatred poisoning Slytherin House would be worth taking with us to the stars? Every life is precious, everything that thinks and knows itself and doesn't want to die. Lily Potter's life was precious, and Narcissa Malfoy's life was precious, even though it's too late for them now, it was sad when they died. But there are other lives that are still alive to be fought for. Your life, and my life, and Hermione Granger's life, all the lives of Earth, and all the lives beyond, to be defended and protected, EXPECTO PATRONUM! " 

And there was light.

Harry Potter and the Methods of Rationality, Chapter 47: Personhood Theory

Comment by konrad on Introducing Probably Good: A New Career Guidance Organization · 2020-11-16T18:56:18.230Z · EA · GW

Yes, happily!

Comment by konrad on Introducing Probably Good: A New Career Guidance Organization · 2020-11-11T15:04:31.891Z · EA · GW

As a data point:

We have organized different "collective ABZ planning sessions" in Geneva that hinge on peer feedback given in a setting I would call a light version of CFAR's Hamming circles.

This has worked rather well so far and, with efficient pre-selection of the participants, can probably scale quite well. We tried to do so at the Student Summit and it seemed to be useful to 100+ participants, even though we didn't get to collect detailed feedback in the short time frame.

Already providing the Schelling point for people to meet, pre-selecting participants & improving the format  seems potentially quite valuable.

Comment by konrad on Can my self-worth compare to my instrumental value? · 2020-10-17T12:50:29.482Z · EA · GW

I think we can assume that people on this forum seek truth and personal growth. Of course, this is challenging for all of us from time to time.

I think having a norm of speaking truthfully and not withholding information is important for community health. Each one of us has to assume the responsibility of knowing our own boundaries and pushing them within reasonable bounds, as few others can be expected to know ourselves well enough. Combined with the fact that in this case people have consciously decided to *opt in* to the discussion by posting a comment, I would think it overly cautious to refrain from replying.

There surely are edge cases that are more precarious and deserve tailored thought but I think this isn't one.

If you know somebody well enough to think they are pushing their boundaries in unsustainable ways, I would reach out to them and mention exactly that thought in a personal message. Add some advice on how to engage with the community and its norms sustainably, link to posts like this showing that we all struggle with similar problems, and then people can also work through possible problems regarding "not feeling good enough".

Personally, I'd rather be forced to live in reality than be protected because people worry I might not be able to come to grips with it. One important reason for which I like the EA community is that it feels like we all have consented to hearing the truth, even if it might be uncomfortable and imply labour.

Comment by konrad on SHOW: A framework for shaping your talent for direct work · 2019-05-27T15:15:00.890Z · EA · GW

Didn't downvote but my two cents:

I am unsure about the net value of encouraging people to simply need less management and wait for less approval.

  • Some (most?) people do need guidance until they are able to run projects independently and successfully, ignoring the need doesn't make it go away.
  • The unilateralist's curse is scary. A lot of decisions about EA network growth and strategy that the core organizations have come to are rather counter-intuitive to most of us until we got the chance to talk it through with someone who has spent significant amounts of time thinking about them.
  • Even with value-aligned actors, coordination might become close to impossible if we grow the number of nodes without accelerating the development of culture. I currently prefer preserving the option of coordination being possible over "many individuals try different things because coordination seemed too difficult a problem to overcome".

Comment by konrad on Is Suffering Convex? · 2018-11-23T18:51:01.426Z · EA · GW

I do think we might be able to collapse the dimensions, and I don't claim that intensities, especially at the extreme ends, are equal. Let me try to put it differently: depending on how we collapse the dimensions into one, we could end up with the more complex individuals having larger scales. Ergo they could weigh more in our calculus.

A being's expression of intensity is probably always relative to its individual scale. I guess I don't understand how that is necessarily much of an indicator of the absolute intensity of the experience. Is that where we actually diverge?

Comment by konrad on Book Summary: "Messages: The Communication Skills Book" Part I & II · 2018-11-09T20:01:44.259Z · EA · GW

> I think I disagree with trying to surround yourself with people who 'suck it up' as it may make it harder to talk to everyone else if you slip into conversational norms with people who never get offended or annoyed.

I think that depends on what we mean by "surround yourself". I was thinking of my five closest friends. Or would you categorically avoid it? I think there's a threshold of a number of relationships underneath which a blunt communication style doesn't quite become your default.

[EDIT, two years later: am pretty convinced that shifting as large a social circle as possible towards more whole-hearted, blunt-yet-humane conversation norms is desirable and possible.]

Comment by konrad on Is Suffering Convex? · 2018-11-05T11:42:00.858Z · EA · GW

1) I don't think we can say much about intensity either. But let's assume that intensity is equal for fully conscious entities (whatever that means). If we then assume that there might be different dimensions to suffering, more sophisticated beings could suffer on "more (morally relevant) levels" than less sophisticated beings.

2) I also think it matters to other forms of consequentialism through flow-through effects: highly resilient beings are capable of more effectively helping those who aren't.

Comment by konrad on Is Suffering Convex? · 2018-10-22T14:32:03.158Z · EA · GW

Thanks for starting this discussion on here! I feel like part of your conclusion could also go in the opposite direction:

  • Animals could be less morally important because their suffering is less sophisticated in some morally relevant sense.
  • Increasing lifespans of sophisticated beings now who have built the capacity to cope well with pain could be a great intervention (before the intelligence explosion).