Posts

Introducing the Simon Institute for Longterm Governance (SI) 2021-03-29T18:10:12.483Z
How to make people appreciate asynchronous written communication more? 2021-03-10T19:53:24.668Z
Is pursuing EA entrepreneurship becoming more costly to the individual? 2021-03-10T19:35:59.482Z
konrad's Shortform 2021-03-10T19:11:07.902Z
Call for feedback and input on longterm policy book proposal 2020-07-07T15:20:00.854Z
Tactical models to improve institutional decision-making 2019-01-24T01:24:42.329Z
Book Summary: "Messages: The Communication Skills Book" Part I & II 2018-11-09T08:47:39.172Z
Better models for EA development: a network of communities, not a global community 2018-09-18T09:33:24.673Z
Local Community Building Funnel and Activities - EA Geneva 2018-08-09T08:22:13.543Z
Why you should consider going to EA Global 2017-05-09T14:31:18.798Z
Meetup : Geneva, Ethnobar 2015-04-14T14:12:23.365Z

Comments

Comment by konrad on If you had a large amount of money (at least $1M) to spend on philanthropy, how would you spend it? · 2021-09-21T15:32:36.085Z · EA · GW

Dear Khorton, I just wanted to say thank you for this vote of confidence - it is very motivating to see civil servants who think we're on to something.

Comment by konrad on konrad's Shortform · 2021-07-20T07:58:41.120Z · EA · GW

Our World in Data has published two great posts this year, highlighting how the often-proposed dichotomy between economic growth and sustainability is false.

In The economies that are home to the poorest billions of people need to grow if we want global poverty to decline substantially, Max Roser points out that given our current wealth,

the average income in the world is int.-$16 per day

That is far below what we'd think of as the poverty line in developed countries. This means that mere redistribution of what we have is insufficient - we'd all end up poor and unable to develop much further because we're too occupied with mere survival. In How much economic growth is necessary to reduce global poverty substantially?, he writes:

I found that $30 per day is, very approximately, the level below which people are considered poor in high-income countries.

In the section Is it possible to achieve both, a reduction of humanity’s negative impact on the environment and a reduction of global poverty?, he adds:

As you will see in our writing there are several important cases in which an increased consumption of specific products gets into unavoidable conflict with important environmental goals; in such cases we aim to emphasize that we all as individuals, but also entire societies, should strongly consider to reduce the consumption of these products – and thereby reduce the use of specific resources and forgo some economic growth – to achieve these environmental goals. We believe a clear understanding of which specific reductions in production and consumption are necessary to reduce our impact on the environment is a much more forceful approach to reducing environmental harm than an unspecific opposition to economic growth in general.

So for discussions on how to approach individual "consumption" or policymaking around it, we could start a list of specific products to avoid. Would somebody be up for compiling this? It would be a resource I'd link to quite regularly. You can apparently just extract them from the 13 links Max Roser provides just above the cited paragraph. It would make for a great, short and crisp EA Forum post, too.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-07-01T15:12:04.731Z · EA · GW

To avoid spamming more comments, one final share: our resource repository is starting to take shape.  Two recent additions that might be of use to others:

In the works: a brief guide to decision-making on wicked problems, an analysis of 28 policymaker interviews on "decision-making under uncertainty and information overload" and a summary of our first working paper.

We have set up an RSS feed for the blog (or just subscribe to the ~quarterly newsletter).

And last but not least, we now have fiscal sponsorship for tax-deductible donations from the US, UK and the Netherlands via EA Funds and a lot of room for more funding.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-06-24T22:43:14.956Z · EA · GW

We have published a few additional blog posts of interest:

Comment by konrad on Effective charities for improving institutional decision making and improving global coordination · 2021-05-24T09:15:08.789Z · EA · GW

Disclaimer: I am a co-founder.

The Simon Institute for Longterm Governance. We help international civil servants understand individual and group decision-making processes to foster the metacognition and tool-use required for tackling wicked problems like global catastrophic risks and the representation of future generations.

We have a well-researched approach and direct access to senior levels in most international organizations. Given that we just launched, we have no sense of our effectiveness yet but hope to provide a guesstimate by 2023.

You can donate to us here.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-05-02T16:44:44.757Z · EA · GW

Hi! We uploaded drafts for two pieces last week: 

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:53:34.835Z · EA · GW

It's all somewhat mixed up - highly targeted advocacy is a great way to build up capacity because you get to identify close allies, can do small-scale testing without too much risk, join more exclusive networks because you're directly endorsed by "other trusted actor X", etc.

Our targeted advocacy will remain general for now - as in “the long-term future matters much more than we are currently accounting for” and “global catastrophic threats are grossly neglected”. With increasing experience and clout, it will likely become more concrete. 

Until then, we think advocating for specific recommendations at the process level, i.e. offering decision-making support, is a great middle way that preserves option value. That way, we advocate for something very tangible, have more of a pre-existing knowledge base to work with, do not run into conflicts of interest, and can incrementally narrow down the most promising pathways for more longtermist advocacy.

Regarding public advocacy: given that we interact mostly with international civil servants, there aren’t any voting constituencies to mobilize. If we take 'public advocacy' to include outreach to a larger set of actors - NGOs, think tanks, diplomatic missions, staff unions and academics - then yes, we have considered targeted media campaigns. That could be impactful in reframing issues/solutions and redirecting attention once we’re confident about context-appropriate messaging.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:53:06.936Z · EA · GW

Yup, the portfolio approach makes a lot of sense to us. Also, as always, thanks for the summary and links!

A big question is how to define “extremely nearby”. Within the next 5 years, SI should be in a position to directly take meaningful action. Ironically, given SI’s starting point, making short-term action the main goal seems like it could make it less likely to attain the necessary capacity. There’s just no sustainable way in which a new actor can act urgently, as they first have to “stand the test of time” in the eyes of the established ones. 

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:52:51.737Z · EA · GW

Yeah, public attention can also be a carrot, not just a stick. But it's a carrot that grows legs and will run its own way, possibly making it harder to change course upon new learnings.

Our current take here is something like "public advocacy doesn't create windows of opportunity, it creates windows of implementation". When public pressure mounts, policymakers want to do something to signal they are trying. And they will often do whatever looks best in that moment. It would only be good to apply pressure once proposals are worked out and just need to be "pushed through".

To influence agendas, it seems better, at least mid-term, to pursue insider strategies. However, if all you have is one shot, then you might as well try public advocacy for reprioritization and hope it vaguely goes in the right direction. But if you think there's time for more targeted and incremental progress, then the best option is probably to become a trusted policy actor in your network of choice.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-04-02T07:52:15.865Z · EA · GW

Thanks a lot for the compliments! Really nice to read.

The metrics are fuzzy as we have yet to establish the baselines. We will do that by the end of September 2021 via our first pilots, so that we then have one year of data to collect for impact analysis.

The board has full power over the decision of whether to continue SI's existence. In Ralph Hertwig's words, their role is to figure out whether we "are visionary, entirely naïve, or full of cognitive biases". For now, we are unsure ourselves. What exactly happens next will depend on the details of the board's conclusion.

Comment by konrad on New Top EA Causes for 2021? · 2021-04-01T14:32:01.601Z · EA · GW

I prefer the lower pitch "wob-wob-wob" and thus would like to make a bid to simply rename Robert Wiblin to "the Wob". Maybe Naming What We Can could pick this up?

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:46:30.659Z · EA · GW

Hi Khorton, thanks for the pointer - we will make sure to update. Is there something you'd be particularly keen on reading? We're happy to share drafts - just drop me an email at konrad@simoninstitute.ch.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:44:15.762Z · EA · GW

4. Two of our forthcoming working papers deal with “the evidence underlying policy change” and “strategies for effective longtermist advocacy”. A common conclusion that could deserve more scrutiny is the relative effectiveness of insider vs outsider strategies (insiders directly work within policy networks and outsiders publicly advocate for policy change). Insider strategies seem more promising. What is well-validated, especially in the US, is that the budget size of advocacy campaigns does not correlate with their success. However, an advocate’s number of network connections and their knowledge of institutions do correlate with their performance. These findings are also consistent with this systematic review on policy engagement for academics.  

As it’s not our top priority, we’re happy to share what we’ve got with somebody who has the capacity to pick this up. To do so, get in touch with Max (max@simoninstitute.ch).

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:43:30.595Z · EA · GW

3. I sympathize strongly with the feeling of urgency but it seems risky to act on it, as long as the longtermist community doesn’t have fully elaborated policy designs on the table that can simply be lobbied into adoption and implementation. 

Given that the design of policies or institutional improvements requires a lot of case-specific knowledge, we see this as another reason to privilege high-bandwidth engagement. In such settings, it’s also possible to become policy-entrepreneurs who can create windows of opportunity, instead of needing to wait for them. 

Whenever there are large-scale windows of opportunity (e.g. a global pandemic causing significant budget shifts), we’d only be confident in attempting to seize them in a rushed manner if (a) the designs are already on the table and just need to be adopted/implemented or if (b) we were in the position to work in direct collaboration with the policymakers. Of course, SI leverages COVID-19 in its messaging but that’s to make its general case, for now.

If an existential catastrophe is happening very soon, SI is not in a position to do much beyond supporting coordination and networking of key actors (which we’re doing). Being overly alarmist would quickly burn the credibility we have only begun to consolidate. Other actors are in positions with higher leverage and we hope to be able to support them indirectly. Overall, we see most of SI’s impact potential 5-20 years down the line - with one potential milestone being the reassessment of the 2030 UN Agenda.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:42:25.575Z · EA · GW

2. You're right. We're assuming that policy analysis is being done by a growing number of organizations and in increasing quantities. Highly targeted advocacy is well within the scope of what we mean by "building capacity locally". There are some things one can propose to advance discussions (see e.g. Toby Ord's recent Guardian piece). The devil is in the details of these proposals, however. Translating recommendations into concrete policy change isn't straightforward and is highly contextual (see e.g. missteps with LAWS). As advocacy campaigns can easily take on a life of their own, privileging in-depth engagement seems highest-leverage at this point in time.

Toby's Guardian article is an interesting edge case, as it could be seen as "advocacy campaign"-ish. But given its non-sensationalist nature and its fit with the UK's moves towards a national health security agency - in which a bunch of EAs seem to be involved anyway - it's a well-coordinated multilevel strategy that seems unlikely to spiral out of control.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:42:05.686Z · EA · GW

1. Quick definitions first, an explanation below. "Policy engagement": interacting with policy actors to advance specific objectives; "start locally": experimenting with actions and recommendations in ways that remain within the scope of organizational influence; "organizational capacity": the capability to test, iterate and react to external events in order to stay on course.

Achieving policy change requires the organizational capacity to sustain engagement for indefinite amounts of time, because organizations need (a) sufficient standing within, or strong connections to, the relevant networks in order to be listened to, and (b) the funding to hire staff with the experience to react to whatever arises.

For example, we wrote this announcement because input from the EA community is of high quality and worth engaging with. If, instead, we had written a big online newspaper announcement for international Geneva and beyond, the reactions would likely have been more overwhelming and the interactions more likely to harm SI's standing than they are here. This illustrates one way in which SI currently lacks the "capacity" to react to big events in its direct environment and thus needs to build up first.

Comment by konrad on Introducing the Simon Institute for Longterm Governance (SI) · 2021-03-30T14:39:17.023Z · EA · GW

I really liked this comment. I will split up my answer into separate comments to make the discussion easier to follow. Thanks also for sharing Hard-to-reverse decisions destroy option value - I hadn't read it and it seems under-appreciated.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-20T19:45:11.168Z · EA · GW

Thanks, I have this wherever possible. Strong upvote for the practical usefulness of the comment.

There are cases, though, where the core problem is not the ability to record but a lack of appreciation for the value of making things explicit and documenting them. In those cases, I can one-sidedly record all I want; it won't shape my environment in the way I want it to.

That's why I'm asking about the appreciation aspect in particular. I think there are a lot of gains from attitudes that are common in EA that are just lost in many other circles because people don't have the same commitment to growth. 

This is especially the case when you alone can't do much but need a whole group to buy into this attitude. That's also why I'm less interested in meetings that are clearly limited to 1-1 exchange. There are settings where you need to asynchronously update multiple people and explicit written communication would be much better, yet people seem to have a clear preference for 1-1 calls etc.

I'm also not talking about situations where you can impose your norms - but rather about situations where you have to carefully figure out how to go meta without triggering any individual's defensiveness, in order to level up the group as a whole.

Essentially, I guess, I'm interested in case studies of which pieces are missing from people's models, such that this seems so hard for many groups outside of EA. The answers here have already given some insight into it.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-18T17:26:02.689Z · EA · GW

it also allows people to qualify and clarify thinking as they go, resulting in what feels like a smooth evolution of thinking as opposed to the seemingly discontinuous and inelegant show of changing your mind after being corrected or learning new information via asynchronous communication.

This gets exactly to the core of the potential I see: groups get stuck in a local equilibrium where progress happens and everybody is content, but the payoff from going meta and improving self-knowledge and transparency would compound over time - and that seems easier to achieve in written form, exactly because people can't ignore their kinks. At first that seems harmful, because vulnerability does, but in many environments it could easily lead to very productive dynamics, because then everybody can help one another become the best possible version of themselves, more easily insure each other, etc.

Comment by konrad on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-11T08:13:23.327Z · EA · GW

Thanks for the feedback! I gave it another pass. Is there anything concrete that threw you off or still does? I'd appreciate pointers as I had other people look at it before.

Comment by konrad on How to make people appreciate asynchronous written communication more? · 2021-03-11T07:22:10.033Z · EA · GW

Yeah, agreed that your conclusion applies to the majority of interactions from a one-off perspective.

But I observe a decent number of cases where it would be good to have literal documentation of statements, takeaways, etc., because otherwise you'll have to have many more phone calls.

I'm especially thinking of co-working and other mutually agreed upon mid- to long-term coordination scenarios. In order to do collective world-modelling better, i.e. to find cruxes, prioritize, research, introspect, etc., it seems good to have more bandwidth AND more memory. But people routinely choose bandwidth over memory, without having considered the trade-off. 

I suspect that this is an unconscious choice and often leads to quite suboptimal outcomes for groups, as they become reliant on human superconnectors and those people's memory - as a local community-builder, this is what I am. And I can't trust my memory, so I outsource most of it in ways that are legible mostly to me, as it would be too costly to make it legible to others as well.

It is these superconnectors who have a disproportionate effect on the common knowledge and overall culture of a group. If the culture is being developed purposefully, you'd want really good documentation of it to remind, improve and onboard. 

Instead, most groups seem to have to rely on leadership and oral communication to coordinate. In part this might be because the pay-off of good documentation, and of building a culture that uses it, is so long-term that few are currently willing to pay for it.

I am essentially wondering about the causal relationship here: are we (a) not paying for more resource-intensive coordination systems because we consciously aren't convinced of the value/possibility of it or are we (b) not convinced of the value/possibility of more resource-intensive coordination systems because we haven't actually tried all that much yet?

I suspect that we're in the scenario of "not actually having tried enough" because of a) general culture and norms around communication that discourage trying and b) only having had the necessary level of tech adoption to even make this a possibility for <20 years.

Communities of people with mostly technical backgrounds seem to fare massively better on the existence of asynchronous and formal coordination mechanisms than most other groups (e.g. GitLab's remote culture). Is this because these people are a specific kind of person? Is it because they've been trying harder/for longer? How easily transferable is their culture? What does it take to make it more popular? Or do we believe this attempt is doomed to fail? If so, why?

And if we agree that this seems valuable to popularize, then why is it so hard to mobilize the necessary resources to make it happen more? Is it just general inertia or is there more?

I am afraid that any single individual will make your observation for any single instance, but at the collective level and across time, I would be surprised if the calculus still held.

Comment by konrad on konrad's Shortform · 2021-03-10T20:08:30.563Z · EA · GW

Evaluating the UN based on news from the Security Council is like evaluating the US government based on news from Hollywood.

The SC is a circus, but the UN fosters lots of multilateral progress through meetings you don't hear about because everybody's scared of showing that they just want world peace in a world where realism reigns. 

Hollywood shows American superheroes fighting evil, while the government tries to operationalize the coordination of 300 million people. Sure, Hollywood memes might foster popular American-dream narratives and the government fails regularly, but it's doing surprisingly well, even while a Hollywood-esque clown was its face for four years.

UN soft power has given us beautiful multilateralism that lets a virus spread faster than any before it, even more virulent ones - take a moment to appreciate that when you brush your teeth. Our toothbrushes, too, are the product of a crazy amount of coordination that no national-security-focused perspective would have enabled.

The national security perspective is a potentially harmful self-fulfilling prophecy, and the people meeting in Geneva to undermine it, or those running governments as a career, are much closer to superheroes than most others.

Of course, the UN is highly dysfunctional in many ways but there's also a lot of room for improvement. Of course, there's Musk et al, but they are crazy outliers, unreachable for most. 

I think, if more EAs focused on international governance and diplomacy, the UN system would already be substantially better off. From what I've seen so far, what it's lacking most is memetic leadership and EA+LW have done a great job at reifying values through memes that would actually catch on and could be well-maintained in the cosmopolitan UN context.

Comment by konrad on konrad's Shortform · 2021-03-10T20:01:13.745Z · EA · GW

Pre- vs post-Cuban-Missile-Crisis Kennedy quotes illustrating an all-too-common development pattern I have observed in people who dabble in world-improvement: they start out extremely determined to do Good and end up simply reminding themselves of our humanity, in a somewhat desperate way, holding onto the last straw of hope they could find.

Pre: 

> We dare not tempt them with weakness. For only when our arms are sufficient beyond doubt can we be certain beyond doubt that they will never be employed.

Post: 

> weapons acquired for the purpose of making sure we never need to use them is essential to keeping the peace. But surely the acquisition of such idle stockpiles--which can only destroy and never create--is not the only, much less the most efficient, means of assuring peace

Pre: 

> So let us begin anew--remembering on both sides that civility is not a sign of weakness, and sincerity is always subject to proof. Let us never negotiate out of fear. But let us never fear to negotiate.

Post: 

> For, in the final analysis, our most basic common link is that we all inhabit this small planet. We all breathe the same air. We all cherish our children's future. And we are all mortal.

This is essentially what I see in many parts of international organization circles: lots of people who were very serious about improving the world in the past but are now mostly reassuring each other that they are good people instead of busting their asses to make sure they *actually* try.

Unfortunately, this seems to be mainly due to the innate human need for safety and acceptance. Needy humans produce bad culture. Even though we're living in abundance, most of us do not perceive the world as such.

My two-step plan: 

1. Engineer my local bubble such that I am constantly reminded of living in abundance. 

2. Figure out interventions at increasingly larger scales to get more and more people to do the same, more and more easily.

Quotes from:

1961 President John F. Kennedy's Inaugural Address: https://ourdocuments.gov/doc.php?flash=false&doc=91&page=transcript… 

1963 Commencement Address at American University, Washington, D.C.: https://jfklibrary.org/archives/other-resources/john-f-kennedy-speeches/american-university-19630610

Comment by konrad on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-10T19:37:43.221Z · EA · GW

[Epistemic status: a patchwork of unsystematically tracked personal impressions since first encountering EA in 2014, noted down over the course of a work day]

So here's an attempt at partially explaining, from a historical perspective, why it might be getting more difficult to fulfil the necessary conditions for starting independent EA projects without burning your "EA" career capital (and why that might have been different in the early days).

This perspective seems important if true because it would imply making more of an effort to update common EA career advice and culture.  

It is only a partial answer, and I am sharing this because I would appreciate other perspectives.

tl;dr

With the increasing establishment of any field, such as EA, the likelihood of success of new projects decreases while the weight of failure for first-time founders increases. If you aren't yet part of one of the core groups that provide a safety net for their entrepreneurs, I would build career capital via safer paths first.

Trust-based networks scale badly

Networks provide access to their resources in a mostly trust-based manner. Resources are often allocated in somewhat formalized ways, but allocation still relies heavily on personal references.

Verification of alignment is always costly

The ways in which a trust-based network can grow are limited: either through explicit entry criteria or through recommender systems. Designing explicit entry criteria that work is hard, so our civilization mostly relies on opaque arbitration systems running on human brains.

EA organizations and thought leaders do an extraordinary job at documenting their thinking. But no matter how well you are aligned, verification will always remain fairly costly.

Core groups are increasingly hard to access

As an EA founder, you have to make verification of your alignment as cheap as possible. That's costly for a resource-strapped project in its starting phase. Thus, you want to get maximally relevant feedback - ideally from the gatekeepers of the network you want to join.

But if you are on the other side of things - somebody who has a lot of "EA capital" - and get asked for feedback by someone you barely know, you are unlikely to engage. You don't have enough time to properly vet them and their project. Even being associated with it could suggest you endorse it - and you have better things to do than to create a nuanced write-up explaining your engagement and the lessons learned from it.

As a result, the important EA people will not engage with a cool new independent project unless the founder has sufficient "EA capital" prior to going "off track".

Losses look worse relative to safe bets

Even with relevant feedback, it is difficult to build up a successful project. And if you don't have much status, few will recognize a worthy attempt in case of failure - least of all the busy, important people.

Unless you are already highly skilled or have a lot of resources to bring your project to fruition, you're likely to fail at a stage that doesn't provide much information about your skill level. In that case, any failure provides at least a little bit of evidence that you are not good enough (assuming success is not pure luck).

As there are more and more smart youngsters, it gets more and more difficult not to be burnt by failure simply because failure looks worse relative to everyone else who is playing it safe or has succeeded.

EA used to be different because it was young

In the early days of EA, a handful of smart youngsters could start a bunch of projects because everyone knew everyone else. There was a lot of excitement about people exploring. The early entrepreneurs received status just by starting things - even if they weren't immediately successful.

Today, just starting a new project gets you less and less "EA capital". This is because the network has grown and the space has been carved out. Growth of the network means more unknown people, and thus core groups are becoming more prudent about whom to extend their trust to.

This is not a bad thing; it's a sign of maturity. I just worry that it hasn't been made explicit often enough:

You're more likely not to get credit for trying nowadays because it's harder to interact with the key nodes of the network, as they now have to protect themselves more vigilantly from incurring significant opportunity and reputational costs.

Conclusion

If this were the dominant dynamic, entrepreneurship would not be worth going into for most people who currently self-identify as EAs. Especially the younger ones, let's say under 30. Even more so if being part of EA is already considered weird in your support circle. Not having a "proper" job might cost you too much social capital to have a larger impact later on in life (when you'd likely have most of your impact).

Given EA demographics, there are only a few highly skilled, wealthy or well-supported people who can afford to resist these incentive structures. Most are better off continuing on the beaten career paths until they have accumulated enough capital not to lose status when taking risky bets - no matter how well calculated.

Comment by konrad on konrad's Shortform · 2021-03-10T19:11:08.295Z · EA · GW

I am somewhat disappointed by Yuval Noah Harari's Lessons from a year of Covid (https://ft.com/content/f1b30f2c-84aa-4595-84f2-7816796d6841…). He says many great things in the article but furthers a weird misconception of political decision-making.

Two quotes to illuminate what bothers me:

One reason for the gap between scientific success and political failure is that scientists co-operated globally, whereas politicians [...] have failed to form an international alliance against the virus and to agree on a global plan.

 

a global anti-plague system already exists in [...] the World Health Organization and several other institutions. [...] We need to give this system some political clout and a lot more money, so that it won’t be entirely dependent on the whims of self-serving politicians.

I agree with his framing of this being a political failure but think he's drawing a false dichotomy that perpetuates the problem. Scientists have an easy time collaborating on COVID-related matters because they are mainly **natural** scientists. They all more or less agree on something like scientific realism. Scientists utterly fail in political arenas all the time. There is no way around politics, given this planet's diverse societies, so claiming that politicians are self-serving is a little bit of a dick move.

At their core, scientists are as self-serving as politicians. It's just that the political environment compounds minor discrepancies into giant gaping cleavages, in the face of which any single individual feels fear and the need to save their own ass first. To compare political failure to scientific success, we should not look at the individuals but at their environment - unless we have reason to believe that selection effects direct a wildly disproportionate number of sociopaths into politics. That doesn't seem very plausible.

We need to foster a culture that evaluates policy-makers and politicians based on their decision-making processes, not outcomes. Essentially, a culture closer to the ideal of science: what matters most are your methods and even negative results can be great learning. One key problem here is a problem scientists routinely avoid: public relations. When you're being judged on outcomes, massive uncertainty incentivizes you to do the most defensible thing - which usually doesn't correlate with upside potential.

The trick we need to achieve: make the most defensible thing not the least risky but the most methodologically sound. Everybody can learn to sit with uncertainty and communicate it. It doesn't matter if the broader public is demanding certainty. What matters is the immediate environment of policy-makers and politicians - their teams and institutions. Each and every one of us can impact that incentive landscape. We can vote, we can advocate, we can become policy actors ourselves.

But whatever you do, the first step is not to criticize the people who are usually really trying their best under many, many constraints. Going into science is, in some ways, the route of least resistance for anybody who values truth. If you have the capacity: play on hard mode, go into international policy-making and incrementally change the face of it so that it becomes easy mode for future generations.

[crossposted from my twitter]

Comment by konrad on Local Community Building Funnel and Activities - EA Geneva · 2021-02-22T16:59:46.212Z · EA · GW

Hi Markus, only just saw this, sorry! 

Might still be helpful: you can find somewhat more extensive answers in our annual reports.

In short:

We have quite good engagement data now since starting a Zulip chat server, which allows better tracking of activity. We have stopped running individual workshops and replaced them with a standardized intro seminar series and a personalized fellowship program.

The core group of heavily-involved individuals is still growing: >30 people now, which is more than double what we had at the time of the previous comment. With 10-20 core members a year leaving, we have to have 20-30 new core members each year to keep up the growth, which is within the bounds of my growth projections for the most engaged member segment (people who join our fellowship). 

I suspect growth will start stagnating in 0.5-2 years and that deepening engagement will remain the main focus of staff, as it increasingly has been for the past ~1.5 years. This is because the community's size is becoming more self-sustaining, due to a critical mass of people who provide organic outreach and retention.

Comment by konrad on Make a $10 donation into $35 · 2020-12-04T13:51:47.733Z · EA · GW

Just got an email saying 230k are still left, so it's worth pushing this further.

Comment by konrad on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-27T11:41:08.499Z · EA · GW

"I have a dream," said Harry's voice, "that one day sentient beings will be judged by the patterns of their minds, and not their color or their shape or the stuff they're made of, or who their parents were. Because if we can get along with crystal things someday, how silly would it be not to get along with Muggleborns, who are shaped like us, and think like us, as alike to us as peas in a pod? The crystal things wouldn't even be able to tell the difference. How impossible is it to imagine that the hatred poisoning Slytherin House would be worth taking with us to the stars? Every life is precious, everything that thinks and knows itself and doesn't want to die. Lily Potter's life was precious, and Narcissa Malfoy's life was precious, even though it's too late for them now, it was sad when they died. But there are other lives that are still alive to be fought for. Your life, and my life, and Hermione Granger's life, all the lives of Earth, and all the lives beyond, to be defended and protected, EXPECTO PATRONUM! " 

And there was light.

Harry Potter and the Methods of Rationality, Chapter 47: Personhood Theory

Comment by konrad on Introducing Probably Good: A New Career Guidance Organization · 2020-11-16T18:56:18.230Z · EA · GW

Yes, happily! konrad@eageneva.org

Comment by konrad on Introducing Probably Good: A New Career Guidance Organization · 2020-11-11T15:04:31.891Z · EA · GW

As a data point:

We have organized various "collective ABZ planning sessions" in Geneva that hinge on peer feedback given in a setting I would call a light version of CFAR's Hamming circles.

This has worked rather well so far and, with efficient pre-selection of the participants, can probably scale quite well. We tried to do so at the Student Summit and it seemed to have been useful to 100+ participants, even though we didn't get to collect detailed feedback in the short time frame.

Just providing the Schelling point for people to meet, pre-selecting participants and improving the format already seems potentially quite valuable.

Comment by konrad on Can my self-worth compare to my instrumental value? · 2020-10-17T12:50:29.482Z · EA · GW

I think we can assume that people on this forum seek truth and personal growth. Of course, this is challenging for all of us from time to time.

I think having a norm of speaking truthfully and not withholding information is important for community health. Each one of us has to assume the responsibility of knowing our own boundaries and pushing them within reasonable bounds, as few others can be expected to know us well enough. Combined with the fact that in this case people have consciously decided to *opt in* to the discussion by posting a comment, I would think it overly cautious to refrain from replying.

There surely are edge cases that are more precarious and deserve tailored thought but I think this isn't one.

If you know somebody well enough to think they are pushing their boundaries in unsustainable ways, I would reach out to them and mention exactly that thought in a personal message. Add some advice on how to engage with the community and its norms sustainably, link to posts like this showing that we all struggle with similar problems, and then people can also work through possible problems regarding "not feeling good enough".

Personally, I'd rather be forced to live in reality than be protected because people worry I might not be able to come to grips with it. One important reason for which I like the EA community is that it feels like we all have consented to hearing the truth, even if it might be uncomfortable and imply labour.

Comment by konrad on SHOW: A framework for shaping your talent for direct work · 2019-05-27T15:15:00.890Z · EA · GW

Didn't downvote but my two cents:

I am unsure about the net value of encouraging people to simply need less management and wait less for approval.

  • Some (most?) people do need guidance until they are able to run projects independently and successfully; ignoring that need doesn't make it go away.
  • The unilateralist's curse is scary. A lot of the decisions about EA network growth and strategy that the core organizations have come to are rather counter-intuitive to most of us until we get the chance to talk them through with someone who has spent significant amounts of time thinking about them.
  • Even with value-aligned actors, coordination might become close to impossible if we increase the number of nodes without accelerating the development of culture. I currently prefer preserving the option of coordination being possible over "many individuals try different things because coordination seemed too difficult a problem to overcome".

Comment by konrad on Is Suffering Convex? · 2018-11-23T18:51:01.426Z · EA · GW

I do think we might be able to collapse the dimensions, and I don't claim that intensities, or especially the extreme ends, are equal. Let me try to put it differently: depending on how we collapse the dimensions into one, we could end up with the more complex individuals having larger scales. Ergo they could weigh more in our calculus.

A being's expression of the intensity is probably always in relation to its individual scale. I guess I don't understand how that is necessarily much of an indicator of the absolute intensity of the experience. Is that where we actually diverge?

Comment by konrad on Book Summary: "Messages: The Communication Skills Book" Part I & II · 2018-11-09T20:01:44.259Z · EA · GW

I think I disagree with trying to surround yourself with people who 'suck it up' as it may make it harder to talk to everyone else if you slip into conversational norms with people who never get offended or annoyed.

I think that depends on what we mean by "surround yourself". I was thinking of my five closest friends. Or would you categorically avoid it? I think there's a threshold number of relationships below which a blunt communication style doesn't quite become your default.

[EDIT, two years later: am pretty convinced that shifting as large a social circle as possible towards more whole-hearted, blunt-yet-humane conversation norms is desirable and possible.]

Comment by konrad on Is Suffering Convex? · 2018-11-05T11:42:00.858Z · EA · GW

1) I don't think we can say much about intensity either. But let's assume that intensity is equal for fully conscious entities (whatever that means). If we then assume that there might be different dimensions to suffering, more sophisticated beings could suffer on "more (morally relevant) levels" than less sophisticated beings.

2) I also think it matters to other forms of consequentialism, through the flow-through effects of highly resilient beings being able to more effectively help those who aren't.

Comment by konrad on Is Suffering Convex? · 2018-10-22T14:32:03.158Z · EA · GW

Thanks for starting this discussion on here! I feel like part of your conclusion could also go in the opposite direction:

  • Animals could be less morally important because their suffering is less sophisticated in some morally relevant sense.
  • Increasing the lifespans of sophisticated beings alive now who have built the capacity to cope well with pain could be a great intervention (before the intelligence explosion).

Comment by konrad on Better models for EA development: a network of communities, not a global community · 2018-09-25T11:56:39.231Z · EA · GW

Thanks a lot for this, much appreciated! This gave me the chance to clear up some things for myself. It's hard to get direct feedback. ;)

There are two key points I tried to get across with this post and that I should have highlighted more clearly:

  1. Propose new language to talk more productively about network and community building; and
  2. Present and illustrate reasons for why I think this lingo is needed and closer to reality.

Regarding your points:

I) Effectiveness and receiving money: I would want to encourage people who are able to/want to invest significant amounts of time into EA work to figure out what kind of direct, non-"community building" project they could start or contribute to (without significant downside risks) before they start building a local group or the like.

Most such work will likely look similar in many places: offer career coaching to the most promising people you can find. Being able to coach people requires you to stay on top of things. 1-on-1 discussions leave plenty of room to avoid negative impact and learn quickly.

I could see community development happening in a more meaningful way through such outcome-oriented work than through "starting a local group and organizing meetups". Such concrete work helps to a) develop individuals' expertise much more directly and b) produce the outcomes that can prove alignment to the larger network with fairly tight feedback loops. Later, they can figure out their comparative advantage and, with support, tackle more risky prospects.

To have the time to do that, though, one has to have money. My recommendation here wouldn't be to simply pay more people to have this time. I could, for example, imagine the "network development organisation" offering "EA trainings" to promising individuals. If completed successfully, people would receive a first grant to build up their community through such direct work. Grants would get renewed based on performance on a few standard metrics that can be built upon over time.

Some of this is already happening, but I see much room for improvement by modeling these structures more explicitly and driving their development more openly.

II) Conclusion: I’d recommend defining labels, roles and accountabilities within the network more clearly.

We often label CEA, LEAN, and EAS as "community building orgs" - but all three actually have quite different roles. I believe it would be better if these organisations explicitly defined their respective roles. It is not clear to me that these three organisations really are working on similar things beyond the fact that the same label is used for their activities. I would claim they mostly aren't, and the few things they all do could be done more efficiently, and improved faster, by just one of them.

What is different from reality? Mostly the labels and definitions - which I hope should give a clearer sense of what everyone is doing and thereby ease the development of the network as a whole.


I aimed to contribute to a common understanding of what the network is, what communities are, how to build good ones, who has which responsibilities, how to define them better, how to make sure the network maintains high quality, and how to make people learn/understand all of that.

In the process of writing the other articles in our series on EA Geneva’s "community building", we got a lot of feedback that especially the latter point - "how to make people learn/see/understand all of that" - is currently a big issue. Many people seem upset with how they are received when they are trying to contribute or start something in good faith. Due to a lack of clarity, they end up wasting their own or EA orgs' time, and it is frustrating for everyone involved.

We could make network building and community building much more effective if we employed better terminology, had a clearer vision of what the ideal network development structure might look like, and could collectively work towards it - or at least discuss it better. I hope this contributes a little to that process.

Comment by konrad on Local Community Building Funnel and Activities - EA Geneva · 2018-08-20T10:27:13.252Z · EA · GW

Hey Josh, relevant question, thanks!

1) Our public meetups attract 15-30 people, varying with the theme. Sometimes there are a lot of newbies/random people.

2) We currently have 83 members; our growth seems likely to continue at ~20 people/quarter, but we expect only 20-50% to become regulars and only around 10-30% to become actively involved beyond attending a meetup here and there. We currently don't have data on how many people really stay around for more than a year - we have now introduced an annual member status renewal.

3) As we will run our first advanced workshop next week, this number is currently at only 15 (people actively involved whom we know already have the knowledge). We expect it to increase 2-4 fold by the end of this year and then grow more linearly.

4) We have:

  • 1 public and 1 non-public themed social each month
  • During semesters, monthly intro and advanced workshops
  • One of our student groups has weekly meetups during semesters and runs an intro seminar
  • We aim for monthly discussion dinners, this is less fixed though
  • We meet monthly with our self-improvement group and have bi-weekly open individual debugging/training/planning/getting-shit-done sessions
  • 1-on-1s are usually at around 1-3/week per FTE, but there are weeks with double that in Fall and Spring
  • Sub-communities and working groups have had similar monthly rhythms so far (hard to say because we only properly started those in May)
  • We seem to have a co-organised introductory event once a quarter

Comment by konrad on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2018-06-08T12:32:27.592Z · EA · GW

Thanks, this makes sense.

As someone who spent a year at a Tennessean high school surrounded by Baptists, I understand your experience. I just ended up with a different conclusion: no one is interested in the metaphysical questions because they have to be settled if you want to continue living your "normal" life. What looks like interest in the metaphysical questions is a mere self-preservation mechanism for the normative ethical claims and not to be taken at face value.

To me, it seems faulty to assume any believer "reasons" about the existence of God; their brains just successfully trick them into thinking that. That's why I felt it was weak as a metaphor for anti-realism vs realism. So from an outside view, your metaphor makes sense if you take believers to be "reasoning" about anything, but to me it felt like it was more distracting from the thing you meant to point at than actually pointing at it. The thing being:

I think it's going to be less useful to discuss "what are moral claims usually about." What we should do instead is what Chalmers describes (see the quote in footnote 4). Discussing what moral claims are usually about is not the same as making up one's mind about normative ethics. I think it's very useful to discuss normative ethics, and I'd even say that discussing whether anti-realism or realism is true might be slightly less important than making up one's mind about normative ethics. Sure, it informs to some extent how to reason about morality, but as has been pointed out, you can make some progress about moral questions also from a lens of agnosticism about realism vs. anti-realism.

Comment by konrad on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2018-05-24T13:15:34.972Z · EA · GW

Thanks for writing this up in a fairly accessible manner for laypeople like me. I am looking forward to the next posts. So far, I have only one reflection on the following bit of your thinking. It is a side point but it probably would help me to better model your thinking.

And all I’m thinking is, “Why are we so focused on interpreting religious claims? Isn’t the major question here whether there are things such as God, or life after death!?” The question that is of utmost relevance to our lives is whether religion’s metaphysical claims, interpreted in a straightforward and realist fashion, are true or not. An analysis of other claims can come later.

Do you think analyses of the other claims are never of more value than analyses of the metaphysical claims?

Because my initial reaction to your claim was something like "why would we focus on whether there is a god or life after death - it seems hardly possible to make substantial advances there in a meaningful way and these texts were meant to point at something a lot more trivial. They are disguised as profound and with metaphysical explanations only to make people engage with and respect them in times where no other tools were available to do so on a global level."

I.e. no matter the answer to the metaphysical questions, it could be useful to interpret religious claims because they could be pointing at something that people thought would help structure society.

Thus, I wonder whether the Bible example is a little weak. You would have to clarify that you assume people sometimes actually believe they are having a meaningful discussion around "what's Real Good?", assuming moral realism through god(?), as opposed to just engaging in intellectual masturbation, consciously or not.

If I do not take those people (who suppose moral realism is proven through the Bible) seriously, I can operate based on the assumption that the authors of such writings supposed some form of moral non-naturalism, subjectivism, intersubjectivism or objectivism, as described by you - any of which could have led to the idea of creating better mechanisms to enforce the normative Good or the social contract, or to allow everyone to maximally realise their own desires, by creating an authority ("god") that allows society to move into a better equilibrium under any of these theories.

In that case, taking the claims about the (metaphysical) nature of that authority to be of any informational value, or as providing valuable ground for discussion, seems to be a waste of time - or even to give them undeserved attention and credit, distracting from more important questions. Your described reaction, though, takes the ideas seriously, and I wonder why you think there is any ground to even consider them as such?

I think this concern is somewhat relevant to the broader discussion, too, because you seem to imply that we can't (or even shouldn't?) make any advances on non-metaphysical claims before we have figured out the metaphysical ones. Though what you mean is probably more along the lines of "be ready to change everything once we have figured out moral philosophy", not implying that we shouldn't do anything else in the meantime. Is that correct? If so, this point might get lost if it is not stated more prominently.

Comment by konrad on Announcing Rethink Priorities · 2018-05-11T15:05:56.452Z · EA · GW

Just to clarify: EA Geneva has not received any funding from CEA to date - we are waiting on the decision from the recent community grants round.

Comment by konrad on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-02T08:22:39.073Z · EA · GW

Awesome, thanks a lot for this work!

From what I understood when talking to CEA staff, this is also thought to replace handing out copies of Doing Good Better, yes? If so, I would emphasise this more explicitly, too.

Comment by konrad on General lessons on how to build EA communities. Lessons from a full-time movement builder, part 2 of 4 · 2017-10-11T06:50:30.714Z · EA · GW

Further, I think there are more case studies that could be provided by other groups (like ours). Do you have documents where others could contribute? That way we could compile a bunch of anecdata over time that might yield more valuable insights.

Comment by konrad on [deleted post] 2017-05-22T12:25:46.372Z

I find your posts a little hard to read because of the long hyperlinks that stand in stark contrast to the black text. I'd recommend linking just one word, maximum two or three (you can still edit these posts).

Will read properly later! Thanks for writing these five apparently very interesting pieces!

Comment by konrad on Why you should consider going to EA Global · 2017-05-10T09:08:55.430Z · EA · GW

Thanks for making the effort of writing it up, very much appreciated.

Corrected TAPs, interesting mix-up, thanks for pointing it out!

Comment by konrad on Save the Date for EA Global Boston and San Francisco · 2017-05-09T06:08:15.262Z · EA · GW

I wrote a piece that I would have liked to post on the forum. However, I just figured out that I can't post unless I get four more upvotes. So please upvote this - I'm not a bot. Blog post to read here on gdocs.

Comment by konrad on List of Introductory EA Presentations · 2015-11-04T14:42:35.141Z · EA · GW

We should definitely add Beth Barnes' talk; it currently seems to be the best first general intro talk.