[Linkpost] How Humanity Gave Itself an Extra Life - NYT 2021-05-01T07:34:27.397Z
Launching An Introductory Online Textbook on Utilitarianism 2020-03-09T17:13:04.555Z
Application Process for the 2019 Charity Entrepreneurship Incubation Program 2019-09-17T07:32:36.941Z
Framing Effective Altruism as Overcoming Indifference 2019-05-25T12:10:19.474Z
Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift 2018-05-08T09:50:14.302Z


Comment by Darius_Meissner on [deleted post] 2021-04-14T11:18:41.824Z

Suggestion to change this tag's URL from "/moral-circle-expansion-1/" to "/moral-circle-expansion/".

Comment by Darius_Meissner on Effective Altruism and Utilitarianism · 2021-04-11T12:39:54.296Z · EA · GW

Here are several more recent resources addressing the differences between effective altruism and utilitarianism/consequentialism:

Comment by Darius_Meissner on Act utilitarianism: criterion of rightness vs. decision procedure · 2021-04-11T12:29:34.531Z · EA · GW

To learn more about the difference between criteria of rightness and decision procedures, and how this difference entails a distinction between "single-level utilitarianism" and "multi-level utilitarianism", please see the section Multi-level Utilitarianism Versus Single-level Utilitarianism in Chapter 3: Elements and Types of Utilitarianism.

Comment by Darius_Meissner on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T11:18:36.594Z · EA · GW

Another way to approach this is to ensure that people who are already interested in learning about utilitarianism are able to find high-quality resources that explicitly cover topics like the idea of the expanding moral circle, sentiocentrism/pathocentrism, and the implications for considering the welfare of geographically distant people, other species, and future generations. 

Improving educational opportunities of this kind was one motivation for writing this section on Chapter 3: Utilitarianism and Practical Ethics: The Expanding Moral Circle.

Comment by Darius_Meissner on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-11T12:38:15.133Z · EA · GW

Another indicator: Wikipedia pageviews show fairly stable interest in articles on EA and related topics over the last five years.

Comment by Darius_Meissner on [deleted post] 2021-03-06T16:53:33.246Z

Hi Pablo, I have only just seen your comments. Yes, of course, I am more than happy with all the changes you have made and trust your sense for how this Wiki should be designed/structured! Thank you and keep up the good work.

Comment by Darius_Meissner on How many hits do the hits of different EA sites get each year? · 2021-03-04T22:25:03.065Z · EA · GW

Wikipedia pageviews could serve as a useful indicator that I expect is strongly correlated with website views.

E.g. see the following comparison of the pageviews of several EA-related Wikipedia pages in 2020. As it turns out, Peter Singer gets about 2x the number of views of Nick Bostrom, 2.5x that of effective altruism, and 12x that of FHI or GiveWell.
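For anyone wanting to reproduce this kind of comparison, the Wikimedia REST API exposes per-article pageview counts. Below is a minimal Python sketch: the endpoint format is the real Wikimedia Pageviews API, but the yearly totals are placeholder numbers for illustration, not the actual 2020 figures cited above.

```python
from urllib.parse import quote

def pageviews_url(article, start="20200101", end="20201231"):
    """Build a Wikimedia Pageviews API URL for daily en-wiki views of one article."""
    # Wikipedia article titles use underscores instead of spaces.
    title = quote(article.replace(" ", "_"), safe="")
    return (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"en.wikipedia/all-access/user/{title}/daily/{start}/{end}"
    )

def view_ratios(totals, baseline):
    """Express each article's total views as a multiple of a baseline article's."""
    base = totals[baseline]
    return {name: round(views / base, 1) for name, views in totals.items()}

# Placeholder totals; in practice you would fetch each URL and sum the
# "views" fields of the returned JSON items.
totals = {"Peter Singer": 480_000, "Nick Bostrom": 240_000, "Effective altruism": 192_000}
print(pageviews_url("Effective altruism"))
print(view_ratios(totals, "Nick Bostrom"))
```

The same URL scheme works for any article and date range, which makes it easy to extend the comparison to more pages or years.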

Comment by Darius_Meissner on Notes on "Bioterror and Biowarfare" (2006) · 2021-03-01T10:07:17.058Z · EA · GW

A somewhat related thought I had while reading this post: Several of the nuclear-weapon states (including the US, as far as I remember) retain the right to retaliate with nuclear weapons against an attack with biological, chemical, and even cyber weapons. On the one hand, this might make the overall situation more stable, because hostile actors (at least states, probably not so much terrorist groups) are deterred from using these other weapon types. On the other hand, it may be destabilising, since many more actors (including non-state ones) may trigger a nuclear conflict.

Comment by Darius_Meissner on Books / book reviews on nuclear risk, WMDs, great power war? · 2020-12-15T14:28:49.332Z · EA · GW

On the topic of nuclear warfare, I have also read and can recommend The Bomb: Presidents, Generals, and the Secret History of Nuclear War by Fred Kaplan. The book provides a deep dive into the development of the US nuclear doctrine over time, covering all administrations across 70 years and outlining in great detail many issues and arguments around nuclear policy.

If you're also interested in books on biological weapons, I particularly recommend (HT Chris Bakerlee):

1. Bioterror and Biowarfare: A Beginner's Guide by Malcolm Dando

2. Deadliest Enemy: Our War Against Killer Germs by Michael T. Osterholm and Mark Olshaker

On the rise of China (relevant to Great Power Competition), I have found it interesting to read Superpower Interrupted: The Chinese History of the World by Michael Schuman. However, I am not too excited to recommend it, because the great majority of the book covers developments in ancient China for which the level of "insights per page" was fairly low for me.

All of the above books are also available as audio books on Audible.

Comment by Darius_Meissner on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T10:46:27.933Z · EA · GW

What are your thoughts on the desirability and feasibility of differential technological development (DTD) as a governance strategy for emerging technologies? 

For instance, Toby Ord briefly touches on DTD in The Precipice, writing that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones."

Comment by Darius_Meissner on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T10:36:16.021Z · EA · GW

What are your long-term goals for The Roots of Progress? Are you pleased with how far you have come so far (e.g. quantity and quality of content produced, page-view or subscriber numbers)?

Comment by Darius_Meissner on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T10:33:22.043Z · EA · GW

How do you prioritise between the various projects you are working on? What other projects, if any, do you consider working on to advance progress studies in future?

Comment by Darius_Meissner on Make a $10 donation into $35 · 2020-12-01T21:16:13.028Z · EA · GW

I just did this and can attest to it working and being as easy as described in the post. Thanks a lot for the recommendation! 

Comment by Darius_Meissner on some concerns with classical utilitarianism · 2020-11-14T17:57:17.737Z · EA · GW

Hi Nil, thanks for the link. Unfortunately, the website is temporarily unavailable under the .net domain due to a technical problem. You can, however, still access the full website via this link:

Comment by Darius_Meissner on Some thoughts on EA outreach to high schoolers · 2020-09-15T15:19:06.950Z · EA · GW

Brief meta comment: I would generally recommend being very cautious about (and mostly avoiding) language like "converting" others to EA, as in your sentence "Younger people might be easier to convert (...)". This type of language seems fairly easy to avoid, while using it may make many people feel uncomfortable and could even pose reputational risks for the community.

Comment by Darius_Meissner on Launching An Introductory Online Textbook on Utilitarianism · 2020-03-11T08:30:42.114Z · EA · GW

Thank you for your comment!

There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. (...) I'd prefer you to disambiguate between versions of utilitarianism which aggregate over humans, and those who aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)?

My impression is that the major utilitarian academics were rather united in extending equal moral consideration to non-human animals (in line with technicalities' comment). I'm not aware of any influential attempts to promote a version of utilitarianism that explicitly does not include the wellbeing of non-human animals (though, for example, a preference utilitarian may give different weight to some non-human animals than a hedonistic utilitarian would). In the future, I hope we'll be able to add more content to the website on the link between utilitarianism and anti-speciesism, with the intention of bridging the inferential distance to which you rightly point.

Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you're linking it to effective altruism websites, or use effective altruism examples?

In the section on effective altruism on the website, we already explicitly disambiguate between EA and utilitarianism. I don't currently see the need to e.g. add a disclaimer when we link to GiveWell's website, but we do include disclaimers when we link to one of the organisations co-founded by Will (e.g. "Note that Professor William MacAskill, coauthor of this website, is a cofounder of 80,000 Hours.")

Also, what's up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?

We hope to produce a longer article on how the Veil of Ignorance argument relates to utilitarianism at some point. We currently include a footnote on the website, saying that "This [Veil of Ignorance] argument was originally proposed by Harsanyi, though nowadays it is more often associated with John Rawls, who arrived at a different conclusion." For what it's worth, Harsanyi's version of the argument seems more plausible than Rawls' version. Will commented on this matter in his first appearance on the 80,000 Hours Podcast, saying that "I do think he [Rawls] was mistaken. I think that Rawls’s Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it’s a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit."

The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don't, what's the point?).

Historically, one of the major criticisms of utilitarianism was that it supposedly required us to calculate the expected consequences of our actions all the time, which would indeed be impractical. However, this is not true, since it conflates using utilitarianism as a decision procedure and as a criterion of rightness. The section on multi-level utilitarianism aims to clarify this point. Of course, multi-level utilitarianism does still permit attempting to calculate the expected consequences of one's actions in certain situations, but it makes it clear that doing so all the time is not necessary.

For more information on this topic, I recommend Amanda Askell's EA Forum post "Act utilitarianism: criterion of rightness vs. decision procedure".

Comment by Darius_Meissner on We should choose between moral theories based on the scale of the problem · 2019-11-05T12:37:31.775Z · EA · GW

I like the general thrust of your argument and would like to point out that within moral philosophy there is already an (in my view) satisfactory way to incorporate judgements associated with deontology and virtue ethics within a utilitarian framework—by going from “single-level utilitarianism” to “multi-level utilitarianism”:

I'm currently writing a text on this topic and will copy an excerpt here:

"Utilitarians believe that their moral theory is the appropriate standard of moral rightness, in that it specifies what makes an act (or rule, policy, etc) right or wrong. However, as Henry Sidgwick noted, “it is not necessary that the end which gives the criterion of rightness should always be the end at which we consciously aim”.

Most, if not all, utilitarians discourage the use of utilitarianism as a decision procedure to guide all their everyday actions. Using utilitarianism as a decision procedure means always calculating the expected consequences of our day-to-day actions in an attempt to deliberately try to promote overall wellbeing. For example, we might pick what breakfast cereal to buy at the grocery store by trying to determine which one best contributes to overall wellbeing. To try and do so would be to follow single-level utilitarianism, which treats the utilitarian theory as both a standard of moral rightness and a decision procedure. But using such a decision procedure for all our decisions is a bad and fruitless idea, which explains why almost no one ever defended it. Jeremy Bentham rejected it, writing that “it is not to be expected that this process [of calculating expected consequences] should be strictly pursued previously to every moral judgment.” Deliberately calculating the expected consequences of our actions is error-prone and takes a lot of time. Thus, we have reason to think that following single-level utilitarianism would itself not lead to the best consequences, which is why the theory is often criticized as “self-defeating”.

For these reasons, many advocates of utilitarianism have instead argued for multi-level utilitarianism, which is defined as follows:

Multi-level utilitarianism is the view that, in most situations, individuals should follow tried-and-tested heuristics rather than trying to calculate which action will produce the most wellbeing.

Multi-level utilitarianism implies that we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill etc.—knowing that this will lead to the best outcomes overall. To this end, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws will save time and usually lead to good outcomes, in part because they are based on society’s experience of what promotes individual wellbeing. The fact that honesty, integrity, keeping promises and sticking to the law have generally good consequences explains why in practice utilitarians value such things highly and use them to guide their everyday actions."

Comment by Darius_Meissner on Keeping everyone motivated: a case for effective careers outside of the highest impact EA organizations · 2019-08-22T17:25:24.237Z · EA · GW

Thanks for writing this up! I really appreciated how you describe the problem of the competitive hiring landscape within the EA community, and especially that you connected this to a potentially increased risk of value drift for community members who grow frustrated after not being hired by their preferred employers within the community. I agree that this presents a major challenge for the EA community as a whole and would like to see more proposed solutions.

Having said all that, I also have two quibbles with your proposed solutions:

First, the EAs in academia who are in the best positions to be able to 'steer their fields' in the future are probably the ones who need this type of advice the least, because they would seem to be in the best position to be hired within the EA community. Of course, if they are in such a special position within their academic field, it might be more impactful for them to stay in academia (depending on their field) regardless of whether they could get a job at an EA org.

Second, I have found it difficult to understand from your two points about local EA groups what you wish they would change about their strategy. You advise them to work on "creating a nice and welcoming environment, where members want to come back to in regular intervals for years". However, this seems like standard local group advice that most (all?) local groups aspire to implement anyway. (Note that this advice does not really apply to EA university groups, which by their very nature mostly attract students on a fairly short-term basis (~1-3 years).)

I would be interested in your specific recommendations for how local groups could achieve this goal of long-term member engagement. Thanks!

Comment by Darius_Meissner on Ask Me Anything! · 2019-08-16T10:56:06.789Z · EA · GW

I'm surprised by how much low-hanging fruit is still left in editing Wikipedia to make more people aware of (and provide them with a more sophisticated understanding of) important ideas that are relevant to EA. I've been adding and improving Wikipedia content on the side for two years now, with a clear focus on articles that are related to altruism.

In my experience, editing Wikipedia is i) easy, ii) fun, iii) full of content gaps left to fill, and iv) a way to expose the content you write to a much larger audience (sometimes several orders of magnitude larger) than if you wrote for a private blog or the EA Forum instead. Against this background, I'm surprised that not more knowledgeable EAs contribute to Wikipedia (feel free to reach out to me if you would potentially like to do just that).

A word of caution: the quality control on Wikipedia is fairly strong and it is generally disliked if people make edits that come across as ideologically-motivated marketing rather than as useful information. For this reason, I aspire to genuinely improve the quality of the article with all the edits I make, though my choice of articles to edit is informed by my altruistic values.

A useful resource on this topic is Brian Tomasik's "The Value of Wikipedia Contributions in Social Sciences".

[I'm collaborating with Will on creating the content for the utilitarianism website, but this comment is written in my private capacity]

Comment by Darius_Meissner on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-08-14T08:26:56.284Z · EA · GW

Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:

"Choose effective altruist endeavors that also grant you selfish benefits.  There are a number of standard human motivators.  Status, friends, mates, money, fame.  When these things are on the line work actually gets done.  Without these things it’s a lot harder.  If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you.  If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it.  If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want.  We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want.  Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized.  If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose."

Comment by Darius_Meissner on Is value drift net-positive, net-negative, or neither? · 2019-05-05T09:58:54.470Z · EA · GW

How bad (or possibly good) value drift and lifestyle drift are will depend on your definition of the phenomenon, as you acknowledge yourself. The way I conceptualise them in the EA Forum article I wrote on the topic ('Concrete Ways to Reduce Risks of Value Drift') makes them strongly net-negative. In the post I (briefly) make the case that reducing risks of value drift and lifestyle drift may be an altruistic top priority.

Here's how I think about the topic:

I use the terms value drift and lifestyle drift in a broad sense to mean internal or external changes that would lead you to lose most of the expected altruistic value of your life. Value drift is internal; it describes changes to your value system or motivation. Lifestyle drift is external; the term captures changes in your life circumstances leading to difficulties implementing your values. Internally, value drift could occur by ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or losing the motivation to work on the highest-priority cause areas or interventions (losing the ‘E’ in EA). Externally, lifestyle drift could occur (as described in Joey's post) by giving up a substantial fraction of your effectively altruistic resources for non-effectively altruistic purposes, thus reducing your capacity to do good. Concretely, this could involve deciding to spend a lot of money on buying a (larger) house, having a (fancier) wedding, traveling around the world (more frequently or expensively), etc. Quoting myself:

Of course, changing your cause area or intervention to something that is equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to decide to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top-priorities in life it might not constitute value drift. (...)
Most of the potential value of EAs lies in the mid- to long-term, when more and more people in the community take up highly effective career paths and build their professional expertise to reach their ‘peak productivity’ (likely in their 40s). If value drift is common, then many of the people currently active in the community will cease to be interested in doing the most good long before they reach this point. This is why, speaking for myself, losing my altruistic motivation in the future would equal a small moral tragedy to my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.
Comment by Darius_Meissner on How to Get the Maximum Value Out of Effective Altruism Conferences · 2019-04-24T23:15:33.292Z · EA · GW

I'd guess it is common for people to underweight the expected value (EV) of attending EA Globals, because they focus on the predictable and easy-to-measure benefits of doing so. However, the EV of attending these conferences (according to my intuitive model) is dominated by 'Black Swan'-like benefits (i.e. low-probability, hard-to-predict, disproportionately-high-impact benefits). For this reason, it may be the case that even if (suppose) most EA Global attendees got little value out of the conference, there will likely be a few individuals reaping very large benefits that justify the whole event for everyone else.

These underappreciated benefits of attending EA Globals likely include: 1) starting a causal chain that will (eventually) result in a job or internship, 2) finding co-founders for highly valuable projects, 3) making new connections (or deepening existing ones) that will (eventually) provide you with substantial support (e.g. financial, advisory, emotional) or vice versa, 4) changing your mind about an empirical or philosophical crucial consideration that radically alters your priorities (e.g. by changing which cause area to focus on, or which interventions to prioritise).

To account for these potential Black Swan-like benefits when thinking about the opportunity cost of attending events such as EA Global, I deliberately attempt to follow the heuristic of asking myself: "Is this event more likely to give rise to Black Swan-like benefits compared to the best alternative use of my time?". I prioritise events that have 'Black Swan'-generating circumstances (e.g. meeting new people and organisations working on important topics, having opportunities to reflect on major life choices and philosophical beliefs, meeting smart and well-informed people who have major disagreements with my views).

Comment by Darius_Meissner on Why are you here? An origin stories thread. · 2018-08-10T13:03:12.678Z · EA · GW

Given how incredibly positive I see the influence that EA has had on my own life, this post is a fantastic opportunity for me to say ‘thank you’. Thanks to all of you for your contributions to building such an awesome community around (the) ‘one thing you’ll never regret’ – altruism (I got this quote from Ben Todd). I have never before met a group of people this smart, caring and dedicated to improving the world, and I am deeply, deeply grateful that I can be a part of this.

I remember that elementary school was the first time I was confronted with other students believing in what they referred to as ‘GOD’. Having grown up in a secular family myself, I was at first confused by their belief, and then started debating them. This went on to the point when one day I screamed insults at the sky to prove that there was no one up there listening and no lightning would strike to pulverize me. My identity started to grow, and after reading the Wikipedia article on atheism in early middle school, ‘agnostic-atheist’ was the first of a number of ‘-isms’ that I added to my identity over the years (though, as I will describe, some of these ‘-isms’ were only temporary). Unsurprisingly, when I encountered the writings and speeches of Richard Dawkins in my teens, I quickly became a staunch fan (let it be pointed out that I am more critical nowadays about his communication style and some of his content).

I can attribute my early political socialization to attending summer camps and weekend seminars of a socialist youth organisation in Germany in middle school. There, for the first time, I met people who really cared about improving the world, and I learned about social problems such as racism, sexism, homophobia, and – the mother of all problems, from the socialist perspective – capitalism. Furthering this process of ideological adaptation, I learned that the supposed solutions for these and other social problems were creating a socialist, communist or possibly anarchist world-order – if need be, by means of violent revolution. In hindsight, it’s interesting for me to look back and see that this belief in a violent revolution required an element of consequentialist thinking (along with very twisted empirical beliefs largely grounded in Marxism): to create a better society for the rest of all time, we might need to make sacrifices today and fight. I always had a great time with the other young socialists, made friends, had my first kiss, went to various left-wing protests and sat around camp fires where we sang old socialist workers’ songs. (A note on the songs: I remember how powerful and determined they would make me feel in my identity as a social-ist, connected to a cause that was larger than myself and celebrating those ‘partisans’ who were killed fighting (violently) in socialist revolutions. Hopefully, this was a lasting lesson with regards to methods of ideological indoctrination). The most long-lasting and positive effect this part of my life had on my personality was in igniting a strong dedication to improving the world – I had found my ultimate and main goal in life (provided, and hoping, that it won’t change again).

During my last lesson in ethics class in middle school, we (around 30 omnivore students) debated the ethics of eating animals. The (to me at the time) surprising conclusion we reached was that, in the absence of an existential necessity for humans to eat meat to survive, it was ethically wrong to raise, harm and slaughter animals. On this day, I decided to try vegetarianism. I began to look into the issue of animal farming, animal ethics, vegetarianism and veganism, and I was shocked by the tremendous suffering endured by billions of non-human animals around the world, suffering that I had contributed to my whole life. Greedy for knowledge, I read as much as I could about these topics. It still took me a year to decide to be vegan henceforth. I read Peter Singer’s ‘Animal Liberation’ only after I went vegan, but it certainly increased my motivational drive to dedicate my life to reducing the suffering of non-human animals – what I then perceived as the most pressing ethical problem in the world (+ the book was my first real exposure to utilitarian thought). Throughout my high school years, I would write articles about veganism for our school’s student magazine, organise public screenings of the animal-rights movie ‘Earthlings’, distribute brochures of animal rights organisations, debate other students on the ethics of eating meat and supply our school’s cafeteria with plant-based milk alternatives. Later, as part of my high school graduation exams I wrote a 40-page philosophical treatise on animal ethics.

In high school I also learned about environmental degradation – caused, of course, by evil multinationals and, ultimately, capitalism – and started caring about environmental preservation (considering myself an environmental-ist). Reasoning that changing only my own consumer behaviour would have limited effects, once again I started taking actions to affect the behaviour of others. For instance, I started a shop from my room in the boarding school, reselling environmentally-friendly products, such as recycled toilet paper, to other students (I would sell the goods at the market price, without making a profit). I also decided that after my graduation from school, I would take a gap year and go to India to volunteer for a small environmental non-profit organisation. (Perhaps unsurprisingly, in hindsight I don't think that my work as a volunteer had a big impact).

And then I attended the single most transformational event of my life: an introductory talk on effective altruism, brilliantly presented by the EA Max Kocher, who at the time interned with the predecessor organisation of what would later become the Effective Altruism Foundation. I was immediately attracted by the EA perspective on reducing animal suffering (though I remember finding the ‘risks to the far-future from emerging technologies’ part of the presentation weird). Previously, I had read a lot of stuff online written by vegans and animal rights activists, but somehow I had never come across a group of people who were thinking as rationally and strategically about achieving their ethical goals as EAs. Once again, I became greedy for knowledge, and – in reading many EA articles, books, listening to podcasts and watching talks – felt like a whole new world was opening up to me. A world that I couldn’t get enough of. And in the process of engaging with EA, I encountered a great many arguments that challenged some of my dearly held beliefs – many of which I subsequently abandoned.

Some of the major ways I changed my mind through EA include:

  • I got convinced that what ultimately counts morally are the conscious experiences of sentient beings, and thus stopped caring about ‘the environment’ for its own sake. Learning about the prevalence and magnitude of the suffering of animals living in the wild, I left behind my beliefs in environmental preservation, the protection of species over individuals, and the intrinsic importance of biodiversity.

  • The most important normative change I underwent is growing closer to hedonistic utilitarianism, and totalism in population ethics. In parallel to this process, I engaged more with arguments like Bostrom’s astronomical waste argument, and ultimately accepted the long-term value hypothesis. That said, keeping epistemic modesty in mind and the wild divergence in favoured moral theories among moral philosophers, I do attempt to take moral uncertainty seriously.

  • The most important change in my empirical worldview came with learning more about the benefits and achievements of market economies and the tremendous historical failures of its so-called socialist and communist alternatives. I stopped attributing everything that was going wrong in the world to ‘capitalism’ and adopted (what I now think of as) a much more nuanced view on the costs and benefits of adopting particular economic policies.

  • Relatedly, I became much more uncertain about many political questions, having given up many of my former tribe-determined answers to them. In particular, I have reduced my confidence in policies on which relevant experts strongly disagree about the facts.

After engaging with EA intensely, though passively, for more than a year in India, I was aching upon my return to Germany to get active and finally meet other EAs in person. Subsequently, I completed two internships with EAF in Berlin and started and led an EA university chapter at the University of Bayreuth, before ultimately moving to the University of Oxford, where I am now one of the co-presidents of EA Oxford.

The philosophy and community behind effective altruism have transformed my life in a myriad of beneficial ways. I am excited about all the achievements of EA since its inception and look forward to contributing to its future success!

Comment by Darius_Meissner on Effective Thesis project review · 2018-06-02T19:16:30.000Z · EA · GW

This is a fantastic project! I encourage other EA university chapters to share the Effective Thesis website on their social media pages and in internal groups once or twice a year. When you share it on Facebook, make sure to mention the Effective Thesis Facebook page in your post.

Comment by Darius_Meissner on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-10T19:56:18.109Z · EA · GW

Great points, thanks for raising them!

It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.

It would be very encouraging if this were a common phenomenon and many people who 'drop out' eventually found their way back to EA ideals. It would provide a counterexample to something I have commented earlier:

It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

Regarding your related point:

Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a “lifetime contribution strategy”

I strongly agree with this, which was my motivation for writing the post in the first place! I don't think constant involvement in or commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like many people run a considerable risk of never 'finding their way back' to this commitment after spending years or decades in non-altruistic environments, starting a family, settling down etc. This is why I generally think people with EA values in their twenties should look for ways to at least stay loosely involved and updated over the mid to long term, to reduce the chance of this happening. So it's encouraging to hear that you actually managed to do just that! In any case, more research is needed on this – I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or stayed around EA for a long time).

Comment by Darius_Meissner on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-10T14:16:58.779Z · EA · GW

Thanks, Tom! I agree with you that, all else being equal,

solutions that destroy less option value are preferable

though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.

It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.

If this is valid (as it seems to me), then many of the important decisions of our future selves will be the result of more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for doing so). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', I place considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.

It is worth pointing out that throughout the article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause area or intervention preferences, etc.

Comment by Darius_Meissner on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-10T13:44:37.198Z · EA · GW

Thanks for your comment, Karolina!

That also stresses the importance of untapped potential of local groups outside the main EA hubs.

Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!

maybe we have sth like altruistic adaptation, that changes after a significant live event (changing the city, marriage etc.) and then comes back to baseline.

This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

More could be done about the vale drift on the structural level, e.g. it might be also explained by the main bottlenecks in the community itself, like the Mid-Tire Trap

This is a valuable and under-discussed point that I endorse!

Comment by Darius_Meissner on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-10T13:22:33.955Z · EA · GW

Thanks for your comment! I agree with everything you have said and like the framing you suggest.

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously

This is what I tried to address, though you have expressed it more clearly than I could! As some others have pointed out, it might make sense to differentiate between 'value drift' (i.e. a change in internal motivation) and 'lifestyle drift' (i.e. a change in external factors that makes implementing one's values more difficult). I acknowledge that, as Denise's comment points out, the term 'value drift' is not ideal in the way Joey and I used it, and that:

As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses. (Denise_Melchin comment).

However, it seems reasonable to me to be concerned about, and to attempt to avoid, both value drift and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values, and vice versa).

Comment by Darius_Meissner on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T09:41:42.157Z · EA · GW

Now that a new version of the handbook is out, could you update the 'More on Effective Altruism' link? It is quite prominent in the 'Getting Started' navigation panel on the right-hand side of the EA Forum.

Comment by Darius_Meissner on [deleted post] 2018-05-03T09:38:45.030Z

In light of the recently published 2nd edition of the EA Handbook, could this page be updated as well? The 'more on effective altruism' link in the navigation menu is quite prominent and it would be great to lead visitors to the most up-to-date content.