Posts

Population Ethics Without Axiology: A Framework 2022-08-02T15:59:22.519Z
Moral Anti-Realism: Introduction & Summary 2022-04-02T14:32:42.846Z
The “Moral Uncertainty” Rabbit Hole, Fully Excavated 2022-04-01T13:14:00.137Z
The Life-Goals Framework: How I Reason About Morality as an Anti-Realist 2022-02-03T11:40:51.142Z
Dismantling Hedonism-inspired Moral Realism 2022-01-27T17:06:55.008Z
Moral Uncertainty and Moral Realism Are in Tension 2022-01-25T11:07:59.174Z
Lukas_Gloor's Shortform 2020-07-27T14:35:50.329Z
Metaethical Fanaticism (Dialogue) 2020-06-17T12:33:05.392Z
Why the Irreducible Normativity Wager (Mostly) Fails 2020-06-14T13:33:41.638Z
Against Irreducible Normativity 2020-06-09T14:38:49.163Z
Why Realists and Anti-Realists Disagree 2020-06-05T07:51:59.975Z
What Is Moral Realism? 2018-05-22T15:49:52.516Z
Cause prioritization for downside-focused value systems 2018-01-31T14:47:11.961Z
Multiverse-wide cooperation in a nutshell 2017-11-02T10:17:14.386Z
Room for Other Things: How to adjust if EA seems overwhelming 2015-03-26T14:10:52.928Z

Comments

Comment by Lukas_Gloor on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T19:19:27.955Z · EA · GW

I agree with your edit more than with the rest of your comment.

It would be uncharitable to interpret "takes" to be about people's specific views. Instead, it's about things like the following: 

Do I learn something from talking to this person? When I dig deeper into the reasons why they believe what they believe, do I find myself surprised by good arguments or depth of thought, or something like that? Or does it just seem like they're parroting something or are ideologically clouded and can't seem to reason well? Do they seem interested in truth-seeking, intellectually honest, etc.? Do they seem to have "good judgment," or do they make arguments where it feels like the conclusions don't even follow from their premises and they're just generally off about the way things work? [There are tons of other factors that go into this; I'm just gesturing at some of the things.]

Regarding competence, there's no single axis but that doesn't mean the concept isn't meaningful. Lots of concepts work like that – they're fuzzy but still meaningful.

To be fair, some things might be less about competence and more about not having equally "high standards." For instance, I notice that sometimes people new to EA make posts on some specific topic that are less thorough than some post from 5-10 years ago that long-term EAs would consider "canonical." And these new posts don't add new considerations or even miss important considerations discussed in the older post. In that case, the new person may still be "competent" in terms of intelligence or even reasoning ability, but they would lack a kind of obsessiveness and high standards about what they're doing (otherwise they'd probably have done more reading about the topic they were going to make a top-level post about – instead of asking questions, which is always an option!). So, it could also be a cultural thing that's more about lack of obsessiveness ("not bothering to read most of what seems relevant") or high standards, rather than (just) about "competence."

(And, for what it's worth, I think it's totally forgivable to occasionally make posts without being aware of everything that's previously been written. It would be absurd to expect newcomers to read everything. It just gets weird if most of someone's posts are "worse than redundant" in that way, if they make lots of such posts, and if they're all confidently phrased so you get the impression that the person writing the post is convinced they'll be changing the minds of lots of EAs.)

Comment by Lukas_Gloor on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T14:06:10.010Z · EA · GW

"Quality of person" sounds bad to me too. I also find it weird that someone already gave the same feedback on the shortform and the OP didn't change it.

The other wordings seem fine to me. I understand that not everyone would want to phrase things that way, but we need some kind of language to express differences in quality of people's contributions. Less direct wordings wouldn't be, in my opinion, obviously better. Maybe they come across as kinder, but the sort of rephrasings I'm now envisioning can easily seem a bit fake/artificial in the sense that it's clear to anyone what's being communicated. If someone thought my "takes" were bad, I'd rather they tell me that in clear language instead of saying something that sounds stilted and has me infer that they also don't expect me to be capable of hearing criticism. 

(I might feel differently in a context where I care a lot about what others think of me as a person, like if I was among friends or roommates. By contrast, most people on the EA forums are "loose acquaintances" in a context that's more about "figuring things out" or "getting things done" than it is about being together in a community. In that context, friendliness and respect still remain important, but it isn't per se unfriendly [and sometimes it's even a positive mark of respect] to say things one considers to be true and important.)

Especially point 3 - you want the distribution of styles and opinions (what you think is "quality of thought") to be as close as possible to that of people already employed by EA organisations - which would mean blocking diversification as much as possible.

Based on the OP's own beliefs, they don't primarily "want the distribution of styles and opinions to be as close as possible to that of people already employed by EA organisations." The OP's view is "competence differences exist and paying attention to them is important for making the world better." (I think this view is obviously correct.) Therefore, the driver in their hypothesis about people working at EA orgs was obviously an assumption like "EA orgs that try to hire the best candidates succeed more often than average." Somehow, you make it sound like the OP has some kind of partiality bias or anti-diversity stance when all they did was voice a hypothesis that makes sense on the view "competence differences exist and paying attention to them is important for making the world better." I think that's super unfair.

Comment by Lukas_Gloor on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T16:20:22.884Z · EA · GW

There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful in so far as they let community members to better work in the real world. 

Hm, I understand why you say that, and you might be right (e.g., I see some signs of the OP that are compatible with this interpretation). Still, I want to point out that there's a risk of being a bit uncharitable. It seems worth saying that anyone who cares a lot about having a lot of impact should naturally try hard to get accepted to EAG (assuming that they see concrete ways to benefit from it). Therefore, the fact that someone seems to be trying hard can also be evidence that EA is very important to them. Especially when you're working on a cause area that is under-represented among EAG-attending EAs, like animal welfare, it might matter more (based on your personal moral and empirical views) to get invited.[1]
 

  1. ^

    Compare the following two scenarios. If you're the 130th applicant focused on trying out AI safety research and the conference committee decides that they think the AI safety conversations at the conference will be more productive without you in expectation because they think other candidates are better suited, you might react to this news in a saint-like way. You might think: "Okay, at least this means others get to work on AI safety effectively, which fits my understanding of doing the most good." By contrast, imagine you get rejected as an advocate for animal welfare. In that situation, you might legitimately worry that your cause area – which you naturally could think is especially important, at least according to your moral and empirical views – ends up neglected. Accordingly, the saint-like reaction of "at least the conference will be impactful without me" doesn't feel as appropriate (it might be more impactful based on other people's moral and empirical views, but not necessarily yours). (That doesn't mean that people from under-represented cause areas should be included just for the sake of better representation, nor that everyone with an empirical view that differs from what's common in EA is entitled to have their perspective validated. I'm just pointing out that we can't fault people from under-represented cause areas for thinking that it's altruistically important for them to get invited – that's what's rational when you worry that the conference wouldn't represent your cause area all that well otherwise. [Even so, I also think it's important for everyone to be understanding of others' perspectives on this. E.g., if lots of people don't share your views, you simply can't be too entitled about getting representation, because a norm that gave all rare views a lot of representation would lead to a chaotic, scattered, and low-quality conference. Besides, if your views or cause area are too uncommon, you may not benefit from the conference as much, anyway.])

Comment by Lukas_Gloor on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T15:27:22.367Z · EA · GW

Hi Amy, I think it's hard to justify a policy of never discussing someone's application publicly even when they agree to it and it's in the public interest. This is completely different from protecting people's privacy.

If you read Amy's reply carefully, it sounds like she told Constance some of the reasons for rejection in private and then Constance didn't summarize those reasons (accurately, or at all?) in her post. If so, it's understandable why Amy isn't sure whether Constance would be okay having them shared (because if she was okay, she'd have already shared them?). See this part of Amy's reply:

I did explain to Constance why she was initially rejected as one of the things we discussed on an hour-long call. 
[...]
I don’t think this post reflects what I told Constance, perhaps because she disagrees with us. So, I want to stick to the policy for now.

FWIW, based on everything Constance writes, I think she seems like a good fit for EAG to me and, more importantly, can be extremely proud of her altruism and accomplishments (and doesn't need validation from other EAs for that).

I'm just saying that on the particular topic of sharing reasons for the initial rejections, it seems like Amy gave an argument that's more specific than "we never discuss reasons, ever, not even when the person herself is okay with public discussion." And you seem to have missed that in your reply or assumed an uncharitable interpretation. 

Comment by Lukas_Gloor on The $100,000 Truman Prize: Rewarding Anonymous EA Work · 2022-09-23T12:07:18.771Z · EA · GW

Okay, I think you have a good point. The post "PR" is corrosive, "reputation" is not – which I really like and agree with – argues that "reputation" is the thing that actually matters. A good way to describe reputation is indeed "how you come across to people who interact with you in good faith." Based on this definition, I agree with your point!

That said, I interpreted the OP charitably in that I assumed they're talking about what Anna Salamon (author of the linked post) would call "PR risks." Anna's recommendation there is to basically not care about PR risk at all. By contrast, I think it's sometimes okay (but kind of a necessary evil) to care about PR risks. For instance, you have more to lose if you're running for a seat in politics than if you're a niche organization that doesn't do a ton of public-facing communications anyway. (But it's annoying, and I would often recommend that orgs don't worry about PR risks much and instead focus on the things that uphold their reputation, more narrowly construed, i.e., "among people whose opinions are worth caring about.")

Anyway, I reversed my downvote of your comment because I like a definition of "reputational risk" where it's basically generally bad not to care about it. I didn't change it into an upvote because you seem to disagree with the secrecy/censorship elements of the post in general (you gave "reputational risks" as an example, but worded your post in a way that implies you also have qualms with a bunch of other aspects – so far, I don't share this aversion; I think secrecy/censorship are sometimes appropriate).

Comment by Lukas_Gloor on The $100,000 Truman Prize: Rewarding Anonymous EA Work · 2022-09-23T10:59:58.708Z · EA · GW

and most importantly, people who interact with the organisation in good faith would think is bad

Those are your words, not the words in the OP. 

If I were on the evaluation committee, it would be one of my evaluation criteria that people interacting with the organization in good faith would think it was a good deed / good involvement on the part of the prize contender (and it would be strange to do it differently, so I don't expect the evaluation committee to think differently).

Comment by Lukas_Gloor on Criticism of the 80k job board listing strategy · 2022-09-21T19:22:12.066Z · EA · GW

Thanks, those are good examples and I think you're changing my mind a bit! If the board just lists all kinds of jobs at a particular org and that org also hires developers (or some other role that requires comparatively little involvement with organizational strategy, perhaps operations in some cases – though note that operations people often take on various responsibilities that shape the direction of an organization), that could be quite misleading. This would be a problem even if we don't expect 80k to directly recommend that developers take developer jobs at an org that they don't think has positive impact.

"does this AI company do more safety or more capabilities?"

That's yet another challenge, yeah. Especially because there may not even always be a consensus among thoughtful EAs on how much safety work (and what sort of org structure) is enough. 

Comment by Lukas_Gloor on Criticism of the 80k job board listing strategy · 2022-09-21T15:18:50.888Z · EA · GW

I'd worry that this leads to a false sense of security. Just like jobs that people take purely for career capital require some active thinking on the part of the person about when it's enough and when to pivot, one could make a case that most highly impactful jobs wouldn't be exceptionally impactful without "active thinking" of a similar kind.

For instance, any sort of policy work has more or less impact depending on what specific policies you advocate for, not just how well you do it.

Unfortunately, I think it's somewhat rare that for-profit organizations (especially outside of EA) or governments have streamlined missions and the type of culture that encourages "having impact" as a natural part of one's job description. Hospitals are the main counter-example I can think of, since your job description as a doctor or nurse or almost any other member of hospital staff is literally about saving lives and may include instructions for working under triage conditions. By contrast, the way I envision work in policy (you obviously know more about this than I do) or things like biosecurity research, I'd imagine it depends a lot on the specific program / group and that people can make a big difference if they have personal initiative – which are things that require paying close attention to one's path to impact (on top of excelling at one's immediate job description).

What IMO could be quite useful is if 80k would say how much of a given job's impact comes from "following the job description and doing well in a conventional sense" vs. "introducing particular ideas or policies to this organization based on EA principles." 
 

Comment by Lukas_Gloor on Criticism of the 80k job board listing strategy · 2022-09-21T15:02:11.134Z · EA · GW

If somebody can't evaluate jobs on the job board for themselves, I'm not that confident that they'll take a good path regardless.


That was also my instinctive reaction to this post. At least in the sense of "if someone can't distinguish what's mostly for career capital vs. where a specific role ends up saving lives or improving the world, that's a bit strange."

That said, I agree with the post that the communication around the job board can probably be improved!

Comment by Lukas_Gloor on Civilization Recovery Kits · 2022-09-21T14:42:28.948Z · EA · GW

Do you think this disqualifies the project?


Probably not, especially not in the sense that anyone wanting to implement a low-effort version of this project should feel discouraged. ("Low-effort versions" of this would mostly help make life in post-apocalyptic scenarios less scary and easier to survive, which seems obviously valuable. Beyond that, insofar as you manage to preserve information, that seems likely positive despite the caveats I mentioned!)

Still, before people start high-effort versions of the idea that go more in the direction of "civilization re-starter kits" (like vast storages of items to build self-functioning communities) or super bunkers, I'd personally like to see a more in-depth evaluation of the concerns. 

For what it's worth, improving the quality of a newly rebuilt civilization seems more important than making sure rebuilding happens at all, even according to the total view on population ethics (that's my guess at least – though it depends on how totalists would view futures controlled by non-aligned AI), so investigating whether there are ways to focus especially on the wisdom and coordination abilities of a new civilization seems important also from that perspective.

Comment by Lukas_Gloor on Civilization Recovery Kits · 2022-09-21T11:34:28.135Z · EA · GW

It's worth noting that ensuring recovery after a near-extinction event is less robust under moral uncertainty and less cooperative given disagreements on population-ethical views than just "prevent our still-functioning civilization from going extinct." In particular, the latter scenario (preventing extinction for a still-functioning civilization) is really good not just on a totalist view of aggregative consequentialism, but also for all existing people who don't want to die, don't want their relatives or friends or loved ones to die, and want civilization to go on for their personal contributions to continue to matter. All of that gets disrupted in a near-extinction collapse. 

(There's also an effect from considerations of "Which worlds get saved?" where, in a post-collapse scenario, you've updated that humans just aren't very good at getting their shit together. All else equal, you should be less optimistic about our ability to pull off good things in the long-run future compared to in a world where we didn't bring about a self-imposed civilizational collapse / near-extinction event.)

Therefore, one thing that makes the type of intervention you're proposing more robust would be to also focus on improving the quality of the future conditional on successful rebuilding. That is, if you have information or resources that would help a second stage civilization to do better than it otherwise would (at preventing particularly bad future outcomes), that would make the intervention more robustly positive. 

There's an argument to be made that extinction is rather unlikely in general even with the massive population decreases you're describing, and that rebuilding from a "higher base" is likely to lead to a wiser or otherwise morally better civilization than rebuilding from a lower base. (For instance, perhaps because more structures from the previous civilization are preserved, which makes it easier to "learn lessons" and have an inspiring narrative about what mistakes to avoid). That said, these things are hard to predict.[1]

  1. ^

    Firstly, we can tell probable-sounding just-so stories where slower rebuilding leads to better outcomes. Secondly, there isn't necessarily even a straightforward relationship between things like "civilizational wisdom" or "civilization's ability to coordinate" and averting some of the worst possible outcomes from earth-originating space colonization ("s-risks"). In particular, sometimes it's better to fail at some high-risk endeavor in a very stupid way rather than in a way that is "almost right." It's not obvious where on that spectrum a civilization would end up if you just make it a bit wiser and better-coordinated. You could argue that "being wiser is always better" because wisdom means people will want to pause, reflect, and make use of option value when they're faced with an invention that has some chance of turning out to be a Pandora's box. However, the ability to pause and reflect again requires being above a certain threshold on things like wisdom and ability to coordinate – otherwise there may be no "option value" in practice. (When it comes to evaluating whether a given intervention is robust, it concerns me that EAs have historically applied the "option value argument" without caveats to our present civilization, which seems quite distinctly below that threshold the way things are going – though one may hope that we'll somehow be able to change that trajectory, which gives the basis for a more nuanced option-value argument.)

Comment by Lukas_Gloor on Democratising Risk - or how EA deals with critics · 2022-09-16T15:51:16.857Z · EA · GW

Update: Zoe and I had a call and the private info she shared with me convinced me that some people with credentials or track record in EA/longtermist research indeed discouraged publication of the paper based on funding concerns. I realized that I originally wasn't imaginative enough to think of situations where those sorts of concerns could apply (in the sense that people would be motivated to voice them for common psychological reasons and not as cartoon villains). When I thought about how EA funding generates pressure to conform, I was much too focused on the parts of EA I was most familiar with. That said, the situation in question arose because of specific features coming together – it wouldn't be accurate to say that all areas of the EA ecosystem face the same pressures to conform. (I think Zoe agrees with this last bit.) Nonetheless, looking forward I can see similar dynamics happening again, so I think it's important to have identified this as a source of bias.

Comment by Lukas_Gloor on Population Ethics Without Axiology: A Framework · 2022-09-16T14:08:56.469Z · EA · GW

I found the "court hearing analogy" and the overall discussion of population ethics in terms of the anticipated complains/appeals/preferences of future people a bit confusing (because, as you point out, it's not clear how it makes sense in light of the non-identity problem). In particular your tentative solution of talking about the interests of 'interest groups' seems like it's kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don't literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can't compare across individuals here, so it's not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.

I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)

Let me create a sketch based on the simplest formulation of the non-identity problem (have child A now or delay pregnancy a few days to have a healthier, better off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that they weren’t given full consideration of their interests. By contrast, if A and B envision themselves as an “interest group” behind some veil of ignorance of who is who after birth, they would agree that they prefer a procedure where it matters that the better off person is created.

Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there's a bit of a tension here (e.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make from their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies, on "who are the people we're envisioning creating?" The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another action on basically all defensible population-ethical positions. Accordingly, it's only those no-go actions that minimal morality prohibits.) Therefore, I flagged the non-identity problem as an area of further exploration regarding its implications for my view.

I understand the difference in emphasis between saying that the moral significance of people's well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people's well-being (or something to that effect). But I'm curious what this means in a decision-relevant sense?

As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don't necessarily disagree with the two examples you gave / your two points about axiology – though I'm not sure I understood the second bullet point. I'm not familiar with that particular concept by Bernard Williams.)

I can explain again the role of subjectivist axiologies in my framework: 

Minimal morality doesn't have an axiology – interests/goals matter for minimal morality, but minimal morality doesn't tell us exactly what to do under all circumstances. It's more like a set of constraints or principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn't satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I'm saying that there are several different equally defensible options for subjective axiologies. What this means in practice is mostly just "To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn't act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks."

As I say in the text: 

Where people have well-specified interests/goals, it would be a preposterous conception of [ambitious (care-)morality] to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering.

(My comment replies to Richard Ngo cover some more points along the same theme.) 

Comment by Lukas_Gloor on Population Ethics Without Axiology: A Framework · 2022-09-16T13:59:55.607Z · EA · GW

Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful "better than" relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say "sure, some comparisons are clear, but others are vague or subjective" seem complicated. Do you just need to opt out of the entire game of "some states of affairs are better than other states of affairs (discontinuous with our own world)"? Curious how you frame this in your own mind.

This description doesn't quite resonate with me. Maybe it's close, though. I would rephrase it as something like "We can often say that one outcome is meaningfully better than another outcome on some list of evaluation criteria, but there's no objective list of evaluation criteria that everyone ought to follow." (But that's just a rephrasing of the central point "there's no objective axiology.")

I want to emphasize that I agree that, e.g., there’s an important sense in which creating a merely medium-happy person is worse than creating a very happy person. My framework allows for this type of better-than relation even though my framework also says “person-affecting views are a defensible systematization of ambitious morality.” How are these two takes compatible? Here, I point out that person-affecting views aren’t about what’s best for newly created people/beings. Person-affecting views, the way I motivate them in my framework, would be about doing what’s best for existing and sure-to-exist people/beings. Sometimes, existing and sure-to-exist people/beings want to bring new people/beings into existence. In those cases, we need some guidance about things like whether it’s permissible to bring unhappy people into existence. My solution: minimal morality already has a few things to say about how not to create new people/beings. So, person-affecting views are essentially about maximally benefitting the interests/goals of existing people/beings without overstepping minimal morality when it comes to new people/beings. 

The above explains how my view “creatively ducks” arguments against the asymmetry.

I wouldn’t necessarily say that my view ducks the repugnant conclusion – at least not in all instances! Mostly, my view avoids versions of the repugnant conclusion where the initial paradise-like population is already actual. This also means it blocks the very repugnant conclusion. By contrast, when it comes to a choice between colonizing a new galaxy with either a small paradise-like population or a very large population with people with lower but still positive life quality, my framework actually says “both of these are compatible with minimal morality.”

(Personally, I've always found the repugnant conclusion a lot less problematic when it plays out with entirely new people in a far-away galaxy.)

Comment by Lukas_Gloor on Population Ethics Without Axiology: A Framework · 2022-09-16T13:56:01.038Z · EA · GW

You say The exact reach of minimal morality is fuzzy/under-defined. How much is entailed by "don't be a jerk"? This seems important. For instance, you might see 'drowning child' framings as (compelling) efforts to move charitable giving within the purview of "you're a jerk if you don't do this when you comfortably could." Especially given the size of the stakes, could you imagine certain longtermist causes like "protecting future generations" similarly being framed as a component of minimal morality?

Yes, I do see the drowning child thought experiment as an example where minimal morality applies!

Regarding "protecting future generations as a component of minimal morality":

My framework could maybe be adapted to incorporate this, but I suspect it would be difficult to make a coherent version of the framework where the reasons for (fully/always) counting newly created future generations (and "cooperating through time" framings) don't somehow re-introduce the assumption "something has intrinsic value." I'd say the most central, most unalterable building blocks of my framework are "don't use 'intrinsic value' (or related concepts) in your framing of the option space" and "think about ethics (at least partly) from the perspective of interests/goals." So, to argue that minimal morality includes protecting our ability to bring future generations into existence (and actually doing this) regardless of present generations' concerns, you'd have to explain why it's indefensible/being a jerk to prioritize existing people over people who could exist. The relevant arguments I brought up against this are in this section, which includes endnote 21 for my main argument. I'll quote them here:

Arguably, [minimal morality] also contains a procreation asymmetry for the more substantial reason that creating a specific person singles them out from the sea of all possible people/beings in a way that “not creating them” does not.[21]

And here the endnote:

    If I fail to create a happy life, I'm acting suboptimally towards the subset of possible people who'd wish to be in that spot – but I'm not necessarily doing anything to highlight that particular subset. (Other possible people wouldn't mind non-existence, and others yet would want to be created, but only under more specific conditions/circumstances.) By contrast, when I make a person who wishes they had never been born, I single out that particular person in the most real sense. If I could foresee that they would be unhappy, the excuse "Some other possible minds wouldn't be unhappy in your shoes" isn't defensible. ↩︎

A key ingredient to my argument is that there's no "universal psychology" that makes all possible people have the same interests/goals or the same way of thinking about existence vs. non-existence. Therefore, we can't say "being born into a happy life is good for anyone." At best, we could say "being born into a happy life is good for the sort of person who would find themselves grateful for it and would start to argue for totalist population ethics once they're alive." This raises the question: What about happy people who develop a different view on population ethics?

I develop this theme in a bunch of places throughout the article, for instance in places where I comment on the specific ways interests/goals-based ethics seem under-defined:  

(1) Someone has under-defined interests/goals. 

(2) It’s under-defined how many people/beings with interests/goals there will be. 

(3) It’s under-defined which interests/goals a new person will have.


Point (3) in particular is sometimes under-appreciated. Without an objective axiology, I don't think we can generalize about what’s good for newly created people/beings – there’s always the question “Which ones??” 

Accordingly, there (IMO) seems to be an asymmetry here related to how creating a particular person singles out that particular person’s psychology in a way that not creating anyone does not. When you create a particular person, you better make sure that this particular person doesn’t object to what you did. 

(You could argue that we just have to create happy people who will be grateful for their existence – but that would still feel a bit arbitrary in the sense that you're singling out a particular type of psychology (why focus on people with the potential for gratefulness to exist?), and it would imply things like "creating a happy Buddhist monk has no moral value, but creating a happy life-hungry entrepreneur or explorer has great moral value." In the other direction, you could challenge the basis for my asymmetry by arguing that only looking at a new mind's self-assessment about their existence is too weak to prevent bad things. You could ask "What if we created a mind that doesn't mind being in misery? Would it be permissible to engineer slaves who don't mind working hard under miserable conditions?" In reply to that, I'd point out that even if the mind ranks death after being born as worse than anything else, that doesn't make it okay to bring such a conflicted being into existence. The particular mind in question wouldn't object to what you did, but nowhere in your decision to create that particular mind did you show any concern for newly created people/beings – otherwise you'd have created minds that don't let you exploit them maximally and don't have the type of psychology that puts them into internally conflicted states like "ARRRGHH PAIN ARRRGHH PAIN ARRRGHH PAIN, but I have to keep existing, have to keep going!!!" You'd only ever create that particular type of mind if you wanted to get away with not having to care about the mind's well-being, and this isn't a defensible motive under minimal morality.)

At this point, I want to emphasize that the main appeal of minimal morality is that it’s really uncontroversial. Whether potential people count the same as existing and sure-to-exist people is quite a controversial issue. My framework doesn’t say “possible people don’t count.” It only says that it’s wrong to think everyone has to care about potential happy future people.

All that said, the fact that EAs exist who stake most or even all of their caring budget on helping future generations come into a flourishing existence is an indirect reason why minimal morality may include caring for future generations! So, minimal morality takes this detour – which you might find a counterintuitive reason to care about the future, but nonetheless – where one reason people should care about future generations (in low-demanding ways) is because many other existing people care strongly and sincerely about there being future generations.

Comment by Lukas_Gloor on Population Ethics Without Axiology: A Framework · 2022-09-16T13:34:02.280Z · EA · GW

Thanks for these questions! Your descriptions capture what I meant in most bullet points, but there are some areas where I think I failed to communicate some features of my position.

I'll reply to your points in a different order than you made them (because that makes a few things easier). I'll also make several comments in a thread rather than replying to everything at once.

I had an overall sense that you are both explaining the broad themes of an alternative to population ethics grounded in axiology; and then building your own richer view on top of that (with the court hearing analogy, distinction between minimal and ambitious morality, etc), such that your own view is like a plausible instance of this broad family of alternatives, but doesn't obviously follow from the original motivation for an alternative?  Is that roughly right?

That’s right! I’m not particularly attached to the details of the court hearing analogy, for instance. By contrast, the distinction between minimal morality and ambitious morality feels quite central to my framework. I wouldn’t know how to motivate person-affecting views without it. Better developing and explaining my intuition “person-affecting views are more palatable than many people seem to give them credit for” was one of the key motivations I had in writing the post.

(However, like I say in my post’s introduction and the summary, my framework is compatible with subjectivist totalism – if someone wants to dedicate their life toward an ambitious morality of classical total utilitarianism and cooperate with people with other goals in the style of minimal morality, that works perfectly well within the framework [and is even compatible with all the details I suggested for how I would flesh out and apply the framework].)

I also had a sense that you could have written a similar post just focused on simpler kinds of aggregative consequentialism (maybe you have in other posts, afraid I haven't read them all); in some sense you picked an especially ambitious challenge in (i) developing a perspective on ethics that can be applied broadly; and then (ii) applying it to an especially complex part of ethics. So double props I guess!

Yeah. I think the distinction between minimal morality and ambitious morality could have been a standalone post. For what it’s worth, my impression is that many moral anti-realists in EA already internalized something like this distinction. That is, even anti-realists who already know what to value (as opposed to feeling very uncertain and deferring the point where they form convictions to a time after more moral reflection or to the output of a hypothetical "reflection procedure") tend to respect the fact that others have different goals. I don’t think that’s just because they think they are cooperating with aliens. Instead, as anti-realists, they are perfectly aware that their way of looking at morality isn’t the only one, so they understand they’d need to be jerks in some sense to disrespect others’ goals or moral convictions.

In any case, explaining this distinction took up some space. Then, I added examples and discussions of population ethics issues because I thought a good way to explain the framework is by showing how it handles some of the dilemma cases people are already familiar with.

On this framework, on what grounds can someone not "defensibly ignore" another's complaint? Am I right in thinking this is because ignoring some complaints means frustrating others' goals or preferences, and not frustrating others' goals or preferences is indefensible, as long as we care about getting along/cooperating at all (minimal morality)?

(Probably you meant to say “and [] frustrating others’ goals or preferences is indefensible”?)

Yes, that’s what it’s about on a first pass. Other things that matter: 

  • The lesser of several evils is always defensible.
  • If it would be quite demanding to avoid thwarting someone’s interests/goals, then thwarting is defensible. [Minimal morality is low-demanding.]
Comment by Lukas_Gloor on [deleted post] 2022-09-15T19:08:29.648Z

Kudos for noticing this incongruency! I think I and others should have noticed confusion more here (even though many people did flag that Torres's tweet could be misrepresenting what happened). 

Comment by Lukas_Gloor on [deleted post] 2022-09-15T09:51:14.869Z

Thanks!

If some of these other people you had calls with about the topic could have posted on the same thread or same comment section and said something like "Talked to Zoe (or Luke) and they have info they can't disclose publicly that underscores their account and it seemed to all make sense to me" – that would have been enough to take care of my curiosity and skepticism! 

At this point, the main thing I'm curious about is your thoughts on Torres' involvement (edit: just saw that you made a long comment on that!). I don't think a call is necessary for that because it seems that, after all the speculation in the OP and this comment section, a public comment from you or Luke would be better than private calls.

That said, if you for some reason prefer to explain some things only in a private call and want someone to report back to the community with their overall impression and updates (positive or negative depending on their feelings about the call and without sharing any of the specifics), I'm happy to volunteer for that!

Comment by Lukas_Gloor on [deleted post] 2022-09-13T21:07:33.304Z

Exactly! 

Comment by Lukas_Gloor on [deleted post] 2022-09-13T21:03:25.003Z

I think you may not be aware of relevant context. (See the comment by Matis for the same point.) This has practically nothing to do with anonymity norms or evading forum bans. The point is that Zoe and Luke complained about the events before they posted their paper, implying that prominent people in EA tried to prevent them from publishing their thoughts, accused them of bad faith, warned them that EA funders would no longer fund them, etc. These accusations would seem to put "prominent EAs or EA funders" in a very bad light if their complaints and warnings were only directed at Zoe and Luke, two EAs with a good track record writing up their thoughts in a way that they consider fair and appropriate. By contrast, if such complaints/warnings were levelled against Torres or because of Torres (or even just in the context of the two of them collaborating closely with Torres as an initial co-author), that would make a lot of sense and seems hard to object to given Torres's track record (which they already had at the time) of repeatedly making bizarre and wrong accusations and generally being on a kind of crusade against longtermist EA.

Comment by Lukas_Gloor on [deleted post] 2022-09-13T11:29:57.005Z

The OP literally created a throw away account called throwaway151 just to attack a transgendered individual and has refused,

It's obvious that the OP would have made the exact same type of post if Torres hadn't changed their name and gender identity (and the post seems to be more about Zoe and Luke), so you're being incredibly misleading here. I assume it's probably due to the strong emotions involved – it's unfortunate how this situation developed. I'm not planning to engage further.

Edit: In light of new comments by the throwaway account, I retract my statement that "the post seems to be more about Zoe and Luke" – it seems like the OP also has strong views on associating with Torres all by itself. I still see absolutely no reason to believe that they're acting differently due to the change of gender identity, but I want to flag that I now understand better why the now-anonymous account above felt like the OP "had it out for Torres." (I'm not necessarily saying "having it out for Torres" is unwarranted; I'm just acknowledging a point.)

Comment by Lukas_Gloor on [deleted post] 2022-09-13T11:12:28.974Z

As a (minor) point related to this discussion, I want to flag that Zoe never got back to me after this interaction.

[Edit: This no longer applies – Zoe and I had a call and she did indeed convince me. See here for my update. Also, I want to flag that after it came out that Torres' involvement wasn't as a co-author at the time when Zoe faced criticism of the draft, it wouldn't make sense anyway to assume that she'd be misleading anyone about any sort of info. I didn't have a call with her to vet her or anything because that wasn't needed. I just took the opportunity to learn more about what happened before the publication of the paper, and so I can post an update to this comment here, which would be unfair to Zoe if left unaltered.]

It's very possible that this may just be due to her not using FB frequently (I messaged her on FB). Even so, I think it's bad form to pocket the credit (e.g., I at first somewhat changed my mind about a comment where I initially voiced skepticism about something, and could imagine that others updated in the same direction) for having specific info by saying that one is going to share it privately and then not sharing it privately.

For context, the specific claim under question was me saying that I'm skeptical that she presented an accurate view of the pushback she received on publishing the paper.

As soon as there's some evidence of non-optimal integrity (the evidence brought forward here about Torres IMO qualifies if it is accurate – though I wouldn't necessarily trust Torres to represent accurately what happened), whether to give someone the benefit of the doubt also becomes an issue of game theory rather than just epistemics.

Of course, if anyone else has talked to Zoe about the topic, or if Zoe herself wants to share more about the situation, we could more easily decide what's going on and whether her initial account of the pushback against the paper was roughly accurate.

Comment by Lukas_Gloor on [deleted post] 2022-09-13T10:54:59.638Z

In this instance, the "formerly X" seems quite relevant because of Torres's history in EA. If I was the OP, I wouldn't immediately know how to unambiguously make the point that we're talking about the person who made all these crazy bad-faith accusations against EA without something like "formerly X." (Of course, I'd see no need to mention "formerly X" if Torres was entirely new to EA or didn't have a public persona beforehand.) 

If you know of a better way to handle this issue with previous EA involvement, maybe it would be helpful for others to post a suggestion. 

Comment by Lukas_Gloor on [deleted post] 2022-09-13T10:47:01.282Z

I think you're only being downvoted for the "Just a thought" segment, not for pointing out that the name was still wrong (at the time you wrote the comment – it seems to be updated now). 

In the "Just a thought" section, you're IMO coming across as a fanatic on a crusade rather than someone who cares about EA being more welcoming and inclusive (or "taking the right side on a human rights issue" – as you view it; but others may not quite see it in the exact same way even if they generally agree that it's good to take low-effort actions to prevent others from potentially feeling bad or making a space more accessible for them).

As a comparison, I think factory farming is really bad and I think it's legitimate that vegans in 2014 or so criticized an EA conference for serving meat. Still, I would downvote vegans who include a rant about how it means EA is a terrible place for altruists if that's how they approach the issue. Instead, I think vegans who care about EAs not promoting meat at conferences should take the approach of "continue to criticize, but don't assume that the target of your criticism is flawed beyond repair for seeing things differently from you."

Likewise, I want a culture where people are receptive to criticism and ready to make low-effort accommodations even if they disagree with some aspects of the moral position in question.

You were insinuating that someone making a mistake (perhaps related to thoughtlessness or carelessness) is equivalent to a really bad action, and you were calling into question the integrity of EA as a movement (if it happens that a significant portion of EAs would be likely to do that kind of thing). You're doing this even after the OP showed willingness to update their statements (by changing pronouns at first – they then also changed Torres's name later [but I see there's also the issue of "formerly X" that you object to]).

Comment by Lukas_Gloor on Puzzles for Everyone · 2022-09-12T15:28:38.429Z · EA · GW

Sorry, I didn’t mean to accuse you of dishonesty (I'm adding an edit to the OP to make that completely clear). I still think the framing isn’t defensible (but philosophy is contested and people can disagree over what's defensible).

Even if you come to expect extinction, it remains accurate to view extinction as extinguishing the potential for future life.

Yes, but that’s different from extinguishing future people. If the last remaining members of a family name tradition voluntarily decide against having children, are they “extinguishing their lineage”? To me, “extinguishing a lineage” evokes central examples like killing the last person in the lineage or carrying out an evil plot to make the remaining members infertile. It doesn’t evoke examples like “a couple decides not to have children.”

To be clear, I didn't mean to say that I expect extinction. I agree that what we expect in reality doesn't matter for figuring out philosophical views (caveat). I mentioned the point about trajectories to highlight that we can conceive of worlds where no one wants humanity to stick around for non-moral reasons (see this example by Singer). (By "non-moral reasons," I'm not just thinking of some people wanting to have children. When people plant trees in their neighbourhoods or contribute to science, art, business or institutions, political philosophy, perhaps even YouTube and TikTok, they often do so because it provides personal meaning in a context where we expect civilization to stay around. A lot of these activities would lose their meaning if civilization was coming to an end in the foreseeable future.) To evaluate whether neutrality about new lives is repugnant, we should note that it only straightforwardly implies "there should be no future people" in that sort of world.

Your response to the Cleopatra example is similarly misguided. I'm not appealing to "existing people not wanting to die", but rather existing people being glad that they got to come into existence, which is rather more obviously relevant.

I think I was aware that this is what you meant. I should have explained my objection more clearly. My point is that there's clearly an element of deprivation when we as existing people imagine Cleopatra doing something that prevents us from coming to exist. It's kind of hard – arguably even impossible – for existing people to imagine non-existence as something different from no-longer-existence. By contrast, the deprivation element is absent when we imagine not creating future people (assuming they never come to exist and therefore aren’t looking back to us from the vantage point of existence).

To be clear, it's perfectly legitimate to paint a picture of a rich future where many people exist and flourish to get present people to care about creating such a future. However, I felt that your point about Cleopatra had a kind of "gotcha" quality that made it seem like people don't have coherent beliefs if they (1) really enjoy their lives but (2) wouldn't mind if people at some point in history decide to be the last generation. I wanted to point out that (1) and (2) can go together. 

For instance, I could be "grateful" in a sense that's more limited than the axiologically relevant sense – "grateful" in a personal sense but not in the sense of "this means it's important to create other people like me." (I'm thinking out loud here, but perhaps this personal sense could be similar to how one can be grateful for person-specific attributes like introversion or a strong sense of justice. If I was grateful for these attributes in myself, that doesn't necessarily mean I'm committed to it being morally important to create people with those same attributes. In this way, people with the neutrality intuition may see existence as a person-specific attribute that only people who have that attribute can meaningfully feel grateful about. [I haven't put a lot of thought into this specific account. Another reply could be that it's simply unclear how to go about comparing one's existence to never having been born.])

Comment by Lukas_Gloor on Venn diagrams of existential, global, and suffering catastrophes · 2022-09-12T09:41:06.412Z · EA · GW

Edit: I just noticed that this post I'm commenting on is 2 years old (it came up in my feed and I thought it was new). So, the post wasn't outdated at the time!

Suffering risks (also known as risks of astronomical suffering, or s-risks) are typically defined as “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far” (Daniel, 2017).[7]

That definition is outdated (at least with respect to how CLR thinks about it). The newer definition is the first sentence in the source you link to (it's a commentary by CLR on the 2017 talk):

S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.

Reasons for the change: (1) Calling the future scenario "galaxy-wide utopia where people still suffer headaches every now and then" an "s-risk" may come with the connotation (always unintended) that this entire future scenario ought to be prevented. Over the years, my former colleagues at CLR and I received a lot of feedback (e.g., here and here) that this aspect of the older definition was off-putting.

(2) Calling something an "s-risk" when it doesn't constitute a plausible practical priority even for strongly suffering-focused longtermists may generate the impression that s-risks are generally unimportant. The new definition means that s-risks, as they are now defined,* are unlikely to be a rounding error for most longtermist views (except maybe if your normative views imply a 1:1 exchange rate between utopia and dystopia).

(*S-risks may still turn out to be negligible in practice for longtermist views that aren't strongly focused on reducing suffering, if particularly bad futures are really unlikely empirically, or if we can't find promising interventions. [Edit: FWIW, I think there are tractable interventions and s-risks don't seem crazy unlikely to me.])

Comment by Lukas_Gloor on Puzzles for Everyone · 2022-09-12T06:57:04.731Z · EA · GW

I think it would be horribly evil for the present generation to extinguish all future life, merely to moderately benefit ourselves (even in not purely frivolous ways).

"Extinguish" evokes the wrong connotations since neutrality is just about not creating new lives. You make it seem like there's going to be all this life in the future and the proponents of neutrality want to change the trajectory. This introduces misleading connotations because some views with neutrality say that it's good to create new people if this is what existing people want, but not good to create new people for its own sake.

I think using the word "extinguish" is borderline disingenuous. [edit: I didn't mean to imply dishonesty – I was being hyperbolic in a way that isn't conducive to good discussion norms.]

Likewise, the Cleopatra example in the OP is misleading – at the very least it begs the question. It isn't obvious that the fact that people, once they exist, don't want to die is a reason to bring them into existence in the first place. It is much more obviously a reason not to kill them once they exist. 

Comment by Lukas_Gloor on Samotsvety's AI risk forecasts · 2022-09-09T13:41:58.789Z · EA · GW

I think it's fair to interpret the Covid question to some extent as superforecasters not trying, but I'm confused about how you seem to be attributing little of it to prediction error? It could be a combination of both.

Good point. I over-updated on my feeling that "this particular question felt so easy at the time," to the point that I couldn't imagine why anyone who put serious time into it would get it badly wrong.

However, on reflection, I think it's most plausible that different types of information were salient to different people, which could have caused superforecasters to make prediction errors even if they were trying seriously. (Specifically, the question felt easy to me because I happened to have a lot of detailed info on the UK situation, which presented one of the best available examples to use for forming a reference class.) 

You're right that I essentially gave even more evidence for the claim you were making. 

Comment by Lukas_Gloor on Samotsvety's AI risk forecasts · 2022-09-09T12:18:43.694Z · EA · GW

This doesn't sound like an outlandish claim to me. Still, I'm not yet convinced.

I was really into Covid forecasting at the time, so I went back through my comment history and noticed that this seemed like an extremely easy call at the time. (I made this comment 15 days before yours, where I was predicting >100,000 cases with 98% confidence, saying I'd probably go to 99% after more checking of my assumptions. Admittedly, >100,000 cases in a single day is significantly less than >140,000 cases for the 7-day average. Still, a confidence level of 98%+ suggests that I'd definitely have put a lot more than 14% on the latter.) This makes me suspect that maybe that particular question was quite unrepresentative of the average track record of superforecasters. Relatedly, if we only focus on instances where it's obvious that some group's consensus is wrong, it's probably somewhat easy to find such instances (even for elite groups) because of the favorable selection effect at work. A thorough analysis would look at the track record on a pre-registered selection of questions.

Edit: The particular Covid question is strong evidence for "sometimes superforecasters don't seem to be trying as much as they could." So maybe your point is something like "On questions where we try as hard as possible, I trust us more than the average superforecaster prediction." I think that stance might be reasonable. 

Comment by Lukas_Gloor on Samotsvety's AI risk forecasts · 2022-09-09T05:44:27.413Z · EA · GW

A superforecaster aggregate (I’m biased re: quality of Samotsvety vs. superforecasters, but I’m pretty confident based on personal experience)

Is this on specific topic areas (e.g., "TAI forecasting" or "EA topics") or more generally? 

Comment by Lukas_Gloor on Bernard Williams: Ethics and the limits of impartiality · 2022-09-07T15:03:59.639Z · EA · GW

I was recently wrong about the same thing. I think her position has some similarities with non-naturalism, but it's true that she labels it as naturalism.

Comment by Lukas_Gloor on The Long Reflection as the Great Stagnation · 2022-09-07T12:42:24.918Z · EA · GW

Great post! The section "Truth-seeking requires grounding in reality" describes some points I've previously wanted to make but didn't have good examples for.

I discuss a few similar issues in my post The Moral Uncertainty Rabbit Hole, Fully Excavated. Instead of discussing "the Long Reflection" as MacAskill described it, my post there discusses the more general class of "reflection procedures" (could be society-wide or just for a given individual) where we hit pause and think about values for a long time. The post points out how reflection procedures change the way we reflect and how this requires us to make judgment calls about which of these changes are intended or okay. I also discuss "pitfalls" of reflection procedures (things that are unwanted and avoidable at least in theory, but might make reflection somewhat risky in practice). 

One consideration I discovered seems particularly underappreciated among EAs, in the sense that I haven't seen it discussed anywhere. I've called it "lack of morally urgent causes." In short, I think high levels of altruistic dedication, and people forming self-identities as altruists dedicated to a particular cause, often come from a kind of desperation about the state of the world (see Nate Soares' "On Caring"). During the Long Reflection (or other "reflection procedures" more generally), the state of the world is assumed to be okay/good: any serious problems are assumed to be mostly taken care of or put on hold. What results is a "lack of morally urgent causes" – which will likely affect the values and self-identities that the people who are reflecting might form. That is, compared to someone who forms their values prior to the moral reflection, people inside the moral reflection may be less likely to adopt identities that are strongly shaped by ongoing "morally urgent causes." This is neither good nor bad per se – it just seems like something to be aware of. 

Here's a longer excerpt from the post where I provide a non-exhaustive list of factors to consider for setting up reflection environments and choosing reflection strategies: 

Reflection strategies require judgment calls

In this section, I’ll elaborate on how specifying reflection strategies requires many judgment calls. The following are some dimensions alongside which judgment calls are required (many of these categories are interrelated/overlapping):

  • Social distortions: Spending years alone in the reflection environment could induce loneliness and boredom, which may have undesired effects on the reflection outcome. You could add other people to the reflection environment, but who you add is likely to influence your reflection (e.g., because of social signaling or via the added sympathy you may experience for the values of loved ones).
  • Transformative changes: Faced with questions like whether to augment your reasoning or capacity to experience things, there’s always the question “Would I still trust the judgment of this newly created version of myself?”
  • Distortions from (lack of) competition: As Wei Dai points out in this Lesswrong comment: “Current human deliberation and discourse are strongly tied up with a kind of resource gathering and competition.” By competition, he means things like “the need to signal intelligence, loyalty, wealth, or other ‘positive’ attributes.” Within some reflection procedures (and possibly depending on your reflection strategy), you may not have much of an incentive to compete. On the one hand, a lack of competition or status considerations could lead to “purer” or more careful reflection. On the other hand, perhaps competition functions as a safeguard, preventing people from adopting values where they cannot summon sufficient motivation under everyday circumstances. Without competition, people’s values could become decoupled from what ordinarily motivates them and more susceptible to idiosyncratic influences, perhaps becoming more extreme.
  • Lack of morally urgent causes: In the blogpost On Caring, Nate Soares writes: “It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.”
    In that passage, Soares points out that desperation can strongly motivate why some people develop an identity around effective altruism. Interestingly enough, in some reflection environments (including “My favorite thinking environment”), the outside world is on pause. As a result, the phenomenology of “desperation” that Soares described would be out of place. If you suffered from poverty, illnesses, or abuse, these hardships are no longer an issue. Also, there are no other people to lift out of poverty and no factory farms to shut down. You’re no longer in a race against time to prevent bad things from happening, seeking friends and allies while trying to defend your cause against corrosion from influence seekers. This constitutes a massive change in your “situation in the world.” Without morally urgent causes, you arguably become less likely to go all-out by adopting an identity around solving a class of problems you’d deem urgent in the real world but which don’t appear pressing inside the reflection procedure. Reflection inside the reflection procedure may feel more like writing that novel you’ve always wanted to write – it has less the feel of a “mission” and more of “doing justice to your long-term dream.”[11]
  • Ordering effects: The order in which you learn new considerations can influence your reflection outcome. (See page 7 in this paper. Consider a model of internal deliberation where your attachment to moral principles strengthens whenever you reach reflective equilibrium given everything you already know/endorse.)
  • Persuasion and framing effects: Even with an AI assistant designed to give you “value-neutral” advice, there will be free parameters in the AI’s reasoning that affect its guidance and how it words things. Framing effects may also play a role when interacting with other humans (e.g., epistemic peers, expert philosophers, friends, and loved ones).

Pitfalls of reflection procedures

There are also pitfalls to avoid when picking a reflection strategy. The failure modes I list below are avoidable in theory,[12] but they could be difficult to avoid in practice:

  • Going off the rails: Moral reflection environments could be unintentionally alienating (enormous option space; time spent reflecting could be unusually long). Failure modes related to the strangeness of the moral reflection environment include existential breakdown and impulsively deciding to lock in specific values to be done with it.
  • Issues with motivation and compliance: When you set up experiments in virtual reality, the people in them (including copies of you) may not always want to play along.
  • Value attacks: Attackers could simulate people’s reflection environments in the hope of influencing their reflection outcomes.
  • Addiction traps: Superstimuli in the reflection environment could cause you to lose track of your goals. For instance, imagine you started asking your AI assistant for an experiment in virtual reality to learn about pleasure-pain tradeoffs or different types of pleasures. Then, next thing you know, you’ve spent centuries in pleasure simulations and have forgotten many of your lofty ideals.
  • Unfairly persuasive arguments: Some arguments may appeal to people because they exploit design features of our minds rather than because they tell us “what humans truly want.” Reflection procedures with argument search (e.g., asking the AI assistant for arguments that are persuasive to lots of people) could run into these unfairly compelling arguments. For illustration, imagine a story like “Atlas Shrugged” but highly persuasive to most people. We can also think of “arguments” as sequences of experiences: Inspired by the Narnia story, perhaps there exists a sensation of eating a piece of candy so delicious that many people become willing to sell out all their other values for eating more of it. Internally, this may feel like becoming convinced of some candy-focused morality, but looking at it from the outside, we’ll feel like there’s something problematic about how the moral update came about.
  • Subtle pressures exerted by AI assistants: AI assistants trained to be “maximally helpful in a value-neutral fashion” may not be fully neutral, after all. (Complete) value-neutrality may be an illusory notion, and if the AI assistants mistakenly think they know our values better than we do, their advice could lead us astray. (See Wei Dai’s comments in this thread for more discussion and analysis.)

Comment by Lukas_Gloor on Universal Objective Meaning: UOM · 2022-09-06T15:41:16.921Z · EA · GW

I do not agree that UOM is necessarily non-naturalist in essence, it might very well be that some natural property of the world turns out to be synonymous with good/meaningful/right/UOM.

Views that say "we don't know the content of good/meaningful/right but it's what's important nonetheless" are usually non-naturalist because of the open-question argument: For any naturalist property we might identify as synonymous with good/meaningful/right, one can ask "Did we really identify the right property?"

Moral naturalists would answer: "That's a superfluous question. We've already determined that the property in question is relevant to things we care about in ways xyz. That's what we mean when we use moral terminology."

By contrast, non-naturalists believe that the open question argument has a point. I'd say the same intuition that drives the open question argument against moral naturalism seems to be a core intuition behind your post. The intuition says that the concepts "good/meaningful/right" have a well-specified and action-relevant meaning even though we're clueless about it and can't describe the success criteria for having found the answer.

(Some non-naturalists might say that good/meaningful/right may turn out to be co-extensional with some natural property, but not synonymous. This feels a bit like trying to have one's cake and eat it too; I'm confused about how to interpret that sort of talk. I can't think up a good story of how we could come into the epistemic position of understanding that non-naturalist moral concepts are co-extensional with specific naturalist concepts while maintaining that "things could have been otherwise.")

I don't think these distinctions are inherently particularly important, but it's useful to think about whether your brand of moral realism is more likely to fail because of (1) "accommodation charges" ("queerness") or due to (2) expert moral disagreement / not being able to compellingly demonstrate that a specific natural property is unambiguously the thing everyone (who's altruistic?) ought to orient their lives towards. (I'd have thought that 2 is more typically associated with moral naturalism, but there seem to be edge cases. For instance, Parfit's metaethical view strikes me as more susceptible to counterarguments of the second type – see for instance his "climbing the same mountain" analogy or his claim that his life's work would be in vain if he's wrong about his convergence arguments. While I've seen people call Parfit's view "non-naturalist" in some places, I've also heard the term "quietist," which seems to have the loose meaning of "kind of non-naturalist, but the naturalism vs. non-naturalism distinction is beside the point, and my view therefore doesn't make any strange metaphysical claims." In any case, it seems to me that Parfit thinks we know a great deal about "good/meaningful/right" and, moreover, that this knowledge is essential for his particular metaethical position, so his brand of moral realism seems strictly different from yours.)

You are right, this begs the question, but so do subjectivist stances.

Subjectivist stances feel more intellectually satisfying, IMO. I argue here that a moral ontology ("conceptual option space") based on subjective life goals fulfills the following criteria:

  • Relevant: Life goals matter to us by definition.
  • Complete: The life-goals framework allows us to ask any (prescriptive)[33] ethics-related questions we might be interested in, as long as these questions are clear/meaningful. (In the appendix, I’ll sketch what this could look like for a broad range of questions. Of course, there’s a class of questions that don’t fit into the framework. As I have argued in previous posts, questions about irreducible normativity don’t seem meaningful.)
  • Clear: The life-goals framework doesn’t contain confused terminology. Some features may still be vague or left under-defined, but the questions and thoughts we can express within the framework are (so I hope) intelligible.

By contrast, I think non-naturalist views fail some of these criteria. Elsewhere (see links in my previous comment), I've argued that moral realism based on irreducible normativity is not worth wanting because it cannot be made to work in anywhere close to the way our intuition about the terminology would indicate (similar to the concept of libertarian free will).

One heuristic for deciding whether further search for UOM would be misguided could be to consider current knowledge of the universe and the nature of reality, current rate of change of that knowledge and existence of evidence that there is no UOM. If knowledge is high, rate of change is almost zero (i.e. we seem to be converging on maximum understanding) and especially if there is evidence of non-existence, search for UOM is likely misguided.

I think we are far from this point currently, knowing almost nothing about the universe and not even knowing the full extent of how much we actually do not know about it.

I feel like the disagreement is more about "how to reason" than "how much do we know?" My main argument against something like UOM is "this contains concepts that aren't part of my philosophical repertoire." I argue for this in posts 2 and 3 of my anti-realism sequence.

I have more thoughts but they're best explained in the sequence.

Comment by Lukas_Gloor on On the Philosophical Foundations of EA · 2022-09-01T11:21:45.340Z · EA · GW

Great post! I enjoyed reading it and found myself nodding along.

I think you could maybe say more about what follows from your critique. In the beginning, you write this:

I think EAs should care more about debates around which ethical theory is true and why.

You then argue (quite persuasively, IMO) that consequentialism isn't the only way to conceptualize the option space in ethics.

But you don't say much about what would change if more EAs became Pluralistic Moral Reductionists ("both consequentialism and Kantianism [or contractualism?] apply/have merits, depending on how we frame the question" – my favored option) or if they entirely adopted a non-consequentialist outlook (where duties to benefit could remain in place).

Comment by Lukas_Gloor on Universal Objective Meaning: UOM · 2022-08-29T23:43:57.619Z · EA · GW

There are a lot of convincing arguments against certain forms of moral realism but there are neither good arguments nor evidence that would allow us to entirely write off the possibility of an answer to these questions existing that is objectively and universally true. (there is no proof that objectivist moral realism is not true)

That's an interesting endnote.

I think the arguments against this type of non-naturalist moral realism (or "moral realism based on irreducible normativity," as I sometimes call it) are indeed pretty decisive – but I'd agree that you can't get to 100%. Still, do you have thoughts on the success criteria for how someone could determine if they have found the answer to universal objective meaning? If not, why do you think there's an answer? Without knowing anything about the concept's content and without understanding the success criteria for having found the right content, is there a way for the concept to have a well-specified meaning (instead of being a pointer to a subjective feeling)? Based on my understanding of how words obtain reference, I don't see any such way.

In any case, the endnote makes it sound like you'd want people to continue searching anyway because of some wager. But why does universal objective meaning trump subjective meaning? That seems to beg the question.

Also, putting time and effort into researching the content of some obscure concept (that most likely doesn't have any content) has opportunity costs. What would you do if, even after training the future's most advanced AI systems to do philosophy, the answer continues to be "We can't answer your question; you're using concepts in a way that doesn't make sense"?

If continuing the search would take up most of the world's resources, at what point would you say, "Okay, I'm sufficiently convinced that this endeavor – the search for UOM – was misguided. Let's optimize for things that people find subjectively good and meaningful, like helping others, reducing suffering, accomplishing personal life goals, perhaps (for some people) creating new happy people, etc."?

If there would come such a point, then why do you think we haven't yet reached it? (Or maybe we could say we have mostly reached it, but it could be worthwhile to keep the possibility of moral realism based on irreducible normativity in the back of our heads to double-check our assumptions in case we ever build those AI philosophy advisors?)

Alternatively, if there'd be no such point where you'd give up on an (ex hypothesi increasingly costly) search, doesn't that seem strangely fanatical?

(To be clear, I don't think everything that goes under the label "non-naturalist moral realism" is >99% likely to be meaningless or confused. Some of it just seems under-defined and therefore a bit pointless, and some of it seems to be so similar to naturalist moral realism that we can just discuss the arguments for and against naturalist moral realism instead – which are pretty different and don't apply to the way the OP is arguing.)

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-29T08:44:57.264Z · EA · GW

Under this view, is another (pro tanto) reason why it's bad to kill (not-entirely-satisfied) people that their satisfaction/fulfillment is worth preserving (i.e. is good in a way that outweighs associated frustration)?

I would answer "No."

If we answer "No," on the grounds that fulfillment can't outweigh frustration, this would seem to imply that one should kill people, whenever their being killed would frustrate them less than their continued living. Problematically, that seems like it would probably apply to many people, including many pretty happy people.

The preference against being killed is as strong as the happy person wants it to be. If they have a strong preference against being killed, then the preference frustration from being killed would be a lot worse than the preference frustration from an unhappy decade or two – it depends on how the person herself would want to make these choices.

I haven't worked this out as a formal theory but here are some thoughts on how I'd think about "preferences."

(The post I linked to primarily focuses on cases where people have well-specified preferences/goals. Many people will have under-defined preferences and preference utilitarians would also want to have a way to deal with these cases. One way to deal with under-defined preferences could be "fill in the gaps with what's good on our experience-focused account of what matters.")

Comment by Lukas_Gloor on What domains do you wish some EA was an expert in? · 2022-08-29T08:28:16.595Z · EA · GW

Computational linguistics and evolutionary biology, with a focus on hominids over the last few million years. (Relevant to AI forecasting and maybe to language model comparisons?)

Psychology related to dark triad/tetrad traits. (Relevant to reducing the influence of malevolent actors.)

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-28T11:00:47.652Z · EA · GW

I think this misunderstands the point I was making. I meant to highlight how, if you're adopting a pluralistic view, then to defend a strong population asymmetry (the view emphasized in the post's title), you need reasons why none of the components of your pluralistic view value making happy people.

Thanks for elaborating! I agree I misunderstood your point here.

(I think preference-based views fit neatly into the asymmetry. For instance, Peter Singer – arguably the most popular exponent of preference utilitarianism at the time – initially weakly defended an asymmetric view in Practical Ethics. He only changed his view on population ethics once he became a hedonist. I don't think I'm even aware of a text that explicitly defends preference-based totalism. By contrast, there are several texts defending asymmetric preference-based views: Benatar, Fehige, Frick, and a younger Singer.)

as you suggest, you can get the needed reasons by introducing additional assumptions/frameworks, like rejecting the principle that it's better for there to be more good things.

Or that “(intrinsically) good things” don’t have to be a fixed component in our “ontology” (in how we conceptualize the philosophical option space). Or, relatedly, that the formula “maximize goods minus bads” isn’t the only way to approach (population) ethics. Not because it's conceptually obvious that specific states of the world aren't worth taking serious effort (and even risks, if necessary) to bring about. Instead, because it's questionable to assume that "good states" are intrinsically good – that we should bring them about regardless of circumstances, independently of people’s interests/goals.

Besides that, I think at this point we're largely in agreement on the main points we've been discussing?

I agree that we’re mainly in agreement. To summarize the thread: I think we’ve kept discussing because we both felt that the other party was presenting a slightly unfair summary of how many views a specific criticism applies or doesn’t apply to (or whether it applies “easily” vs. “only with some additional, non-obvious assumptions”).

I still feel a bit like that now, so I want to flag that out of all the citations from the OP, the NU FAQ is really the only one where it’s straightforward to say that one of the two views within the text – NHU but not NIPU – implies that it would (on some level, before other caveats) be good to kill people against their will (as you claimed in your original comment).

From further discussion, I then gathered that you probably meant that specific arguments from the OP could straightforwardly imply that it’s good to kill people. I see the connection there. Still, two points I tried to make that speak against this interpretation:

(1) People who buy into these arguments mostly don’t think their views imply killing people. (2) To judge what an argument “in isolation” implies, we need some framework for (population) ethics. The framework that totalists in EA rely on is question-begging and often not shared by proponents of the asymmetry.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-27T12:07:13.967Z · EA · GW

That does resolve some problems, but I think it also breaks most of the original post's arguments, since they weren't made in (and don't easily fit into) a preference-focused framework.

My impression of the OP's primary point was that asymmetric views are under-discussed. Many asymmetric views are preference-based and this is mentioned in the OP (e.g., the link to Anti-frustrationism or mention of Benatar).

Of the experience-based asymmetric views discussed in the OP, my posts on tranquilism and suffering-focused ethics mention value pluralism and the idea that things other than experiences (i.e., preferences mostly) could also be valuable. Given these explicit mentions, it seems false to claim that "these views don't easily fit into a preference-focused framework."

Probably similarly, the OP links to posts by Teo Ajantaival which I've only skimmed but there's a lengthy and nuanced-seeming discussion on why minimalist axiologies, properly construed, don't have the implications you ascribed to them.

The NU FAQ is a bit more single-minded in its style/approach, but on the question "Does negative utilitarianism solve ethics?" it says "ethics is nothing that can be 'solved.'" This at least tones down the fanaticism a bit and opens up options to incorporate other principles or perspectives. (Also, it contains an entire section on NIPU – negative idealized preference utilitarianism. So, that may count as another preference-based view alluded to in the OP, since the NU FAQ doesn't say whether it finds N(H)U or NIPU "more convincing.")

The post argues that making happy people isn't good and making miserable people is bad, because creating happiness isn't good and creating suffering is bad. But it's unclear how this argument can be translated into a preference-focused framework.

I'm not sure why you think the argument would have to be translated into a preference-focused framework. In my previous comment I wanted to say the following: (1) The OP mentions that asymmetric positions are underappreciated and cites some examples, including Anti-Frustrationism, which is (already) a preference-based view.
(2) While the OP does discuss experience-focused views that say nothing is of intrinsic value, those views are compatible with a pluralistic conception of "ethics/morality" where preferences could matter too. Therefore, killing people against their will to reduce suffering isn't a clear implication of the views.

Neither (1) nor (2) requires translating a specific argument from experiences to preferences. (That said, I think it's actually easier to argue for an asymmetry in preference contexts. The notion that acquiring a new preference and then fulfilling it is a good in itself seems counterintuitive. Relatedly, the tranquilist conception of suffering is more like a momentary preference than an 'experience,' and this shift IMO made it easier to justify the asymmetry.)

Could it be that "satisfying preferences isn't good, and frustrating preferences is bad"?

Why do you want to pack the argument into the framing "What is good and what is bad?" That framing – talking only about what's good or bad – feels like an artificially limited approach to population ethics. When something is good, does that mean we have to create as much of it as possible? That's a weird framework! At the very least, I want to emphasize that this is far from the only way to think about what matters.

In my post Dismantling Hedonism-inspired Moral Realism, I wrote the following:

  • Pleasure’s “goodness” is under-defined

I concede that there’s a sense in which “pleasure is good” and “suffering is bad.” However, I don’t think that brings us to hedonist axiology, or any comprehensively-specified axiology for that matter.

Behind the statement “pleasure is good,” there’s an under-defined and uncontroversial claim and a specific but controversial one. Only the under-defined and uncontroversial claim is correct.

Under-defined and uncontroversial claim: All else equal, pleasure is always unobjectionable and often something we come to desire.

Specific and controversial claim: All else equal, we should pursue pleasure with an optimizing mindset. This claim is meant to capture things like:

  • that, all else equal, it would be a mistake not to value all pleasures
  • that no mental states without pleasure are in themselves desirable
  • that, all else equal, more pleasure is always better than less pleasure

According to moral realist proponents of hedonist axiology, we can establish, via introspection, that pleasure is good in the second, “specific and controversial” sense. However, I don’t see how that’s possible from mere introspection!

Unlike the under-defined and uncontroversial claim, the specific and controversial claim not only concerns what pleasure feels like, but also how we are to behave toward pleasure in all contexts of life. To make that claim, we have to go far beyond introspecting about pleasure’s nature.

Introspection fundamentally can’t account for false consensus effects (“typical mind fallacy”). My error theory is that moral realist proponents of hedonist axiology tend to reify intuitions they have about pleasure as intrinsic components to pleasure.

Even if it seems obvious to a person that the way pleasure feels automatically warrants the pursuit of such pleasures (at some proportionate effort cost), the fact that other people don’t always see things that way should give them pause. Many hedonist axiology critics are philosophically sophisticated reasoners (consider, for example, that hedonism is not too popular in academic philosophy), so it would be uncharitable to shrug off this disagreement. For instance, it would be uncharitable and unconvincing to say that the non-hedonists are (e.g.) chronically anhedonic or confused about the difference between instrumental and intrinsic goods. To maintain that hedonist axiology is the foundation for objective morality, one would need a more convincing error theory.

I suspect that many proponents of hedonist axiology indeed don’t just “introspect on the nature of pleasure.” Instead, I get the impression that they rely on an additional consideration, a hidden background assumption that does most of the heavy lifting. I think that background assumption has them put the cart before the horse.

In the quoted passages above, I argued that the way hedonists think of "pleasure is good" smuggles in unwarranted connotations. Similarly and more generally, I think the concept "x is good," the way you and others use it for framing discussions on population ethics, bakes in an optimizing mindset around "good things ought to be promoted." This should be labelled as an assumption we can question, rather than as the default for how to hold any discussion on population ethics. It really isn't the only way to do moral philosophy. (In addition, I personally find it counterintuitive.)

(I make similar points in my recent post on a framework proposal for population ethics, which I've linked to in previous comments here.)

(I also agree--as I tried to note in my original comment's first bullet point--that pluralistic or "all-things-considered" views avoid the implications I mentioned. But I think ethical views should be partly judged based on the implications they have on their own. The original post also seems to assume this, since it highlights the implications total utilitarianism has on its own rather than as a part of some broader pluralistic framework.)

Okay, that helps me understand where you're coming from. I feel like "ethical views should be partly judged based on the implications they have on their own" is downstream of the question of pluralism vs. single-minded theory. In other words, when you evaluate a particular view, it already has to be clear what scope it has. Are we evaluating the view as "the solution to everything in ethics" or as "a theory about the value of experiences that doesn't necessarily say that experiences are all that matters"? If the view is presented as the latter (which, again, is explicitly the case for at least two articles the OP cited), then that's what it should be evaluated as. Views should be evaluated on exactly the scope that they aspire to have.

Overall, I get the impression that you approach population ethics with an artificially narrow lens about what sort of features views "should" have and this seems to lead to a bunch of interrelated misunderstandings about how some others think about their views. I think this applies to probably >50% of the views the OP discussed rather than just edge cases. That said, your criticisms apply to some particular proponents of suffering-focused ethics and some texts.

Comment by Lukas_Gloor on [Cause Exploration Prizes] Fix the Money, Fix the world · 2022-08-24T18:24:34.768Z · EA · GW

The objections you go through in the post seem a bit weak to me. When I think about whether crypto is good for the world, apart from the speculative bubble objection, the main concern I think of is "Isn't crypto a scammer's paradise and isn't it going to stay that way given that 'decentralization' is much uglier in practice than the naive rosy view would have you think?"

We've just seen a bunch of crypto entities blow up and badly hurt people who thought (and were promised) that their funds were safe. The highest-volume stablecoin (Tether) still looks like a massive fraud, and yet many of the biggest crypto exchanges keep dealing with it.

I know that some people in crypto would welcome better regulation, but is that going to work in practice and don't you lose part of the appeal of it when governments have to get heavily involved after all?

I'd like to see more engagement with these sorts of questions.

(Another point I'm unconvinced about is "Why bitcoin?" as opposed to other coins. Bitcoin has some supremely annoying exponents. Smart contracts seem like an interesting invention with use cases. Wouldn't the fact that something has a lot of use cases arguably make it safer as a store of value? (I mean, I bought mostly bitcoin at first but by now most of my crypto is ethereum because I can gamble with NFTs and that's more fun.))

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T11:05:15.348Z · EA · GW

It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

That's not how many people with the views Magnus described would interpret their views.

For instance, let's take my article on tranquilism, which Magnus cites. It says this in the introduction:

Tranquilism is not meant as a standalone moral theory, but as a way to think about well-being and the value of different experiences. Tranquilism can then serve as a building block for more complex moral views where things other than experiences also matter morally.

Further in the text, it contains the following passage:

As a theory limited to the evaluation of experienced well-being, tranquilism is compatible with pluralistic moral views where things other than experiences – for instance the accomplishment of preferences and life goals – can be (dis)valuable too.

And at the end in the summary:

Tranquilism is not committed to the view that cravings are all that matter. Our motivation is multifaceted, and next to impulsive motivation through cravings, we are also motivated by desires to achieve certain goals. People do not solely live for the sake of their personal well-being; we may also (or even exclusively) hold other goals, including goals about the welfare of others or goals about the state of the world. Thinking about morality in terms of goals (or “ends”) has inspired rationality-based accounts of cooperation such as Kantianism (arguably), contractualism, or coordinated decision-making between different value systems.28 Furthermore, if one chooses to regard the achievement of preferences or goals as valuable in itself, this can inspire moral axiologies such as preference-based consequentialism or coherent extrapolation, either as a complement or an alternative to one’s theory of well-being. (And of course, one’s goals may include many other components, including non-moral or non-altruistic ones.)

I generally think EAs are too fond of single-minded conceptions of morality. I see ethics as being largely about people's interests/goals. From that perspective, it would be preposterous to kill people against their will to prevent future suffering.

That said, people's "goals" are often under-defined, and population ethics as a whole is under-defined (it isn't fixed how many people there will be or what types of goals new people will have), so there's also room for an experience-focused "axiology" like tranquilism to deal with cases that are under-defined according to goal-focused morality.

I think there's a bit of confusion around the conclusion "there's nothing with intrinsic value." You seem to be assuming that the people who come to this conclusion completely share your framework for how to think about population ethics and then conclude that where you see "intrinsic value," there's nothing in its place. So you interpret them as thinking that killing people is okay (edit: or would be okay absent considerations around cooperation or perhaps moral uncertainty). However, when I argue that "nothing has intrinsic value," I mostly mean "this way of thinking is a bit confused and we should think about population ethics in an entirely different way." (Specifically, things can be conditionally valuable if they're grounded in people's interests/goals, but they aren't "intrinsically valuable" in the sense that it's morally pressing to bring them about regardless of circumstances.)

Comment by Lukas_Gloor on The standard person-affecting view doesn't solve the Repugnant Conclusion. · 2022-08-23T20:25:28.700Z · EA · GW

That's a good point!

I would say "creating happy people is neutral; creating unhappy people is bad" is the CliffsNotes version of (asymmetric) person-affecting views, but there are further things to work out to make the view palatable. There are various attempts to do this (e.g., Meacham introduces "saturating counterpart relations"). My attempt is here.

In short, I think person-affecting views can be framed as "preference utilitarianism for existing people/beings, low-demanding contractualism (i.e., 'don't be a jerk') for possible people/beings."

"Low-demanding contractualism" comes with principles like:

  • Don’t create minds that regret being born.
  • Don’t create minds and place them in situations where their interests are only somewhat fulfilled if you could easily have provided them with better circumstances.
  • Don’t create minds destined for constant misery, even if you also equip them with a strict preference for existence over non-existence.

See also the discussion in this comment thread.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T18:40:54.024Z · EA · GW

I agree with the 'spawned an industry' point and how that makes it difficult to assess how widespread various views really are.

As usual (cf. the founding impetus of 'experimental philosophy'), philosophers don't usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.

Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who's a co-author of the paper) told me recently that he thinks these types of surveys are not worth updating on by much [edit: but "casts some doubt on" is still accurate if we previously believed people would have clear answers that favor the asymmetry] because the subjects often interpret things in all kinds of ways or don't seem to have consistent views across multiple answers. (The publication itself mentions in the "Supplementary Materials" that framing effects play a huge role.)

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T15:24:52.903Z · EA · GW

Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn't automatically do that, so there's no general reason to add happy people if it doesn't satisfy a preference of someone who is here already?

Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives.

As I say in the longer post:

Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”

I agree with what you write about "objective" – I'm guilty of violating your advice.

(That said, I think there's a sense in which preference utilitarianism would be unsatisfying as a "moral realist" answer to all of ethics because it doesn't say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism – what if objective preference utilitarianism says I should think of my preferences in one particular way but that doesn't resonate with me?)

Couldn't you show that adding suffering people isn't automatically bad by the same reasoning, since it doesn't necessarily violate an existing preference?

I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I'm relying on a distinction between "ambitious morality" and "minimal morality" (= "don't be a jerk"), which also only makes sense if there's no objective axiology.

I don't expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section "minimal morality vs. ambitious morality" here. This link explains why I think it makes sense to have a distinction between minimal morality and ambitious morality, instead of treating all of morality as the same thing. ("Care morality" vs. "cooperation morality" is a similar framing, which probably tells you more about what I mean here.) And my earlier comment (in particular, its last paragraph) already explained why I think minimal morality contains a population-ethical asymmetry.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T14:07:26.982Z · EA · GW

The short answer:

Thinking in terms of "something has intrinsic value" privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:

[...] why do we have reason to prevent what is bad but no reason to bring about what is good?"

The comment presupposes that there's "something that is bad" and "something that is good" (in a sense independent of particular people's judgments – this is what I meant by "objective"). If we grant this framing, any arguments for why "create what's good" is less important than "don't create what's bad" will seem ad hoc!

Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like "what's good" or "something has intrinsic value." I think things are good when they're connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) "conditional value," but I don't understand "intrinsic value."

The longer answer:

Here's a related intuition:

  • There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”

In my post, "Population Ethics Without [an Objective] Axiology," I defended a specific framework for thinking about population ethics. From the post:

Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals;” giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for, what it’s trying to accomplish.

If there were an objective axiology, I might be making a mistake in how I plan to live a fulfilled, self-oriented life. Namely, if the way I choose to live my life doesn't give sufficient weight to things that are intrinsically good according to the objective axiology, then I'm making some kind of mistake. I think it's occasionally possible for people to make "mistakes" about their goals/values if they're insufficiently aware of alternatives and would change their minds if they knew more, etc. However, I don't think it's possible for truly well-informed reasoners to be wrong about what they deeply care about, and I don't think "becoming well-informed" leads to convergence of life goals among people/reasoners.

I'd say that the main force behind arguments against person-affecting views in population ethics is usually something like the following:

"We want to figure out what's best for morally relevant others. Well-being differences in morally relevant others should always matter – if they don't matter on someone's account, then this particular account couldn't be concerned with what's best for morally relevant others."

As you know, person-affecting views tend to come out in such a way that they say things like "it’s neutral to create the perfect life and (equally) neutral to create a merely quite good life." (Or they may say that whether to create a specific life depends on other options we have available, thereby violating the axiom of independence of irrelevant alternatives.)

These features of person-affecting views show that well-being differences don't always matter on those views. Some people will interpret this as "person-affecting views are incompatible with the goal of ethics – figuring out what's best for morally relevant others."

However, all of this is begging the question. Who says that the same ethical rules should govern existing (and sure-to-exist) people/beings as well as possible people/beings? If there's an objective axiology, it's implicit that the same rules would apply (why wouldn't they?). However, without an objective axiology, all we're left with is the following:

  • Ethics is about interests/goals.
  • Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
  • The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps should even help them where it’s easy and plays to our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” has a lot of overlap with something like preference utilitarianism. (Though there are instances where people's life goals are under-defined, in which case people with different takes on "do the most moral/altruistic thing" may wish to fill in the gaps according to subjectivist "axiologies" that they endorse.)
  • On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results: (1) the number of interests/goals isn’t fixed, and (2) the types of interests/goals aren’t fixed.
  • This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
  • Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls.

So, without an objective axiology, there are these two separate perspectives. We can view person-affecting views as making the following statement:

"'Doing the most moral/altruistic thing' isn't about creating new people with new interests/goals. Instead, it's about benefitting existing (or sure-to-exist) people/beings according to their interests/goals."

In other words, person-affecting views concentrate their caring budget on one of two possible perspectives (instead of trying to design an axiology that incorporates both). That seems like a perfectly defensible approach to me!

Still, we're left with the question, "If your view focuses on existing (and sure-to-exist) people, why is it bad to create a miserable person?"

Someone with person-affecting views could reply the following:

"While I concentrate my caring budget on one perspective (existing and sure-to-exist people/beings), that doesn't mean my concern for the interests of possible people/beings is zero. My approach to dealing with merely possible people is essentially 'don't be a jerk.' That's exactly why I'm sometimes indifferent between creating a medium-happy possible person and a very happy possible person. I understand that the latter is better for possible people/beings, but since I concentrate my caring budget on existing (and sure-to-exist) people/beings, bringing the happier person into existence usually isn't a priority to me. Lastly, you're probably going to ask 'why is your notion of 'don't be a jerk' asymmetric?.' I.e., why not 'don't be a jerk' by creating people who would be grateful to be alive (at least in instances where it's easy/cheap to do so)? To this, my reply is that creating a specific person singles out that person (from the sea of possible people/beings) in a way that not creating them does not. There's no answer to 'What do possible people/beings want?' that applies to all conceivable beings, so I cannot do right by all of them, anyway. By not giving an existence slot to someone who would be grateful to exist, I admit that I'm arguably failing to benefit a particular subset of possible people/beings (the ones who would be grateful to get the slot). Still, other possible people/beings don't mind not getting the spot, so there's at least a sense in which I didn't disrespect possible people/beings as a whole interest group. By contrast, if I create someone who hates being alive, saying 'Other people would be grateful in your spot' doesn't seem like a defensible excuse. 'Not creating happy people' only means I'm not giving maximum concern to possible people/beings, whereas 'creating a miserable person' means I'm flat-out disrespecting someone specific, who I chose to 'highlight' from the sea of all possible people/beings (in the most real sense) – there doesn't seem to be a defensible excuse for that."

The long answer: My post Population Ethics Without [an Objective] Axiology: A Framework.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T12:24:35.842Z · EA · GW

Wow, I'd have said 30-65% for my 50% confidence interval, and <5% is only about 5-10% of my probability mass. But maybe we're envisioning this survey very differently.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T11:46:13.736Z · EA · GW

I upvoted this comment because I think there's something to it.

That said, see the comment I made elsewhere in this thread about the existence of selection effects. The asymmetry is hard to justify for believers in an objective axiology, but philosophers who don't believe in an objective axiology likely won't write paper after paper on population ethics.

Another selection effect is that consequentialists are morally motivated to spread their views, which could amplify consensus effects (even if it applies to consequentialists on both sides of the split, one group being larger and better positioned to start with can amplify the proportions after a growth phase). For instance, before the EA-driven wave of population ethics papers, presumably the field would have been split more evenly?

Of course, if EA were to come out largely against any sort of population-ethical asymmetry, that's itself evidence for (a lack of) convincingness of the position. (At the same time, a lot of EAs take moral realism seriously* and I don't think they're right – I'd be curious what a poll of anti-realist EAs would tell us about population-ethical asymmetries of various kinds and various strengths.)

*I should mention that this includes Magnus, author of the OP. I probably don't agree with his specific arguments for there being an asymmetry, but I do agree with the claim that the topic is underexplored/underappreciated.

Comment by Lukas_Gloor on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T11:21:12.788Z · EA · GW

That doesn't seem true to me (see MichaelPlant's comment).

Also, there's a selection effect in academic moral philosophy: people who don't find the concept of "intrinsic value" / "the ethical value of a life" compelling won't go on to write paper after paper about it. For instance, David Heyd wrote one of the earliest books on "population ethics" (the book was called "Genethics," but the term didn't catch on) and argued that the topic is maybe "outside the scope of ethics." Once you've said that, there isn't a lot else to say. Similarly, according to this comment by peterhartree, Bernard Williams also had issues with the way other philosophers approached population ethics. He argued for reasons anti-realism, the view that there's no perspective external to people's subjective reasons for action with the authority to tell us how to live.

If you want an accurate count of philosophers' views on population ethics, you have to throw the net wide to include people who looked at the field, concluded that it's a bit confused because of reasons anti-realism, and then moved on rather than writing paper after paper restating that argument. (The latter would be a bit boring because you'd conclude by saying something like "different positions on population ethics are similarly defensible – it depends on what people care to emphasize.")

Comment by Lukas_Gloor on The Repugnant Conclusion Isn't · 2022-08-23T10:55:21.882Z · EA · GW

I agree that some people don't seem to give hedonism a fair hearing when discussing experience machine thought experiments. But I also think some people have genuine reservations that make sense given their life goals.

Personally, I very much see the appeal of experience machines. Under the right circumstances, I'd be thrilled to enter! If I were single and my effective altruist goals were taken care of, I would leave my friends and family behind for a solipsistic experience machine. (I think I do care about having authentic relationships with friends and family to some degree, but definitely not enough to keep me out!) I'd also enter a non-solipsistic experience machine if my girlfriend wanted to join and we'd continue to have authentic interactions (even if that opens up the possibility of having negative experiences). The reason I wouldn't want to enter under default circumstances is that the machine would replace the person I love with a virtual person (this holds even if my girlfriend got her own experience machine, and everyone else on the planet too, for that matter). I know that I wouldn't necessarily be aware of the difference and that things with a virtual girlfriend (or girlfriends?) could be incredibly good. Still, entering this solipsistic experience machine would go against the idea of loving someone for the person they are (instead of for how they make me feel).

I wrote more experience machine thought experiments here.

but I don't think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.

I don't think there's such a thing as "the ethical value of a life," at least not in a well-defined objective sense. (There are clearly instances where people's lives aren't worth living and instances where it would be a tragedy to end someone's life against their will, so when I say the concept "isn't objective," I'm not saying there's nothing we can say about the matter. I just mean that it's defensible for different people to emphasize different aspects of "the value of a life." [Especially when we're considering different contexts, such as the value of an existing or sure-to-exist person vs. the value of creating a new person who is merely possible at the time we face the decision.])