You're welcome. Please write a post (even a shortform) about it someday. Something that attracts me in this literature (particularly in Scheffler) is how it picks out intuitions that often collide with premises or conclusions of reasoning based on something like the rational agent model (i.e., vNM decision theory). I think that, even for a philosophical theorist, it could be useful to know how prevalent these intuitions are, and what (social or psychological) explanations could be offered for them. (I admit that, just as one philosopher's modus ponens is another's modus tollens, someone's intuition might be someone else's cognitive bias.) For instance, Scheffler mentions that we (at least he and I) have a "primitive" preference for humanity's existence (I think by "humanity" he usually means rational agents similar to us - being driven extinct by Trisolarans would be bad, but not as bad as the end of all conscious rational agents); we usually prefer that humanity exist for a long time rather than a short period, even if both timelines have the same amount of utility - which seems to imply some sort of negative discount rate for the future, so violating usual "pure time preference" reasoning. Besides, we prefer world histories where there's a causal connection between generations / individuals over possible worlds with the same amount of utility (and the same length in time) where communities spring up and go extinct without any relation between them - I admit this sounds weird, but I think it might explain my malaise towards discussions of infinite ethics.
I was reading about Meghan Sullivan's "principle of non-arbitrariness," and it reminded me of Parfit's argument against subjectivist reasoning in On What Matters... but why are philosophers (well, and people in general) against arbitrariness? I mean, I do agree it's a tempting intuition, but I've never seen (a) a formal statement of what counts as arbitrary (is "arbitrary" arbitrary?), or (b) an a priori argument against it. Of course, if someone's preference ordering varies totally randomly, we can't represent them with a utility function, and perhaps we could accuse them of being inconsistent. But that's not what philosophers' examples usually chastise: if one has a predictable preference for eating shrimp only on Friday, or disregards pain only on Thursday, there's no instability here - you can represent it with a utility function (with time as a dimension).
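To make the point concrete, here's a minimal sketch (the function and payoffs are hypothetical, purely for illustration) of how a "shrimp only on Friday" preference is perfectly representable once time enters the state:

```python
from datetime import date

# Hypothetical payoffs: the "arbitrary" preference is stable and
# fully predictable once the day of the week is part of the state.
def utility(ate_shrimp: bool, day: date) -> float:
    is_friday = day.weekday() == 4  # Monday = 0, ..., Friday = 4
    if ate_shrimp:
        return 1.0 if is_friday else -1.0
    return 0.0

# The ordering never varies randomly, so a utility representation exists:
assert utility(True, date(2024, 5, 3)) > utility(False, date(2024, 5, 3))  # a Friday
assert utility(True, date(2024, 5, 6)) < utility(False, date(2024, 5, 6))  # a Monday
```

Nothing here is inconsistent in the decision-theoretic sense; the oddity, if any, lies elsewhere.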
There isn’t even any a priori feature allowing us to say that such a preference is evolutionarily unstable, since that could only be assessed by looking at whom our agent will interact with. Which makes me think that arbitrariness is not a priori at all, of course – it depends on social practices such as “giving reasons” for actions and decisions (I don't think Parfit would deny that; idk about Sullivan). There might be a thriving community of people who only love shrimp on Friday, for no reason at all; but, if you don’t share this abnormal preference, it might be hard to model their behavior and to cooperate with them - at least, in this example, when it comes to gastronomic enterprises. On the other hand, if you can just offer a story (even a barely believable one: “it’s a psychosomatic allergy”) to explain this preference, it’s OK: you’re just another peculiar human. I can understand you now; your explanation works as a salience that allows me to better predict your behavior.
I suspect many philosophical (a priori-like) intuitions depend more on things like Schelling points (i.e., the problem of finding salient solutions for social practices that people can converge to) than most philosophers would admit. Of course, late Wittgenstein scholars are OK with that, since for them everything is about forms of life, language games, etc. But I think relativistic / conventionalist philosophers unduly trivialize this feature, and so neglect an important point: whatever counts as arbitrary is not, well, arbitrary – and we can often demonstrate that what we call “arbitrary” is suboptimal, inconsistent with other preferences or intuitions, or hard to communicate (and so a poor candidate for a social norm / convention / intuition).
I guess you already have a bunch of questions prepared... I have a peculiar curiosity / interest in hearing Sachs talk about how a warmer climate might impact economic development. I think he could summarize his own view, then conflicting opinions, and draw conclusions about future impacts of climate change.
I wonder what other areas have failed to get into the SDGs - e.g., there's absolutely no concern for animal welfare, as the goals and targets are explicitly worded in conservationist terms. Most material I've read about this is limited to arguing that animal welfare and the SDGs are compatible - even this call for papers from MDPI (due 30 June), which might interest someone doing research in the area.
Could we have catastrophic risk insurance?
Mati Roy once suggested, in this shortform, that we could have "nuclear war insurance" - a mutual guarantee to cover losses due to nukes, to deter nations from a first strike. I dismissed the idea because, in this case, it wouldn't be an effective deterrent (if you have power and reasons enough to nuke someone, insurance costs won't be among your relevant concerns).
However, I wonder if this could be extrapolated to other C-risks, such as climate change - something insurance and financial markets are already trying to price. Particularly for C-risks that are not equally distributed (e.g., climate change will probably be worse for poor tropical countries) and that are subject to great uncertainty...
I mean, of course I don't expect countries would willingly cover losses in case of something akin to societal collapse, but, given the level of uncertainty, this could still foster more cooperation, as it'd internalize and dilute future costs across all participant countries... on the other hand, ofc, any form of insurance implies moral hazard, etc. But even this has a bright side, as it'd provide a legitimate case for having some kind of governance / supervision / enforcement on the subject... I guess I might be asking: why don't we have a "climate Bretton Woods"?
(I guess you could apply the argument for FHI's Windfall Clause here - it's just that they're concerned with benefits and companies, I'm worried about risks and countries)
Even if that's not workable for climate change, would it work with other risks? E.g., epidemics?
(I think I should have done better research on this... I guess either I am underestimating moral hazards and the problem of making countries cooperate, or there's a huge flaw in my reasoning here)
Is there anything like a public repository / document listing articles and discussions on social discount rates (similar to what we have for iidm)?
(I mean, I have downloaded a lot of papers on this - Stern, Nordhaus, Greaves, Weitzman, Posner etc. - and there are many lit reviews, but I wonder if someone is already approaching it in a more organized way)
I was wondering... We have (private) pension funds for children. Could / should we make them more widespread (maybe even mandatory)? Could we have government-sponsored funds? Parents (with the government's help) would save resources in a fund that could only be used by their offspring when they come of age; plus, unlike the current pension funds I know of, they would be able to use it as collateral, or to pay tuition, or to open a business, or maybe even to transfer it to another pension fund... For a longtermist, the pros are: it would increase overall savings (does it? or would people just divert resources from other funds?), transfer wealth to new generations (inequality of wealth between generations concerns me almost as much as possible inequalities of political power), and improve intergenerational cooperation... Of course, this can be said of sovereign funds, too, but I see there might be some advantage in having individual accounts (so sidestepping things like the tragedy of the commons). I'm not very confident, though.
Well, you're right that intergenerational cooperation lacks straight reciprocity... but we do have chains of cooperation that extend across time and often depend on the expectation that future people will sustain them - e.g., think about pension funds and long-term debt, or maybe even just plain cultural transmission.
I think Tarsney is awesome in this episode... but maybe missed two opportunities here:
i. The Berry Paradox is super cool, but the Paradox of the Question is equally addictive, and can basically be seen as a joke about Global Priorities research. But yeah, some people say it's not so paradoxical after all...
ii. one can also look at the temporal asymmetry as a problem affecting intergenerational cooperation: if you don't consider the interests of your predecessors as (equally) important, then you can expect your successors to do the same to you, and so you have fewer reasons to invest in the future. Even if you do have something like altruistic preferences towards future people, that preference is irrelevant to them. (Actually, I'm sort of surprised at how rare contractualist-like accounts of intertemporal justice are in EA literature - except for Sandberg's piece on Rawls)
Your "star systems" point reminds me another problem which seems totally absent in this whole discussion - namely, things like agency conflicts and single-points-of-failure. For instance, I was reading about Alcibiades, and I'm pretty sure he was (one of) the most astonishing men alive in his age and overshadowed his peers- brilliant, creative, ridiculously gorgeous, persuasive, etc. Sorry for the cautionary tale: but he caused Athens to go to an unnecessary war, then defected to Sparta, & defected to Persia, prompted an oligarchic revolution in his homeland in order to return... and people enjoyed the idea because they knew he was awesome & possibly the only hope of a way out... then he let the oligarchy be replaced by a new democratic regime of his liking, became a superstar general who changed the course of the war, but then let his subordinate protégé lose a key battle because of overconfidence... and finally just exiled in his castle while the city lost the war. I think one of the major advancements of our culture is that our institutions got less and less personal. So, while we are looking for star scientists, rulers, managers, etc. (i.e., a beneficious type of aristocracy) to leverage our output, we should also solve the resilience problems caused by agency conflicts and concentrating power and resources in few "points-of-failure". (I mean, I know difference in perfomance is a complex factual question per se, without us having to worry about governance; I'm just pointing out that, for many relevant activities where differences in performance will be highlighted the most, we're likely to meet these related issues, and they should be taken into account if your organisation is acting based on "differences in performance are huge")
The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking its advice and/or collaborating with it. These inquiries can concern any aspect of global catastrophic risk, but GCRI is particularly keen to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks, and improving China-West relations.
Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.
Future of Life Institute is looking for translators! (Forwarded from FLI's Newsletter) The outreach team is now recruiting Spanish and Portuguese speakers for translation work! The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet. We prefer a commitment of two hours per week but do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term. https://lnkd.in/d5YqX-h For more details and to apply, please fill out this form. We are also registering other languages for future opportunities so those with fluency in other languages may fill out this form as well.
(d) some EAs working in consulting firms (EACN) - which, among other things, aim to nudge corporations and co-workers into more effective behavior. But I didn't find any org providing consulting services to non-EA charities with the aim of making them more effective. Would it be low-impact? Or is it a low-hanging fruit?
One might think that this is basically the same job GW already does... Well, yeah, I suppose you would actually use a similar approach to evaluate impact, but it's very different to provide a charity with recommendations that aim to help it achieve its own goals. This would be framed as assistance, not as some sort of examination; while GW's stakeholders are donors, this "consulting charity" would work for the charities themselves. Besides, in order to prevent conflicts of interest, corporations often use different firms for auditing (which would be akin to charity evaluation - i.e., a service ultimately concerned with investors) and for consulting services (which are provided to the corporation and its managers). This could be particularly useful for charities in regions that lack an (effective) charity culture.
Update: an example of this idea is the Philanthropy Advisory Fellowship sponsored by EA Harvard - which has, e.g., made recommendations to Arymax Foundation on the best cause areas to invest in Brazil. But I believe an "EA Consulting" org would provide other services, and not only to funders.
Could it be useful for moderators to take into account the amount of karma / votes a statement receives?
I'm no expert here, and I just took a bunch of minutes to get an idea of the whole discussion - but I guess that's more than most people who will have contact with it. So it's not the best assessment of the situation, but maybe you should take it as evidence of what it'd look like for an outsider or the average reader. In Halstead's case, the warning sounds even positive:
However, when I discussed the negative claims with Halstead, he provided me with evidence that they were broadly correct — the warning only concerns the way the claims were presented. While it's still important to back up negative claims about other people when you post them, it does matter whether or not those claims can be reasonably backed up.
I think Aaron was painstakingly trying to follow moderation norms in this case; otherwise, moderators would risk having people accuse them of taking sides. I contrast it with Sean's comments, which were more targeted and catalysed Phil's replies, and ultimately led to the latter being banned; but Sean disclosed evidence for his statements, and consequently was not warned.
Not super-effective, but given Sanjay's post on ESG, maybe there are people interested: Ethics and Trust in Finance 8th Global Prize The Prize is a project of the Observatoire de la Finance (Geneva), a non-profit foundation, working since 1996 on the relationship between the ethos of financial activities and its impact on society. The Observatoire aims to raise awareness of the need to pursue the common good through reconciling the good of persons, organizations, and community. [...] The 8th edition (2020-2021) of the Prize was officially launched on 2 June 2020. The deadline for submissions is 31 May 2021. The Prize is open to people under the age of 35 working in or studying finance. Register here for entry into the competition. All essays submitted to the Prize are assessed by the Jury, comprising academics and professional experts.
I mean, it's pretty relevant for peace (I guess most wars result from conflicts between factions or from succession crises) and for a well-functioning government. People talk about the dangers of polarization, about why nations fail, or authoritarianism, or IIDM... It's not neglected per se (it's been the focus of some classical works in political philosophy & science), but I'm not sure all the low-hanging fruit has been picked; plus, thinking about interventions as increasing / decreasing political stability might help in assessing other areas (like IIDM).
First, it's not a feeling, it's a hypothesis. Please, do not mistake one for the other.
It could apply to them if they were not observed under stress and captivity conditions, displaying behaviors consistent with psychological suffering - like neurotic tics, vocalization, or apathy.
(Tbh, I don't quite see your point here, but I guess you possibly don't see mine, either)
Good point, thanks. However, even if EE and wild animal welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil and habitat restoration is a good - even if it's not immediately effective.
Well, if your EA were particularly well placed to tackle this problem, then the answer is likely yes: they would probably realize it's scalable and (partially) neglected. Plus, if God is reliable, then the Holy Advice would likely dominate other matters - AGI and x-risks are uncertain futures, and reducing present suffering would be greatly affected by the financial crisis.
In addition, maybe this is not quite the answer you're looking for, but I believe personal features (like fit and comparative advantages) would likely trump other considerations when it comes to choosing a cause area to work on (but not to donate to).
Obviously. But then, first, Effective Environmentalists are doing great harm, right? We should be arguing more about it.
On the other hand, if your basic welfare theory is hedonistic (at least for animals), then one good long life compensates for thousands of short miserable ones - because what matters is qualia, not individuals. And though I don't deny animals suffer all the time, I guess their "default welfare setting" must be positive if their reward system (at least in vertebrates) is to function properly.
So I guess it's more likely that we have some sort of instance of the "repugnant conclusion" here.
Ofc, this doesn't imply we shouldn't intervene in wild environments to reduce suffering or increase happiness. What is at stake is whether U(destroying habitats) > U(restoring habitats).
Is there some tension between population ethics + hedonic utilitarianism and the premises people in wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environment destruction)?
What I miss when I read about the morality of discounting is a disanalogy explaining why hyperbolic or exponential discount rates might be reasonable for individuals, with their limited lifespans and such-and-such opportunity costs, but not for intertemporal collective decision-making. Then we could understand why pure discounting is tempting, and maybe even realize there's something that temporal impartiality doesn't capture. If there's any literature about this, I'd like to know. Please, not the basic heuristics & biases stuff - I did my homework.
For instance, if human welfare were something that could grow like compound interest, it'd make sense to talk about pure exponential discounting. If you could guarantee that all of the dead in the Battle of Marathon would have, in expectation, added good to the overall happiness (or whatever you use as a goal function) in the world and transmitted it to their descendants, then you could say that those deaths are a greater evil than the millions of casualties in WW2; you could think of that welfare as "investment" instead of "consumption". But that's implausible.
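To see why the investment analogy breaks, here's a rough sketch (the rate and horizons are hypothetical, just for illustration) of what a pure exponential discount implies about the past:

```python
# Under a pure exponential discount rate r, welfare t years in the past
# gets weight (1 + r) ** t - as if it had compounded like invested capital.
def past_weight(r: float, years_ago: float) -> float:
    return (1.0 + r) ** years_ago

# Even at a 1% rate, ~2,500 years of compounding since Marathon would make
# one death there "outweigh" millions in WW2 (~80 years ago) - which is
# exactly why treating welfare as investment is implausible.
print(past_weight(0.01, 2500))  # roughly 6e10
print(past_weight(0.01, 80))    # roughly 2.2
```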
On the other hand, there's a small grain of truth here: a tragedy happening in the past will reverberate longer in the world historical trajectory. That's just causality + temporal asymmetry.
This makes me think about cluelessness... I do tend to think that good facts tend to lead to better consequences, in general; you don't have to be an optimist about it: bad facts just tend to lead to worse consequences, too. The opposite thesis, that a good/bad fact is as likely to cause good as evil, seems quite implausible. So you might be able to think about goodness as investment a little bit; instead of pure discounting, maybe we should have something like a proxy for "relative impact on world trajectories"?
I was thinking about Urukagina, the first monarch ever mentioned for his benevolence instead of his military prowess. Are there any common traits among such rulers? Should we write something like that Forum post on dark-trait rulers - but with the opposite sign?
I googled a bit about benevolent kings (I thought it'd provide more insight than looking at 20th-century biographies), but, except maybe for the enlightened despots, most of the figures in these lists (like Suleiman the Magnificent) are conquerors who just weren't brutal and were kind law-givers to their own people - which you could also say about Napoleon. I was thinking more about figures like Ashoka and Marcus Aurelius, who seem to have despised the hunger for conquest they saw in others and were actually willing to improve human welfare for moral reasons.
I love the subject, and thanks for the post. I'd even include some sort of "manslaughter-like" humanicide - i.e., assuming a high risk of destroying humanity. But I don't even dream of anything like that before we criminalize nuclear (or WMD in general) first strikes.
In the "voice of God" example, we're guaranteed to minimize error by applying this reasoning; i.e., if God asks this question to every possible human created, and they all answer this way, most of them will be right. Now, I'm really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; in that case, I'm not sure it minimizes error the same way. We often assume human population will increase exponentially and then suddenly go extinct; but then it seems like most people will end up mistaken in their predictions. Maybe we're using the wrong priors?
As I see it, the point is to estimate when extinction will occur by estimating the distribution of population across time, right? So we use a Rule of Succession-like reasoning... I'm OK with that, so far. N humans have lived, so we can expect N more humans to live, and we can update our estimate each time a new one is born... But then, why don't we use the time humans have already lived on Earth as input instead? I mean, that's Toby Ord's Precipice argument, right? So 200k years without extinction leads you to a very different guesstimate.
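As a sketch of how much the choice of reference class matters (the counts below are the usual rough figures, not precise data), compare a rule-of-succession estimate using elapsed centuries vs. cumulative births:

```python
# Laplace's rule of succession: after n extinction-free observations,
# the probability the next one is also extinction-free is (n + 1) / (n + 2).
def p_survive_next(n: int) -> float:
    return (n + 1) / (n + 2)

# Counting centuries of Homo sapiens (~2,000) vs. humans ever born
# (~100 billion) yields very different per-step survival probabilities -
# the unit of observation drives the guesstimate.
print(p_survive_next(2_000))            # per-century survival
print(p_survive_next(100_000_000_000))  # per-birth "survival"
```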
Thanks for the post. I'm often very surprised that people ignore income distribution when arguing about economics and welfare. Which leads me to ask: 1) which is the best (or most robust) estimate of inequality-adjusted income for welfare analysis: median income or Gini-adjusted average income? Or are they supposed to converge (which does not seem to be the case, according to this article)? (I guess one advantage of Gini-adjustment is that it seems to be used in other welfare metrics, like the HDI)
2) How relevant is wealth distribution - vis-à-vis income distribution? I can see how it's important for distribution of power in society (if you're comparing different groups, for instance), and I suppose wealth is important for one's own life evaluation and as a hedge against uncertainty and economic shocks... but it's hard for me to "put a number" on that.
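For question 1), here's a toy comparison (incomes entirely hypothetical) of the median against the Sen welfare index, i.e., mean income scaled by (1 - Gini):

```python
# Gini coefficient for a list of incomes (ascending-sort formula).
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

# Sen welfare index: mean income adjusted down by inequality.
def sen_index(incomes):
    return (sum(incomes) / len(incomes)) * (1 - gini(incomes))

incomes = [10, 20, 30, 40, 400]  # one very rich outlier
median = sorted(incomes)[len(incomes) // 2]
print(median)              # 30
print(sen_index(incomes))  # ≈ 36: mean 100 scaled by (1 - Gini ≈ 0.64)
```

So the two measures need not converge: the outlier drags the Gini-adjusted mean above the median here, and other distributions can push it below.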
Your post has inspired me to investigate whether EAs should contribute to public consultations issued by financial regulators on ESG standards, to argue for explicitly inserting mentions of animal welfare (and maybe later post something about it). For instance, would the EBA include something like this in European banking regulations? That's why Mercy For Animals (and others) have recently asked the Brazilian SEC (CVM) to mention animal welfare in regulatory norms about financial disclosures (we could provide a translation if necessary).
Sorry if this is a lame question, but do you think that regulations and standards on ESG that explicitly mentioned animal welfare - something more like soft law, or "comply or explain", e.g., "companies must disclose animal welfare policies", or "social and environmental risks include losses due to... animal cruelty" - could be enough to start a change in US antitrust law interpretation on blacklisting products out of animal welfare concerns?
you may be willing to incur a loss of (say) 50% on the value of the bad egg in order to achieve a benefit of (say) 3% on all of the rest of the portfolio
Curiously, I saw the idea of "universal ownership” (without this name) mentioned in this post (courtesy of Scott Alexander’s March links) about how investments are super correlated lately and how diversified investment funds have a piece of each part of the whole economy. It's the closest I've seen to computing "how much will x lose if this company drops 50%, but everyone else increases by 3%". That would explain why BlackRock (and the financial sector, since TCFD's creation) has been so responsible lately.
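The quoted trade-off is easy to make concrete (the portfolio weights below are hypothetical):

```python
# Net portfolio change when divesting: the "bad egg" loses bad_loss of its
# value while the rest of the portfolio gains rest_gain.
def net_change(bad_weight: float, bad_loss: float = 0.50,
               rest_gain: float = 0.03) -> float:
    return -bad_weight * bad_loss + (1 - bad_weight) * rest_gain

# If the bad egg is 5% of the portfolio: -0.05*0.50 + 0.95*0.03 = +0.0035,
# a small net gain. Break-even is at rest_gain / (bad_loss + rest_gain),
# about 5.7%: above that weight, the 50% hit outweighs the 3% boost.
print(net_change(0.05))
```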
Btw, could you link the Symposium you mentioned in the text relating universal ownership and fiduciary duty?
Super thanks for this post. I've seen some people arguing over this subject, yet nothing so well articulated so far. I'll post my remarks separately. But I'd like to begin with a very simple question: is there some sort of “EA ESG Group” or “EA Financial Ethics Group”? Would it be interesting to have one? And to link it with other groups and areas (like IIDM or legal topics)?
It reminds me of this weird sonnet (On fate & future) I drafted for some friends working with Generation Pledge (I'll have to share it - sorry for any lousy rhyme or offense I may have caused to this beautiful language, but I'm not a native speaker):
Unhealing stains, sons to be slain / As it's written: jihad and submission / We let Samsara ourselves drain / While Lord Shiva stated a mission.
Mystics, and yet, we don’t believe / For no told miracles anticipate / What brought us luck, skill and fate / The true great wonder we might live:
In a century – in History, just a moment – / The length of happiness has grown six-fold / And more than doubled the expected life /
Now, let it be your faith and my omen / As their fears and promises grow old / No more be bound to ancestors’ strife.
Nuka zaria: longtermist parenting?
I'm not totally kidding (OK, no more puns). Maybe reproduction could be seen as a credible commitment to the future - at least for rational people who actually ponder having children.
(That's something that weighs against me having children: do I want the people I care about most to live in the future? hmmm... maybe I'll think again in a few years)
I wonder if there's going to be a similar question on adoption