Animal Testing Is Exploitative and Largely Ineffective 2021-06-13T10:46:44.836Z
Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism 2021-05-08T11:44:30.113Z
Can a Vegan Diet Be Healthy? A Literature Review 2021-03-12T12:47:15.185Z
Two Inadequate Arguments against Moral Vegetarianism 2021-01-30T10:16:35.997Z
Tolstoy's Famine Relief Work in Ryazan & Considering Moral Intuitions 2021-01-09T10:07:47.993Z


Comment by Erich_Grunewald on Animal Testing Is Exploitative and Largely Ineffective · 2021-06-15T18:11:27.008Z · EA · GW

those are both good questions. i tried to find base rates with a cursory search but came up empty-handed. maybe i just didn't use the right search terms, though. but even if the numbers here are the same as the base rate, i would argue that's still pretty bad, because the costs involved in animal testing are higher. i think it makes sense to judge animal-testing research more stringently than other research. though base rates would be useful to see e.g. how difficult it would be to improve methodologies and reduce the amount of unproductive research.

one thing i didn't make clear in the post but which i now realise i should've is that an experiment not getting published due to lack of statistical significance (or, more precisely, failure to reject the null hypothesis) doesn't mean that the research wasn't valuable -- it could have gone unpublished due to publication bias.
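the publication-bias worry can be made concrete with a toy simulation (my own illustration, not from the post; all the numbers are made up): if only significant results get published, the published record overstates the true effect, which is part of why the unpublished null results still carry real information.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2      # small but real standardized effect
n_per_group = 30       # typical small-sample study
n_studies = 10_000

# observed effect in each study: roughly normal around the true effect,
# with standard error sqrt(2/n) in standardized units (sd treated as known
# for simplicity)
se = np.sqrt(2 / n_per_group)
observed = rng.normal(true_effect, se, n_studies)

# "publish" only studies reaching two-sided significance at p < .05,
# i.e. |observed| / se > 1.96
published = observed[np.abs(observed) / se > 1.96]

print(round(observed.mean(), 2))   # close to the true effect
print(round(published.mean(), 2))  # inflated by the significance filter
```

the unpublished studies here are not worthless -- pooling all of them recovers the true effect, while the published subset alone badly overstates it.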

Comment by Erich_Grunewald on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-05T10:55:10.773Z · EA · GW

nice, thanks for doing this!

If you’re not eating meat, you have to replace the protein and calories. At baseline, flour is 4,464 calories/dollar and 134g protein/dollar.


Perhaps the most important number is the cost to prevent an animal from being farmed. Initial estimates were as low as $0.10/life, but later came under scrutiny. One estimate puts the cost at $5.70 to save a chicken life, with pigs being around $150. Since that implies costs scale about linearly with meat produced, I’m assuming $636 to save a cow’s life, but these numbers are all speculative. Note also that these are estimates for one particular intervention.

i'm a bit confused here. what does saving a life entail? does it mean, say, getting the proteins you would've gotten from a chicken from plant-based sources instead? if so, the numbers seem to suggest that plant-based diets are more expensive than meat-based diets, which seems pretty unlikely to me? legumes, nuts, peas and soy-based products are all pretty affordable.

edit: also, the average american's calorie intake is significantly higher than the recommended amount. so one could argue that the same number of calories doesn't always need to be replaced. but of course reducing calorie intake is not feasible for everybody.
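for what it's worth, the quoted flour figures make the replacement cost easy to bound (a back-of-the-envelope sketch; the 2,000 kcal and 50 g protein targets are my assumptions, standard reference intakes, not numbers from the post):

```python
# rates quoted in the post, for flour
calories_per_dollar = 4464
protein_g_per_dollar = 134

# assumed daily targets (reference values, not from the post)
daily_calories = 2000
daily_protein_g = 50

calorie_cost = daily_calories / calories_per_dollar
protein_cost = daily_protein_g / protein_g_per_dollar

# the same flour supplies both, so the daily cost is roughly the larger
# of the two, not their sum
print(round(calorie_cost, 2))  # dollars/day for calories
print(round(protein_cost, 2))  # dollars/day for protein
```

at those rates a full day's calories from flour costs well under a dollar, which is part of why the multi-dollar per-life figures confuse me.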

Comment by Erich_Grunewald on What are things everyone here should (maybe) read? · 2021-05-18T23:23:28.085Z · EA · GW

a really great book for learning practical bayesian statistics is richard mcelreath's statistical rethinking. there is also a series of lectures on youtube.
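to give a taste of the book's approach: its running example estimates a proportion (how much of a globe's surface is water, given tosses landing on water or land) with simple grid approximation. a rough sketch from memory, so details may differ from the book's version:

```python
import numpy as np

# the book's globe-tossing data: 6 "water" out of 9 tosses
water, tosses = 6, 9

p_grid = np.linspace(0, 1, 1000)   # candidate values of the proportion
prior = np.ones_like(p_grid)       # flat prior over the grid
likelihood = p_grid**water * (1 - p_grid)**(tosses - water)
posterior = prior * likelihood
posterior /= posterior.sum()       # normalize into a probability distribution

posterior_mean = (p_grid * posterior).sum()
print(round(posterior_mean, 2))    # ≈ 0.64
```

with a flat prior this matches the analytic Beta-posterior mean of (6+1)/(9+2), which is a nice sanity check on the grid method.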

Comment by Erich_Grunewald on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-09T10:56:40.785Z · EA · GW

Only seeing this now, but she does have sections in the book on thinking about species, habitat loss, eliminating predation and what she calls "creation ethics" among other things. I didn't get the feeling reading the book that she would be against welfare reform, but leafing through the pages now I couldn't find any passage that covers that topic explicitly. Thanks for the resources.

Comment by Erich_Grunewald on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-08T19:59:42.733Z · EA · GW

That's interesting and I think that's true to a certain extent, the bottomless pits of suffering and all that. Though Kantianism does make some pretty strong demands in its own way, for instance in the way that it really hammers home the idea of seeing things from others' points of view (via the Formula of Humanity), or in the way that it considers some duties to be absolute ("perfect").

I believe that Korsgaard also thinks we have duties to help others promote their own good if it's at no great cost to ourselves, though these duties are not as strong as those not to violate other people's autonomy. I think maybe these sorts of duties lead to something like Effective Altruism, though I haven't really thought all of this through yet, or read much of the relevant literature, so what do I know.

Comment by Erich_Grunewald on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-08T19:32:10.361Z · EA · GW

Indeed, and a commenter there pointed out an interesting paper by Richard Yetter Chappell (pdf) which explores and argues against this claim by Korsgaard:

In utilitarianism, people and animals don’t really matter at all; they are just the place where the valuable things happen.

The title of the paper is "Value Receptacles". I haven't read it yet but I suspect it would be of interest to many here.

Comment by Erich_Grunewald on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-08T19:31:21.267Z · EA · GW

A commenter on LessWrong pointed out an interesting paper by Richard Yetter Chappell (pdf) which explores and argues against this claim by Korsgaard:

In utilitarianism, people and animals don’t really matter at all; they are just the place where the valuable things happen.

The title of the paper is "Value Receptacles". I haven't read it but I suspect it would be of interest to many here.

Comment by Erich_Grunewald on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-08T19:25:24.101Z · EA · GW

Thank you for the thoughtful comment! It is an excellent book – if you are at all interested in Kant's moral philosophy, I highly recommend it. I will preface the remainder of this comment with the caveat that I am explaining someone else's work, and that Professor Korsgaard may not agree with my interpretation. Also, any typos in the quoted passages are copying errors.

I haven't read her work myself and probably should, but I was told by someone that basically condition 3 or even having goal-directed behaviour is not necessary. I would hope it wouldn't be, because we could have a being who experiences good and bad and so has their own ends, but has no power to control what they experience and so would just be completely vulnerable and unable to pursue their own ends. Wouldn't such a being still matter?

Here's a passage from the book that expands on that thought but doesn't counter your objection:

The small [objection] is that the definition that I have given of what an animal is is not the same as the definition a contemporary biologist would give. An "animal", as I am using the term, is an organism that functions as an agent, where by agency I mean something like representation-governed locomotion. Animals are conscious organisms who seek out the things that are (functionally) good-for them and try to avoid the things that are bad. [...] The organisms we are concerned with when we think about whether we have duties to animals are sentient beings who perceive the world in valenced ways and act accordingly. This is the feature of organic life that I have argued places an organism in the morally interesting category of having a final good.

However, later on she gets to the argument from marginal cases (if something like intelligence or rationality is the ground for moral standing among humans, then what about infants, or folks with severe developmental impairments?), which I think is similar to your objection here. Korsgaard argues against it, because to her, there is such a thing as a type of creature, even if categories have fuzzy borders. And though your example beings may not be able to pursue their own functional goods, they are still the sorts of creatures who do.

A human infant is not a particular kind of creature, but a human creature at a particular life stage. I believe that it is not proper to assign moral standing, and the properties on which it is grounded, to life stages or to the subjects of those stages. Moral standing should be accorded to persons and animals considered as the subjects of their whole lives, at least in the case of animals with enough psychic unity over time to be regarded as the subjects of their whole life. Nor, except perhaps in the case of extremely simple life forms, should we think of the subject of a life merely as a collection of the subjects of the different temporal stages of the life. [F]or most animals having a self is not just a matter of being conscious at any given moment, but rather a matter of having a consciousness or a point of view that is functionally unified both at a particular time and from one moment to the next. That ongoing self is the thing that should have or lack moral standing, or be the proper unit of moral concern.


There is a third reason for rejecting the argument from marginal cases, and it is the most important. A creature is not just a collection of properties, but a functional unity, whose parts and systems work together in keeping him alive and healthy in the particular way that is characteristic to his kind. Even if it were correct to characterize a human being with cognitive defects as "lacking reason", which usually it is not, this would not mean that it was appropriate to treat the human being as a non-rational animal. Rationality is not just a property that you might have or lack without any other difference, like blue eyes. To say that a creature is rational is not just to say that he has "reason" as one of his many properties, but to say something about the way he functions. [...] A rational being who lacks some of the properties that together make rational functioning possible is not non-rational, but rather defectively rational, and therefore unable to function well. [...] It is not as if you could simply subtract "rationality" from a human animal. A non-rational animal, after all, functions perfectly well without understanding the principles of reason, since he makes his choices in a different way.


The Argument from Marginal Cases ignores the functional unity of creatures. A creature who is constructed to function in part by reasoning but who is still developing or has been damaged is still a rational creature. So the Kantian need not grant and should not grant that infants, the insane, the demented, and so on, are non-rational beings. The point is not, of course, that we should treat infants and people with cognitive disabilities exactly the way we treat adult rational beings, because they too are rational beings. The way we treat any creature has to be responsive to the creature's actual condition. But the creature's condition itself is not given by a list of properties, but also by the way those properties work together.

Korsgaard is talking about rationality here because that, to her, is what sets humans apart from the other animals (though she thinks rationality is the reason why we are moral agents, not the reason why we have moral standing). But I think she would argue similarly about creatures that are defective in other ways, e.g. those who have no power to control what they experience or to pursue goals.

I also wonder what she has in mind by "functional" in "functional good". Do we need to decide what something's function is, if any, to define their goods and bads, and how do we do that? In my view, animals define their own goods and bads through their valenced experiences and/or desires, not just that they happen to experience their goods and bads or that their experiences guide them towards their own functional goods.

If I understand you correctly, I think she would agree. Her distinction between "final goods" and "functional goods" comes, I think, from this 1983 paper of hers, though there she calls functional goods "instrumental" instead. The functional good is basically that which allows a thing to function well, e.g. a whetstone is good for the blade because it keeps it sharp and tar is good for the boat because it keeps it from taking in water. The final good is "the end or aim of all our strivings, or at any rate the crown of their success, the summum bonum, a state of affairs that is desirable or valuable or worth achieving for its own sake". Where does the final good come from? Korsgaard basically argues, if I recall correctly, following Aristotle, that creatures have functions, and that, when we act to achieve some end, to attain whatever we value as good-for us, we take that end to be good in the final sense. I think this is pretty similar to what you were getting at?

It's interesting that she brings up artwork and the environment, too, as potential ends in themselves.

Ah yes, I thought so too, especially since I had understood (mistakenly, apparently) from the book that she did not think of those things as ends in themselves. I actually wrote a dialogue in the old style about this very subject, concluding that inanimate objects are not ends in themselves.

Comment by Erich_Grunewald on What previous work has been done on factors that affect the pace of technological development? · 2021-04-27T20:19:26.770Z · EA · GW

One good resource is Innovation in Cultural Systems: Contributions from Evolutionary Anthropology. I think that is kind of what you're after? I wrote a little about this here:

Though innovation seems to be happening at breakneck speed, there is nothing abrupt about it. Changes are small & cumulative.[6] New ideas are based on old ideas, on recombinations of them & on extending them to new domains.[7] This does not make those ideas any less important. An illustrative example is the lightbulb, the history of which is one of incremental improvement. [...]

Diffusion of innovations has been shown to normally follow S-shaped cumulative distribution curves, with a very slow uptake followed by rapid spread followed by a slowing down as the innovation nears ubiquity.[8] Joseph Henrich has shown that these curves, which are drawn from real-life data, fit models where innovations are adopted based on their intrinsic attributes (as opposed to models in which individuals proceed by trial-&-error, for example).[9] In other words, in the real world, it seems, innovations spread in the main because people choose to adopt them based on their qualities. And which qualities are those? Everett Rogers, an innovation theorist who coined the term “early adopter”, identified five essential ones: an innovation must (1) have a relative advantage over previous ideas; (2) be compatible such that it can be used within existing systems; (3) be simple such that it is easy to understand & use; (4) be testable such that it can be experimented with; & (5) be observable such that its advantage is visible to others.[10]


The rate of cultural innovation generally is correlated with population size.[13] That makes sense: a country of a million will naturally produce more innovations than a country of one. Simulations indicate that innovation produces far more value in large population groups.[14] [...]

But there is also another quality that greatly affects the population-level rate of innovation. That quality is not necessity, which the adage calls the mother of invention; companies cut R&D costs when times are tough, not the other way around.[15] Neither is it a handful of geniuses making earth-shattering individual contributions.[16] No, what greatly affects a population’s rate of innovation is its interconnectedness, in other words how widely ideas, information & tools are shared.[17] In a culture that is deeply interconnected, where information is widely shared, innovations are observable & shared tools & standards mean that innovations are also more likely to be compatible. Most importantly, interconnectedness provides each individual with a large pool of ideas from which they can select the most attractive to modify, recombine, extend & spread in turn.
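The S-shaped adoption curve described above is easy to visualise with a logistic function (a generic sketch with made-up parameters, not drawn from the cited data):

```python
import numpy as np

# cumulative adoption following a logistic (S-shaped) curve: uptake is slow
# at first, fastest in the middle, & slow again as the innovation nears
# ubiquity -- the pattern described in the quoted passage
t = np.arange(0, 20)
adoption = 1 / (1 + np.exp(-(t - 10) * 0.8))  # fraction adopted at time t

growth = np.diff(adoption)  # new adopters per period

print(round(growth[0], 3))   # slow uptake early on
print(round(growth[9], 3))   # rapid spread mid-curve
print(round(growth[-1], 3))  # slowing down near ubiquity
```

The mid-curve acceleration falls out of the imitation dynamic: the more adopters there are, the more observable the innovation is to the remaining non-adopters.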

Comment by Erich_Grunewald on On future people, looking back at 21st century longtermism · 2021-04-19T19:12:51.391Z · EA · GW

Thanks Michael!

Comment by Erich_Grunewald on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-05T10:23:00.024Z · EA · GW

I'm neither a philosopher nor familiar with the formal methods Spears & Budolfson use, but here is my understanding of the paper, which understanding may well be wrong.

Normally, the repugnant conclusion says that a very large population with only barely positive lives is better than a small population of really great lives. I don't think Spears & Budolfson deny that, in this particular situation, average utilitarianism (to take one example) does say that the small population of really great lives is in fact better than the alternative. Instead, they rephrase the problem: for any population, you can always make its members really unhappy and still end up with a better outcome, so long as you add enough additional lives to counterbalance it. Even average utilitarianism aggregates, so a large number of slightly happy members will outweigh a small group of very unhappy members. In any case, so long as you can add an arbitrary number of members to a population, & so long as you aggregate utility, a very large number of small differences can outweigh a small number of large differences.

I take them to say that Parfit & others were looking not for forms of utilitarianism that avoided any repugnant conclusion, but for ones that avoided some specific repugnant conclusion for some specific hypothetical populations (such as those originally described by Parfit). But there are still, for all forms of utilitarianism – including those that solve Parfit's original problem – other repugnant conclusions for other hypothetical populations. And because the particular hypothetical populations that produce repugnant conclusions are different in different variants of utilitarianism, they cannot easily be compared & repugnant conclusions are therefore not a good measure.

They also argue that there are repugnant conclusions for non-aggregative forms of utilitarianism. As I interpret it, they argue that, for any suffering population, you can always distribute some fixed amount of utility by giving a tiny amount to each existing member & distributing the rest over a very large number of additional members, such that all original members are still suffering & all new members are, too. But at every step we only added utility & therefore made everyone better off, so even if we don't aggregate utility, the final population should still be preferable to the original population. (To be clear, as I understand it, they are still discussing only utilitarian systems & their discussion doesn't apply to, for example, Kantian or virtue ethics.)

So I think the suggestion is that one shouldn't look at repugnancy as a binary category, but instead some sort of continuum, though the precise measuring of it is yet to be worked out.
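The aggregation point can be made concrete with toy numbers (mine, not the paper's): under average utilitarianism, enough slightly-above-average added lives can outweigh making every existing member miserable.

```python
# original population: 100 members, each at welfare level 1 (average = 1)
base_average = 1.0

def average_welfare(n_added):
    # alternative: the same 100 members made to suffer (-50 each), plus
    # n_added members at level 2, slightly above the original average
    return (100 * -50 + n_added * 2) / (100 + n_added)

print(average_welfare(1_000) < base_average)    # True: too few added lives
print(average_welfare(10_000) > base_average)   # True: averagism now prefers it
```

So even averagism, which is often offered as the escape from Parfit's original case, endorses making the original members really unhappy once the added group is large enough.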

Comment by Erich_Grunewald on How much does performance differ between people? · 2021-03-26T21:27:28.636Z · EA · GW

Thanks for the clarification & references!

Comment by Erich_Grunewald on How much does performance differ between people? · 2021-03-26T07:44:56.047Z · EA · GW

I was going to comment something to this effect, too. The authors write:

For instance, we find ‘heavy-tailed’ distributions (e.g. log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier: the top 1% account for 3-3.7% of the total.

But there’s an important difference between these groups – the products involved in the first group are cheaply reproducible (any number of people can read the same papers, invest in the same start-up or read the same articles – I don’t know how to interpret income here) & those in the second group are not (not everyone can use the same cook or mail carrier).

So I propose that the difference there has less to do with the complexity of the jobs & more to do with how reproducible the products involved are.
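The contrast between the two distribution families is easy to see in simulation (a generic sketch with arbitrary parameters, not an attempt to reproduce the paper's numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# thin-tailed performance: normal, like the cook/mail-carrier case
thin = rng.normal(100, 15, n)
# heavy-tailed performance: log-normal with the same median
heavy = np.exp(rng.normal(np.log(100), 1.0, n))

def top_1pct_share(x):
    x = np.sort(x)
    k = len(x) // 100
    return x[-k:].sum() / x.sum()

print(round(top_1pct_share(thin), 3))   # a little over 0.01
print(round(top_1pct_share(heavy), 3))  # several times larger
```

Under the normal distribution the top 1% hold barely more than 1% of the total, in the ballpark of the 3-3.7% quoted above; under the log-normal they hold a far larger share, which is the pattern reported for citations, valuations & sales.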

Comment by Erich_Grunewald on On future people, looking back at 21st century longtermism · 2021-03-23T19:46:28.646Z · EA · GW

I do have a strong intuition that humans are simply more capable of having wonderful lives than other species, and this is probably down to higher intelligence. Therefore, given that I see no intrinsic value and little instrumental value in species diversity, if I could play god I would just make loads of humans (assuming total utilitarianism is true). I could possibly be wrong that humans are more capable of wonderful lives though.

I'd be skeptical of that for a few reasons: (1) I think different things are good for different species due to their different natures/capacities (the good here being whatever it is that wonderful lives have a lot of), e.g. contemplation is good for humans but not pigs & rooting around in straw is good for pigs but not humans; (2) I think it doesn't make sense to compare these goods across species, because it means different species have different standards for goodness; & (3) I think it is almost nonsensical to ask, say, whether it would be better for a pig to be a human, or for a human to be a dog. But I recognise that these arguments aren't particularly tractable for a utilitarian!

Life is not fair. The simple point is that non-human animals are very prone to exploitation (factory farming is case in point). There are risks of astronomical suffering that could be locked in in the future. I just don't think it's worth the risk so, as a utilitarian, it just makes sense to me to have humans over chickens. You could argue getting rid of all humans gets rid of exploitation too, but ultimately I do think maximising welfare just means having loads of humans so I lean towards being averse to human extinction.

That life is not fair in the sense that different people (or animals) are dealt different cards, so to speak, is true -- the cosmos is indifferent. But moral agents can be fair (in the sense of just), & in this case it's not Life making those groups' existence miserable, it's moral agents who are doing that.

I think I would agree with you on the prone-to-exploitation argument if I were a utility maximiser, with the possible objection that, if humans reach the level of wisdom & technology needed to humanely euthanise a species in order to reduce suffering, possibly they would also be wise & capable enough to implement safeguards against future exploitation of that species instead. But that objection is not good enough if one believes that humans have a higher capacity as receptacles of utility. If I were a utilitarian who believed that, then I think I would agree with you (without having thought about it too much).

Absolutely I care about orangutans and the death of orangutans that are living good lives is a bad thing. I was just making the point that if one puts their longtermist hat on these deaths are very insignificant compared to other issues (in reality I have some moral uncertainty and so would wear my shortermist cap too, making me want to save an orangutan if it was easy to do so).

Got it. I guess my original uncertainty (& this is not something I thought a lot about at all, so bear with me here) was whether longtermist considerations shouldn't cause us to worry about orangutan extinction risks, too, given that orangutans are not so dissimilar from what we were some few millions of years ago. So that in a very distant future they might have the potential to be something like human, or more? That depends a bit on how rare a thing human evolution was, which I don't know.

Yes indeed. My utilitarian philosophy doesn't care that we would have loads of humans and no non-human animals. Again, this is justified due to lower risks of exploitation for humans and (possibly) greater capacities for welfare. I just want to maximise welfare and I don't care who or what holds that welfare.

By the way, I should mention that I think your argument for species extinction is reasonable & I'm glad there's someone out there making it (especially given that I expect many people to react negatively towards it, just on an emotional level). If I thought that goodness was not necessarily tethered to beings for whom things can be good or bad, but on the contrary that it was some thing that just resides in sentient beings but can be independently observed, compared & summed up, well, then I might even agree with it.

Comment by Erich_Grunewald on On future people, looking back at 21st century longtermism · 2021-03-22T22:58:01.411Z · EA · GW

In the post I actually argue that non-human animal extinction would be good. This is because it isn't at all clear that non-human animals live good lives.

Good for whom? Obviously humans' lives seem good to humans, but it could well be that orangutans' lives are just as good & important to orangutans as our lives are to us. Pigs apparently love to root around in straw; that doesn't seem too enticing to me, but it is probably orgasmic fun for pigs!

(This is where I ought to point out that I'm not a utilitarian or even a consequentialist, so if we disagree, that's probably why.)

Obviously animals' lives in factory farms are brutal & may not be worth living, but that is not a natural or necessary condition -- it's that way only because we make it so. It seems unfair to make a group's existence miserable & then to make them go extinct because they are so miserable!

Even if some or many of them do live good lives, if they go extinct we can simply replace them with more humans which seems preferable because humans probably have higher capacity for welfare and are less prone to being exploited (I'm assuming here that there is no/little value of having species diversity). There are realistic possibilities of terrible animal suffering occurring in the future, and possibly even getting locked-in to some extent, so I think non-human animal extinction would be a good thing.

That humans have a higher capacity for welfare seems questionable to me, but I guess we'd have to define well-being before proceeding. Why do you think so? Is it because we are more intelligent & therefore have access to "higher" pleasures?

Similarly (from a longtermist point of view) who really cares if orangutans go extinct?

I guess it's important here to distinguish between orangutans as in the orangutan species & orangutans as in the members of that species. I'm not sure we should care about species per se. But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

The space they inhabit could just be taken over by a different species. The reason why longtermists really care if humans go extinct is not because of speciesism, but because humans really do have the potential to make an amazing future. We could spread to the stars. We could enhance ourselves to experience amazing lives beyond what we can now imagine. We may be able to solve wild animal suffering. Also, to return to my original point, we tend to have good lives (at least this is what most people think). These arguments don't necessarily hold for other species that are far less intelligent than humans and so are, in my humble opinion, mainly a liability from a longtermist's point of view.

Most of that sounds like a great future for humans. Of course if you optimise for the kind of future that is good for humans, you'll find that human extinction seems much worse than extinction of other species. But maybe there's an equally great future that we can imagine for orangutans (equally great for the orangutans, that is, although I don't think they are actually commensurable), full of juicy fruit & sturdy branches. If so, shouldn't we try to bring that about, too?

We may be able to solve wild animal suffering & that'd be great. I could see an "argument from stewardship" where humans are the species most likely to be able to realise the good for all species. (Though I'll note that up until now we seem rather to have made life quite miserable for many of the other animals.)

Comment by Erich_Grunewald on On future people, looking back at 21st century longtermism · 2021-03-22T19:44:46.665Z · EA · GW

I did find this post which sort of touches on the same question.

Comment by Erich_Grunewald on On future people, looking back at 21st century longtermism · 2021-03-22T19:43:10.388Z · EA · GW

This got me thinking a bit about non-human animals. If it's true that (1) speciesism is irrational & there's no reason to favour one species over another just because you belong to that species; (2) the human species is or could very well be at a very early stage of its lifespan; & (3) we should work very hard to reduce prospects of a future human extinction, then shouldn't we also work very hard to reduce prospects of animal extinction right now? After all, many non-human animals are at much higher risk of going extinct than humans today.

You suggest that we humans could – if things go well – survive for billions or even trillions of years; since we only diverged from the last common ancestor with chimpanzees some four to 13 million years ago, that would put us at a very young age relatively. But if those are the timescales we consider, how about the potential in all the other species? It only took us humans some millions of years to go from apes to what we are today, after all. Who knows where the western black rhinoceros would be in a billion years if we hadn't killed all of them? Maybe we should worry about orangutan extinction at least half as much as we worry about human extinction?

Put differently, it's my impression – but I could well be wrong – that EAs focus on animal suffering & human extinction quite a bit, but not so much on non-human extinction. Is that impression accurate? If so, why the difference? Has it been discussed anywhere? (A cursory search brought up very little, but I didn't try too hard.)

Comment by Erich_Grunewald on Why do so few EAs and Rationalists have children? · 2021-03-14T21:15:01.102Z · EA · GW


  • it interferes with working life or self-actualisation, which they value more than the average person
  • they have higher standards for what they deem sufficiently good living conditions for family life, e.g. they suppose one should have acceptably sound personal finances, or a bigger home, etc. in ways that other people don't
Comment by Erich_Grunewald on Why do so few EAs and Rationalists have children? · 2021-03-14T21:03:37.852Z · EA · GW

Possibly for the same reasons that people with higher income & education levels generally have fewer children? That is, it could just be a spurious correlation.

Edit: moved this to the comment section & now I see that Buck pretty much made the same comment already.

Comment by Erich_Grunewald on Can a Vegan Diet Be Healthy? A Literature Review · 2021-03-12T19:08:17.762Z · EA · GW

Ah, good question. Like the author of your quote, I'm also not a nutritionist, nor am I a medical doctor. That said, I wouldn't be surprised if the healthiest diet did include some animal products. That's because vegan/vegetarian diets optimise for something else – they optimise for the removal of meat & animal products. It shouldn't be surprising that a diet optimising purely for health might be better than (& different from) one that optimises for something else entirely.

I suppose in the end one has to sort out one's motivations in choosing a diet. How much importance do I place on my health versus, say, animal suffering? (Or, in more deontological terms, how do I reconcile the duties I have to myself with those I have to other creatures?) Personally, I would strive to eat vegan/vegetarian even if I learned that it was relatively unhealthy. But I'm well aware that not everyone would do that!

Comment by Erich_Grunewald on In diversity lies epistemic strength · 2021-02-07T09:25:33.093Z · EA · GW

Thanks, this was an interesting write-up. I have one, well, let’s call it a concern or maybe caveat. You write:

Objectivity as it is understood here is a continuum between being more or less objective. Objectivity increases with the diversity of perspectives, as more perspectives in a given discussion lead to more assumptions being challenged and, thus, to an answer that is more likely to be true.

I think this relies on all perspective-havers having some shared norms that enable them to find truth collectively. Philosophy, for example, which while not a science benefits enormously from diverse viewpoints, has norms of logic, reasoning & charity that are essential to finding truth. More generally, my impression is that groups & teams function better when they have some shared values, goals & norms. So that’s the caveat that I would add – that there still need to be shared norms, at least truth-seeking norms.

Comment by Erich_Grunewald on Two Inadequate Arguments against Moral Vegetarianism · 2021-01-31T10:12:09.754Z · EA · GW

Well, I don't disagree! I tried to bridge that gap in the parenthetical statement, but I didn't mean to imply that treatment of humans & animals ought to be judged by the exact same standard. What I was getting at was more something like this, quoting Christine Korsgaard:

Then there is the disturbing use of the phrase “treated like an animal.” People whose rights are violated, people whose interests are ignored or overridden, people who are used, harmed, neglected, starved or unjustly imprisoned standardly complain that they are being treated like animals, or protest that after all they are not just animals. Of course, rhetorically, complaining that you are being treated like an animal is more effective than complaining that you are being treated like a thing or an object or a stone, for a thing or an object or a stone has no interests that can be ignored or overridden. In the sense intended, an object can’t be treated badly, while an animal can. But then the curious implication seems to be that animals are the beings that it’s all right to treat badly, and the complainant is saying that he is not one of those.

That is, there's a kind of tension in that sort of complaint. It presupposes that animals are treated badly by some standard, yet implies that, whereas humans can be mistreated in that way, animals can't be. So I meant to say that, if we do think that animals can be mistreated in that way (& many do, of course), then that sort of complaint is almost contradictory.

Comment by Erich_Grunewald on Donating to EA funds from Germany · 2020-12-30T20:54:52.098Z · EA · GW

Hmm, I use & that's the only option I know of. Maybe you could try emailing them to ask about EA Funds? The Effective Altruism Foundation, which used to offer this service, were always responsive & helpful when I contacted them.

Comment by Erich_Grunewald on AMA: Jason Crawford, The Roots of Progress · 2020-12-30T20:32:15.490Z · EA · GW

Art? I haven't looked into it much, but I don't really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I'd also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war territory, so I'm just going to throw it out there as a wild-ass hypothesis for someone to follow up on at some point.


I'm a little bit late to the party here, but there are examples of improvements in sculpture technology/technique/style leading to new (& very beautiful) works of art – see e.g. Barry X Ball's works, made with a combination of 3D scanning, CAD software, CNC mills & traditional techniques. Not to mention that he has a wide variety of stone available to him thanks to the global trade system.

As for music, I guess that totally depends on what you're comparing. The proper comparison for today's popular music isn't Beethoven or Bach but folk music & perhaps music for drawing rooms & salons, which, although they had their own beauties, were nowhere near as complex & intricate as the traditional European art music that is most listened to today. Of the past, only the best survives, but in the present the good & the bad coexist. That said, I think maybe there's a kernel of truth in what you suggest. But we shouldn't trust our intuitive judgment on this.