Posts

Are logarithmic scales biasing our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z · score: 53 (26 votes)
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z · score: 14 (4 votes)
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z · score: 15 (10 votes)
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z · score: 24 (13 votes)
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z · score: 5 (1 votes)
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z · score: 15 (6 votes)
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 19 (7 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 4 (3 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 13 (14 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 18 (14 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 10 (9 votes)

Comments

Comment by michaelstjules on Wei_Dai's Shortform · 2020-02-26T08:52:19.911Z · score: 1 (1 votes) · EA · GW

As a specific case, counterfactual donation matches should cause you to donate more, too.

It could be the case that people's utility functions are pretty sharp near X% of income, so that new information makes little difference. They're probably directly valuing giving X% of income, perhaps as a personal goal. Some might think that they are spending as much as they want on themselves, and the rest should go to charity.

https://slate.com/human-interest/2011/01/go-ahead-give-all-your-money-to-charity.html


Or maybe their utility functions just change with new information?

Comment by michaelstjules on MichaelStJules's Shortform · 2020-02-25T18:38:32.767Z · score: 1 (1 votes) · EA · GW

Utility functions (preferential or ethical, e.g. social welfare functions) can have weak lexicality without strong lexicality, so that a difference in category $A$ can be larger than the maximum difference in category $B$, but we can still make tradeoffs between them. This can be done, for example, by having separate utility functions, $u_A$ and $u_B$ for $A$ and $B$, respectively, such that

  • $u_A(x) \geq u_A(y) + 1$ for all $x$ satisfying the condition $A$ and all $y$ satisfying $B$ (e.g. $B$ can be the negation of $A$, although this would normally lead to discontinuity).
  • $u_B$ is bounded to have range in the interval $[0, 1]$ (or range in an interval of length at most 1).

Then we can define our utility function as the sum $u = u_A + u_B$, so

$u(x) = u_A(x) + u_B(x).$

This ensures that all outcomes with $A$ are at least as good as all outcomes with $B$, without being Pascalian/fanatical to maximize $u_A$ regardless of what happens to $u_B$.

For example, if there is any suffering in $x$ that meets a certain threshold of intensity, then $u_A(x) \leq 0$, and if there is no suffering at all in $x$, then $u_A(x) \geq 1$. $u_A$ can still be continuous this way.

If the probability that this threshold is met is at most $p$ and the expected value of $u_A$ conditional on this is bounded below by $-c$, i.e. $E[u_A \mid \text{threshold met}] \geq -c$, regardless of which of the choices available to you is taken, then increasing $u_B$ by at least $pc$, which can be small, is better than trying to reduce the threshold suffering through $u_A$.
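
To check the construction numerically, here is a minimal sketch in Python (the outcomes, categories and utility values are made-up assumptions, with $A$ read as "no suffering at all" and $B$ as "suffering above the threshold" as in the example above): every outcome satisfying $A$ comes out at least as good as every outcome satisfying $B$, but $u_B$ still matters for choices within a category.

```python
# Minimal sketch of weak lexicality via u = u_A + u_B (assumed example values).
# Outcomes are tagged with the category they satisfy ("A" = no suffering at all,
# "B" = suffering above the intensity threshold), a u_A value respecting the
# gap condition (u_A >= 1 on A, u_A <= 0 on B), and a u_B value in [0, 1].

outcomes = [
    # (name, category, u_A, u_B)
    ("a1", "A", 1.05, 0.2),
    ("a2", "A", 1.0, 0.9),
    ("b1", "B", 0.0, 1.0),
    ("b2", "B", -3.0, 0.6),
]

def u(outcome):
    _, _, u_A, u_B = outcome
    return u_A + u_B

# Every A-outcome is at least as good as every B-outcome...
assert all(u(a) >= u(b)
           for a in outcomes if a[1] == "A"
           for b in outcomes if b[1] == "B")

# ...but u_A is not maximized fanatically: among A-outcomes, a2 is best
# despite its (slightly) lower u_A, because u_B still gets traded off.
best = max(outcomes, key=u)
print(best[0], u(best))  # a2 1.9
```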


As another example, an AI could be incentivized to ensure it gets monitored by law enforcement. Its reward function could look like

$R = u + \sum_t m_t$, where $u$ is the bounded reward term from before and $m_t$ is 1 if the AI is monitored by law enforcement and passes some test in period $t$, and 0 otherwise. You could put an upper bound on the number of periods or use discounting to ensure the right term can't evaluate to infinity, since that would allow $u$ to be ignored (maybe the AI will predict its expected lifetime to be infinite), but this would eventually allow $u$ to overcome the monitoring term.
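
A rough numeric sketch of the discounting point (the discount factor and the specific numbers are assumptions for illustration): with discounting, the reward still available from future monitoring shrinks over time, so a bounded $u$ can eventually overcome it, which is the failure mode noted above; without discounting or a bound on periods, the monitoring term would swamp $u$ entirely.

```python
# Sketch of R = u + sum_t gamma**t * m_t under assumed parameters.
# m_t = 1 if the AI is monitored (and passes the test) in period t, else 0;
# u is the bounded task reward (range of length at most 1).

gamma = 0.9

def remaining_monitoring_value(t, gamma):
    # Upper bound on the discounted reward still available from being monitored
    # in every period from t onward: gamma**t + gamma**(t+1) + ... = gamma**t / (1 - gamma)
    return gamma**t / (1 - gamma)

u_range = 1.0

# Early on, the monitoring term dominates anything u can offer...
print(remaining_monitoring_value(0, gamma))  # 10.0, much larger than u_range

# ...but the discounted tail shrinks, so eventually u can overcome it.
t = 50
print(remaining_monitoring_value(t, gamma) < u_range)  # True (about 0.05)
```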


This overall approach can be repeated for any finite number of functions, $u_1, \ldots, u_n$. Recursively, you could define

$U_1 = u_1, \quad U_{k+1} = u_{k+1} + f(U_k)$

for $f$ increasing and bounded with range in an interval of length at most 1.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-25T17:55:14.499Z · score: 1 (1 votes) · EA · GW
I'm going to interpret this as:
Assume that the owners are misaligned w.r.t the rest of humanity (controversial, to me at least).

Couldn't the AI end up misaligned with the owners by accident, even if they're aligned with the rest of humanity? The question is whether 1 or 2 is better at aligning the AI in cases where enforcement is impossible or explicitly prevented.

I edited my comment above before I got your reply to include the possibility of the AI being incentivized to ensure it gets monitored by law enforcement. Its reward function could look like

$R = u + \sum_t m_t$, where $u$ is bounded to have a range of length at most 1, and $m_t$ is 1 if the AI is monitored by law enforcement in period $t$ (and passes some test) and 0 otherwise. You could put an upper bound on the number of periods or use discounting to ensure the right term can't evaluate to infinity, since that would allow $u$ to be ignored (maybe the AI will predict its expected lifetime to be infinite), but this would eventually allow $u$ to overcome the monitoring term.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-25T16:08:48.078Z · score: 1 (1 votes) · EA · GW
Imagine trying to make teenagers law-abiding. You could have two strategies:
1. Rewire the neurons or learning algorithm in their brain such that you can say "the computation done to produce the output of neuron X reliably tracks whether a law has been violated, and because of its connection via neuron Y to neuron Z, if an action is predicted to violate a law, the teenager won't take it".
2. Explain to them what the laws are (relying on their existing ability to understand English, albeit fuzzily), and give them incentives to follow it.
I feel much better about 2 than 1.

What if they also have access to nukes or other weapons that could prevent them or their owners from being held accountable if they're used?

EDIT: Hmm, maybe they need strong incentives to check in with law enforcement periodically? This would be bounded per interval of time, and also (much) greater in absolute value than any other reward they could get per period.

Comment by michaelstjules on Are logarithmic scales biasing our estimates of Scale, Neglectedness and Solvability? · 2020-02-24T23:06:42.129Z · score: 1 (1 votes) · EA · GW

Note: I've rewritten section 3 since first publishing this post on the EA Forum to consider more possibilities of biases.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-02-24T04:10:57.413Z · score: 1 (1 votes) · EA · GW

Then, if you extend these comparisons to satisfy the independence of irrelevant alternatives (stating that, in comparisons of multiple choices in an option set, all permissible options are strictly better than all impermissible options, regardless of the option set, and extending these rankings beyond the option set), the result is antifrustrationism. To show this, you can use the set of the following three options, which are identical except in the ways specified:

  • $O_1$: a preference exists and is fully satisfied,
  • $O_2$: the same preference exists and is not fully satisfied, and
  • $O_3$: the preference doesn't exist,

and since $O_2$ is impermissible because of the presence of $O_1$, this means $O_3 \succ O_2$, and so it's always better for a preference to not exist than for it to exist and not be fully satisfied, all else equal.

Comment by michaelstjules on Changes in conditions are a priori bad for average animal welfare · 2020-02-23T19:47:37.316Z · score: 1 (1 votes) · EA · GW

The "under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most 0." was assumed for that last inference. I'll update the intro to make this more explicit. Thanks!

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-23T08:13:22.143Z · score: 1 (1 votes) · EA · GW
This makes sense, but the type of things that tend to convince me to believe in an ethical theory generally depend a lot on how much I resonate with the main claims of the theory. When I look at the premises in this theorem, none of them seem to be type of things that I care about.

If you want to deal with moral uncertainty with credences, you could assign each of the 3 major assumptions an independent credence of 50%, so this argument would tell you that you should be utilitarian with credence at least $0.5^3 = 12.5\%$. (Assigning independent credences might not actually make sense, in case you have to deal with contradictions with other assumptions.)

On the other hand, pointing out that utilitarians care about people and animals, and they want them to be as happy as possible (and free, or with agency, desire satisfaction) that makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling.

Makes sense. For what it's worth, this seems basically compatible with any theory which satisfies the Pareto principle, and I'd imagine you'd also want it to be impartial (symmetry). If you also assume real-valued utilities, transitivity, independence of irrelevant alternatives, continuity and independence of unconcerned agents, you get something like utilitarianism again. In my view, independence of unconcerned agents is doing most of the work here, though.

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-23T07:53:50.325Z · score: 4 (3 votes) · EA · GW

I want to point out that both assumptions 2, and 1 and 3 together have been objected to by academic philosophers.

Assumption 2 is ex post consequentialism: maximize the expected value of a social welfare function. Ex ante prioritarianism/egalitarianism means rejecting 2: we should be fair to individuals with respect to their expected utilities, even if this means overall worse expected outcomes. This is, of course, vNM irrational, but Diamond defended it (and see my other comment here). Essentially, even if two outcomes are equally valuable, a probabilistic mixture of them can be more valuable because it gives people fairer chances; this is equality of opportunity. This contradicts the independence axiom specifically for vNM rationality (and so does the Allais paradox).

Assumptions 1 and 3 together are basically a weaker version of ex ante Pareto, according to which it's (also) better to increase the expected utility of any individual(s) if it comes at no expected cost to any other individuals. Ex post prioritarianism/egalitarianism means rejecting the conjunction of 1 and 3, and ex ante Pareto: we should be more fair to individuals ex post (we want more fair actual outcomes after they're determined), even if this means worse individual expected outcomes.

There was a whole issue of Utilitas devoted to prioritarianism and egalitarianism in 2012, and, notably, Parfit defended prioritarianism in it, arguing against ex ante Pareto (and hence the conjunction of 1 and 3):

When Rawls and Harsanyi appeal to their versions of Veil of Ignorance Contractualism, they claim that the Equal Chance Formula supports the Utilitarian Average Principle, which requires us to act in ways that would maximize average utility, by producing the greatest sum of expectable benefits per person. This is the principle whose choice would be rational, in self-interested terms, for people who have equal chances of being in anyone’s position.
We can plausibly reject this argument, because we can reject this version of contractualism. As Rawls points out, Utilitarianism is, roughly, self-interested rationality plus impartiality. If we appeal to the choices that would be rational, in self-interested terms, if we were behind some veil of ignorance that made us impartial, we would expect to reach conclusions that are, or are close to being, Utilitarian. But this argument cannot do much to support Utilitarianism, because this argument’s premises are too close to these conclusions. Suppose that I act in a way that imposes some great burden on you, because this act would give small benefits to many other people who are much better off than you. If you object to my act, I might appeal to the Equal Chance Formula. I might claim that, if you had equal chances of being in anyone’s position, you could have rationally chosen that everyone follows the Utilitarian Principle, because this choice would have maximized your expectable benefits. As Scanlon and others argue, this would not be a good enough reply. You could object that, when we ask whether some act would be wrong, we are not asking a question about rational self-interested choice behind a veil of ignorance. Acts can be wrong in other ways, and for other reasons.

He claimed that we can reject ex ante Pareto ("Probabilistic Principle of Personal Good"), in favour of ex post prioritarianism/egalitarianism:

Even if one of two possible acts would be expectably worse for people, this act may actually be better for these people. We may also know that this act would be better for these people if they are worse off. This fact may be enough to make this act what we ought to do.

Here, by "worse off" in the second sentence, he meant in a prioritarian/egalitarian way. The act is actually better for them, because the worse off people under this act are better off than the worse off people under the other act. He continued:

We can now add that, like the Equal Chance Version of Veil of Ignorance Contractualism, this Probabilistic Principle has a built-in bias towards Utilitarian conclusions, and can therefore be rejected in similar ways. According to Prioritarians, we have reasons to benefit people which are stronger the worse off these people are. According to Egalitarians, we have reasons to reduce rather than increase inequality between people. The Probabilistic Principle assumes that we have no such reasons. If we appeal to what would be expectably better for people, that is like appealing to the choices that it would be rational for people to make, for self-interested reasons, if they had equal chances of being in anyone’s position. Since this principle appeals only to self-interested or prudential reasons, it ignores the possibility that we may have impartial reasons, such as reasons to reduce inequality, or reasons to benefit people which are stronger the worse off these people are. We can object that we do have such reasons.
When Rabinowicz pointed out that, in cases like Four, Prioritarians must reject the Probabilistic Principle of Personal Good, he did not regard this fact as counting against the Priority View. That, I believe, was the right response. Rabinowicz could have added that similar claims apply to Egalitarians, and to cases like Two and Three.
Comment by michaelstjules on Opinion: Estimating Invertebrate Sentience · 2020-02-22T19:56:15.873Z · score: 1 (1 votes) · EA · GW

Another one for bees: information integration across or generalization between senses.

"Bumble bees display cross-modal object recognition between visual and tactile senses" by Cwyn Solvi, Selene Gutierrez Al-Khudhairy and Lars Chittka.

Humans excel at mental imagery, and we can transfer those images across senses. For example, an object out of view, but for which we have a mental image, can still be recognized by touch. Such cross-modal recognition is highly adaptive and has been recently identified in other mammals, but whether it is widespread has been debated. Solvi et al. tested for this behavior in bumble bees, which are increasingly recognized as having some relatively advanced cognitive skills (see the Perspective by von der Emde and Burt de Perera). They found that the bees could identify objects by shape in the dark if they had seen, but not touched, them in the light, and vice versa, demonstrating a clear ability to transmit recognition across senses.

Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition—experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory information. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.
Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T17:01:03.576Z · score: 1 (1 votes) · EA · GW

I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.

For example, it's unlikely to be the case that anyone's ethical rankings actually satisfy the vNM rationality conditions in practice, but if you give any weight to the claims that we should have ethical rankings that are complete, continuous with respect to probabilities (which are assumed to work in the standard way), satisfy the independence of irrelevant alternatives and avoid all theoretical (weak) Dutch books, and also give weight to the combination of these conditions at once*, then the Dutch book results give you reason to believe you should satisfy the vNM rationality axioms, since if you don't, you can get (weakly) Dutch booked in theory. I think you should be at least as sympathetic to the conclusion of a theorem as you are to the combination of all of its assumptions, if you accept the kind of deductive logic used in the proofs.

*I might be missing more important conditions.

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T20:57:42.690Z · score: 5 (4 votes) · EA · GW

I think this is an important point. People might want to start with additional or just different axioms, including, as you say, avoiding the repugnant conclusion, and if they can't all together be consistent, then this theorem may unjustifiably privilege a specific subset of those axioms.

I do think this is an argument for utilitarianism, but more like in the sense of "This is a reason to be a utilitarian, but other reasons might outweigh it." I think it does have some normative weight in this way.

Also, independence of irrelevant alternatives is safer to give up than transitivity, and might accomplish most of what you want. See my other comment.

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T00:36:25.769Z · score: 1 (1 votes) · EA · GW
Concretely, assume:
1. Each individual in the group is rational (for a commonly used but technical definition of “rational”, hereafter referred to as “VNM-rational”)[1][2]
2. The group as a whole is VNM-rational[3][4]
3. If every individual in the group is indifferent between two options, then the group as a whole is indifferent between those two options

One way of motivating 3 is by claiming (in the idealistic case where everyone's subjective probabilities match, including the probabilities that go with the ethical ranking):

a. Individual vNM utilities track welfare and what's better for individuals, and not having them do so is paternalistic. We should trust people's preferences when they're rational since they know what's best for themselves.

b. When everyone's preferences align, we should trust their preferences, and again, not doing so is paternalistic, since it would (in principle) lead to choices that are dispreferred by everyone, and so worse for everyone, according to a.*

As cole_haus mentioned, a could actually be false, and a motivates b, so we'd have no reason to believe b either if a were false. However, if we use some other real-valued conception of welfare and claim what's good for individuals is maximizing its expectation, then we could make an argument similar to b (replacing "dispreferred by everyone" with "worse in expectation for each individual") to defend the following condition, which recovers the theorem:

3'. If for two options and for each individual in the options, their expected welfare is the same in the two options, then we should be ethically indifferent between the options.

*As alluded to here, if your ethical ranking of choices broke one of these ties, say $A \succ B$ when everyone is indifferent between $A$ and $B$, it would do so with a real number-valued difference, and by the continuity axiom, you could probabilistically mix $A$ with any choice $C$ that's worse for everyone than $B$, and this mixture could be made better than $B$ according to your ethical ranking, i.e. $pA + (1-p)C \succ B$ for any $p$ close enough to 1, while everyone has the opposite preference over these two choices.
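
A small numeric illustration of this footnote's argument (all values are assumed for illustration): everyone is indifferent between $A$ and $B$, $C$ is worse than $B$ for everyone, yet an ethical ranking that breaks the $A$-vs-$B$ tie and satisfies continuity ends up preferring a mixture of $A$ and $C$ to $B$, against everyone's preferences.

```python
# Assumed individual utilities: everyone indifferent between A and B, C worse for all.
individual_utilities = {
    "A": [1.0, 2.0],   # person 1, person 2
    "B": [1.0, 2.0],
    "C": [0.0, 0.0],
}

# Assumed ethical values that break the A-vs-B tie with a real-valued difference.
ethical_value = {"A": 3.0, "B": 2.0, "C": 0.0}

p = 0.9  # probability weight on A in the mixture p*A + (1-p)*C

mix_ethical = p * ethical_value["A"] + (1 - p) * ethical_value["C"]
print(mix_ethical > ethical_value["B"])  # True: the mixture is ethically better than B

# But every individual strictly prefers B to the mixture:
for i in range(2):
    mix_i = p * individual_utilities["A"][i] + (1 - p) * individual_utilities["C"][i]
    print(mix_i < individual_utilities["B"][i])  # True for both people
```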

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T21:06:12.909Z · score: 1 (1 votes) · EA · GW
Why should morality be based on group decision-making principles? Why should I care about VNM rationality of the group?

I've retracted my previous reply. The original 2nd condition is different from ex ante Pareto; it's just vNM rationality with respect to outcomes for social/ethical preferences/views and it says nothing about the relationship between individual preferences and social/ethical ones. It's condition 3 that connects individual vNM utility and social/ethical vNM utility.

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T21:00:06.980Z · score: 4 (3 votes) · EA · GW

I think this last point essentially denies the third axiom above, which is what connects individual vNM utility and social/ethical preferences. (The original statement of the second axiom is just vNM rationality for social/ethical preferences, and has no relationship with the individuals' preferences.)

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T18:30:22.883Z · score: 2 (2 votes) · EA · GW
I used preferences about restaurants as an example because that seemed like something people can relate to easily, but that's just an example. The theorem is compatible with hedonic utilitarianism. (In that case, the theorem would just prove that the group's utility function is the sum of each individual's happiness.)

In this case, I think it's harder to argue that we should care about ex ante expected individual hedonistic utility and for the 1st and 3rd axioms, because we had rationality based on preferences and something like Pareto to support these axioms before, but we could now just be concerned with the distribution of hedonistic utility in the universe, which leaves room for prioritarianism and egalitarianism. I think the only "non-paternalistic" and possibly objective way to aggregate hedonistic utility within an individual (over their life and/or over uncertainty) would be to start from individual preferences/attitudes/desires but just ignore concerns not about hedonism and non-hedonistic preferences, i.e. an externalist account of hedonism. Roger Crisp defends internalism in "Hedonism Reconsidered", and defines the two terms this way:

Two types of theory of enjoyment are outlined: internalism, according to which enjoyment has some special 'feeling tone', and externalism, according to which enjoyment is any kind of experience to which we take some special attitude, such as that of desire.

Otherwise, I don't think there's any reason to believe there's an objective common cardinal scale for suffering and pleasure, even if there were a scale for suffering and a separate scale for pleasure. Suffering and pleasure don't use exactly the same parts of the brain, and suffering isn't just an "opposite" pattern to pleasure. Relying on mixed states (observing judgements when both suffering and pleasure are happening at the same time) might seem promising, but these judgements happen at a higher level and probably wouldn't be consistent between people, e.g. you could have two people with exactly the same suffering and pleasure subsystems, but with different aggregating systems.

I'm personally more sympathetic to externalism. With antifrustrationism (there are actually arguments for antifrustrationism; see also my comment here), externalism leads to a negative hedonistic view (which I discuss further here).

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:36:50.434Z · score: 3 (2 votes) · EA · GW
Why should morality be based on group decision-making principles? Why should I care about VNM rationality of the group?

It doesn't have to be the group, it can be an impartial observer with their own social welfare function, as long as it is increasing with individual expected utility, i.e. satisfies ex ante Pareto. Actually, that's how it was originally stated.

EDIT: woops, condition 2 is weaker than ex ante Pareto; it's just vNM rationality with respect to outcomes for social/ethical preferences/views. It's condition 3 that connects individual vNM utility and social/ethical vNM utility.

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:23:07.334Z · score: 1 (1 votes) · EA · GW

I would actually say that the 50-50 lottery between the two allocations being equivalent to either allocation for sure is in contradiction with equality of opportunity. In the first case, both individuals have an equal chance of being well-off (getting 2), but in the second and third, only one has any chance of being well-off, so the opportunities to be well-off are only equal in the first case (essentially the same objection to essentially the same case is made in "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparison of Utility: Comment", in which Peter Diamond writes "it seems reasonable for the individual to be concerned solely with final states while society is also interested in the process of choice"). This is what ex ante prioritarianism/egalitarianism is for, but it can lead to counterintuitive results. See the comments on that post, and "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey & Alex Voorhoeve.
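
A small sketch of the distinction (the allocations (2, 0) and (0, 2) and the square-root priority weighting are assumptions for illustration): expected total utility is identical across the three prospects, so utilitarianism is indifferent, but an ex ante prioritarian evaluation, which applies a concave function to each person's expected utility, prefers the equal-chances lottery.

```python
import math

# Prospects: each is a list of (probability, (utility_person1, utility_person2)).
lottery = [(0.5, (2, 0)), (0.5, (0, 2))]  # equal chances of being well-off
sure_1  = [(1.0, (2, 0))]                 # only person 1 can be well-off
sure_2  = [(1.0, (0, 2))]                 # only person 2 can be well-off

def expected_individual_utilities(prospect):
    return [sum(p * u[i] for p, u in prospect) for i in range(2)]

def utilitarian(prospect):
    return sum(expected_individual_utilities(prospect))

def ex_ante_prioritarian(prospect, f=math.sqrt):
    # Concave transform of each person's *expected* utility, then sum.
    return sum(f(eu) for eu in expected_individual_utilities(prospect))

for name, prospect in [("lottery", lottery), ("sure_1", sure_1), ("sure_2", sure_2)]:
    print(name, utilitarian(prospect), ex_ante_prioritarian(prospect))
# utilitarian: 2.0 for all three prospects (indifferent)
# ex ante prioritarian: 2.0 for the lottery vs about 1.41 for the sure allocations
```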

For literature on equality of outcomes and uncertainty, the terms to look for are "ex post egalitarianism" and "ex post prioritarianism" (or with the hyphen as "ex-post", but I think Google isn't sensitive to this).

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:15:43.432Z · score: 29 (13 votes) · EA · GW

Thanks for writing this!

I don't think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example. Furthermore, if you don't count the interests of people until after they exist or unless they come to exist, it probably won't look like total utilitarianism, although it gets more complicated.

You might be interested in Teruji Thomas' paper "The Asymmetry, Uncertainty, and the Long Term" (EA Forum post here), which proves a similar result from slightly different premises, but is compatible with all of 1) ex post prioritarianism, 2) mere addition, 3) the procreation asymmetry, 4) avoiding the repugnant conclusion and 5) avoiding antinatalism, all at the same time, because it sacrifices the independence of irrelevant alternatives (the claim that how you rank choices should not depend on what choices are available to you, not the vNM axiom). Thomas proposes beatpath voting to choose actions. Christopher Meacham's "Person-affecting views and saturating counterpart relations" also provides an additive calculus which "solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox" and satisfies the asymmetry, also by giving up the independence of irrelevant alternatives, but hasn't, as far as I know, been extended to deal with uncertainty.

I've also written about ex ante prioritarianism in the comments on the EA Forum post about Thomas' paper, and in my own post here (with useful feedback in the comments).

Comment by michaelstjules on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T16:45:29.484Z · score: 2 (2 votes) · EA · GW

Some discussion here, too.

Comment by michaelstjules on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-20T09:06:35.525Z · score: 3 (2 votes) · EA · GW

Might not really matter now given her chances, but she did an interview with VegNews:

For me, deciding to be vegetarian is rooted in a very strong spiritual foundation as a practicing Hindu—and an awareness and a care and compassion for all living beings. So, more recently, in the last few years—just as I became more aware of the unethical treatment of animals in the dairy industry especially—it caused me to really think about some of the changes I could make to lessen that negative impact on animals as well as the environment.

VN: Switching gears, what changes would you want to see for animals legally?
TG: Factory farms have to be a thing of the past. Throughout the time I’ve spent in Iowa, we’ve seen the horrifying ways animals are treated in these farms and the incredible, ravaging impact that it has on the communities where these farms are located. Supporting more ethical and organic farming has to be the place that we go when it comes to farming. Ending animal testing. Ending the inhumane treatment of animals, whether it is for cosmetic purposes or other purposes. Science is showing us that even for those kinds of testing that may be required, there’s absolutely no reason or justification for this to continue to occur in the use of animals. We need to ban puppy mills. These commercial breeding factories full of animals that don’t put an emphasis on animals’ well-being—and really is a purely profit-driven, greed-based business—is leading to more dogs who are just actually in need of homes, and filling up shelters and ending up in a very terrible situation. I think another one is a huge issue—but not maybe striking a chord with everyone because people are not aware of it—is ending the trophy hunting that’s happening, and making it so that it is not a cultural norm that we accept in this society. There’s a long list of things we need to do, but I think these are at the top of the list. 
VN: What about culturally and societally? In what ways do you want to see our relationships to animals shift? 
TG: When people talk about their dogs as their best friends, or the cats in their house, or the horses that they have on their ranch … I would love to see that same kind of relationship that people have with their animals extended to all animals. That you’ve got to respect animals. That you know and understand that animals have incredible feelings and emotions and, just as our dogs are happy to see us when we come home, we need to understand and appreciate that relationship with all animals and respecting them as sentient beings that are like us. They are a very integral part of our ecosystem. 
Comment by michaelstjules on Ben Cottier's Shortform · 2020-02-19T22:31:46.154Z · score: 2 (2 votes) · EA · GW

Some discussion here, too, in the context of introducing s-risks:

https://foundational-research.org/s-risks-talk-eag-boston-2017/

Comment by michaelstjules on Estimates of global captive vertebrate numbers · 2020-02-19T22:20:18.949Z · score: 2 (2 votes) · EA · GW
One important point I think worth highlighting about the numbers is their differential growth rates. That is, for instance, not only are there many more farmed fish than pigs or cows but the annual increase in the number of farmed fish is much greater than that for pigs or cows

Agreed that this is very important. The scale of a problem should be defined to include (your projections for) its total over time that you think your actions could influence. Relatively few animals could be used in a given country now, but because of expected growth, the scale could actually be huge, and our cost-effectiveness estimates should take such projections into account.

Comment by michaelstjules on Thoughts on electoral reform · 2020-02-19T09:57:38.216Z · score: 8 (3 votes) · EA · GW
Monte Carlo simulations independently performed by Warren Smith and Jameson Quinn generally find that approval voting has higher VSE than instant-runoff voting, and that both approval voting and instant-runoff voting have much higher VSE than plurality voting.

A priori, I think this could end up being quite sensitive to the distributions of votes they used. Did they choose them based on surveys/polls of voter preferences?
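
As a rough illustration of why the result could be sensitive to the assumed preference distribution, here is a minimal Monte Carlo sketch (not Smith's or Quinn's actual models; the utility model, honest-voting rules and all parameters are assumptions): it estimates VSE for plurality and approval voting under a simple random-utility model, and changing the utility distribution changes the numbers.

```python
import random

def simulate_vse(n_voters=99, n_cands=4, n_elections=2000, utility_draw=random.random):
    """Crude VSE estimate: (avg winner utility - avg random-winner utility) /
    (avg best-possible utility - avg random-winner utility), for two rules."""
    achieved = {"plurality": 0.0, "approval": 0.0}
    best_total, random_total = 0.0, 0.0
    for _ in range(n_elections):
        # Each voter's utility for each candidate, drawn i.i.d. from utility_draw.
        utils = [[utility_draw() for _ in range(n_cands)] for _ in range(n_voters)]
        totals = [sum(u[c] for u in utils) for c in range(n_cands)]
        best_total += max(totals)
        random_total += sum(totals) / n_cands

        # Honest plurality: vote for your favourite candidate.
        plur_votes = [0] * n_cands
        for u in utils:
            plur_votes[u.index(max(u))] += 1
        achieved["plurality"] += totals[plur_votes.index(max(plur_votes))]

        # Honest approval: approve every candidate above your own mean utility.
        appr_votes = [0] * n_cands
        for u in utils:
            mean_u = sum(u) / n_cands
            for c in range(n_cands):
                if u[c] > mean_u:
                    appr_votes[c] += 1
        achieved["approval"] += totals[appr_votes.index(max(appr_votes))]

    return {rule: (tot - random_total) / (best_total - random_total)
            for rule, tot in achieved.items()}

random.seed(0)
print(simulate_vse())                                         # uniform utilities
print(simulate_vse(utility_draw=lambda: random.gauss(0, 1)))  # a different distribution
```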

Comment by michaelstjules on Estimates of global captive vertebrate numbers · 2020-02-19T03:02:20.838Z · score: 7 (5 votes) · EA · GW

Thanks for putting all of this together!

Some other ideas to put the scale into perspective:

There are a bunch of slaughter counters online that increment the number of animals killed per species over time, which also helps put the scale into perspective, but not so useful for thinking about the number of animals alive at any moment. Based on the data here, it seems to be about 2000 farmed chickens (for meat or eggs) and 3000 farmed fishes slaughtered per second.

For number alive at any moment, the ratio with number of humans alive at any moment could be helpful, too, but only as a relative scale. For example, you'd imagine 2 to 3 chickens and 11 to 12 farmed fishes, in misery, hanging around each human, on average.

You could also imagine each person slaughtering a chicken every ~40 days and a farmed fish about once a month.
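
The arithmetic behind those last two figures, as a quick sketch (the ~7.7 billion world population figure is an assumption for the period in question):

```python
# Back-of-the-envelope check of the per-person slaughter intervals.
chickens_per_second = 2000
fishes_per_second = 3000
humans = 7.7e9  # assumed world population at the time
seconds_per_day = 86400

days_per_chicken_per_person = humans / (chickens_per_second * seconds_per_day)
days_per_fish_per_person = humans / (fishes_per_second * seconds_per_day)

print(round(days_per_chicken_per_person))  # ~45, i.e. roughly one chicken per ~40-45 days
print(round(days_per_fish_per_person))     # ~30, i.e. roughly one farmed fish per month
```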

Comment by michaelstjules on Changes in conditions are a priori bad for average animal welfare · 2020-02-18T18:06:52.953Z · score: 1 (1 votes) · EA · GW

Here's a consideration in the opposite direction (and which could imply positive average welfare a priori, depending on your subjective credences), which I had written about here (and then forgot):

However, I also suspect there's an argument going in the opposite direction (is it the same as the original one in the OP?): animals act to avoid suffering and seek pleasure, and the results might better be thought of as applying to behaviours in response to pleasure and suffering as signals than directly to these signals, because evolution is optimizing for behaviour, and optimizing for pleasure and suffering only as signals for behaviour. If we thought a negative event and a positive event were equally intense, probable and reinforcing *before* they happened, the positive event would be more likely to continue or happen again after it happened than the negative one after it happened, because the animal seeks the positive and avoid the negative. This would push the average welfare up. I'm pretty uncertain about this argument, though.

Indeed, you could think of humans as the most extreme and successful example of this, given how much effort we've put in to reduce suffering and discomfort and increase pleasure.

Comment by michaelstjules on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2020-02-18T18:02:16.053Z · score: 1 (1 votes) · EA · GW

I've elaborated on point 3 here.

Comment by michaelstjules on Zach Groff: Does suffering dominate enjoyment in the animal kingdom? · 2020-02-18T17:59:45.557Z · score: 3 (3 votes) · EA · GW

Zach's EA Forum post on this.

Comment by michaelstjules on Defending the Procreation Asymmetry with Conditional Interests · 2020-02-17T08:43:00.881Z · score: 1 (1 votes) · EA · GW

Here's a more thorough treatment of a similar approach:

Teruji Thomas: The Asymmetry, Uncertainty, and the Long Term

Post on the EA Forum.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-15T17:20:29.283Z · score: 1 (1 votes) · EA · GW
But AI-enabled police would be able to probe actions, infer motives, and detect bad behavior better than humans could. In addition, AI systems could have fewer rights than humans, and could be designed to be more transparent than humans, making the police's job easier.

Isn't most of this after a crime has already been committed? Is that enough if it's an existential risk? To handle this, would we want continuous monitoring of autonomous AIs, at which point aren't we actually just taking their autonomy away?

Also, if we want to automate "detect bad behavior", wouldn't that require AI alignment, too? If we don't fully automate it, then can we be confident that humans can keep up with everything they need to check themselves, given that AIs could work extremely fast? AIs might learn how much work humans can keep up with and then overwhelm them.

Furthermore, AIs may be able to learn new ways of hiding things from the police, so there could be gaps where the police are trying to catch up.

Comment by michaelstjules on Short-Term AI Alignment as a Priority Cause · 2020-02-12T22:54:27.094Z · score: 1 (1 votes) · EA · GW

Possibly. One trend in YouTube's recommendations seems to be towards more mainstream content, and EA, x-risks and farm animal welfare/rights aren't really mainstream topics (animal rights specifically might be considered radical), so any technical contributions to recommender alignment might be used to further the exclusion of these topics and be net-negative.

Advocacy, policy and getting the right people on (ethics) boards might be safer. Maybe writing about the issue for Vox's Future Perfect could be a good place to start?

Comment by michaelstjules on Global Cochineal Production: Scale, Welfare Concerns, and Potential Interventions · 2020-02-12T00:28:17.402Z · score: 8 (3 votes) · EA · GW

Also, more than just understanding the probability of sentience, it would be good to have a better idea of what kinds of things cause them to suffer (that are relevant in their production and use), not just whether they're sentient at all. For example, I've heard that crustaceans are sensitive to heat but not the cold. I suppose whether or not cochineals are sensitive to heat itself, live boiling may cause suffering through the damage it causes.

Comment by michaelstjules on Short-Term AI Alignment as a Priority Cause · 2020-02-11T19:42:07.555Z · score: 3 (3 votes) · EA · GW
As a more EA example, we can consider the case of the Malaria Consortium (or other GiveWell top charities). Much of philanthropy could become a lot more effective if donators were better informed. An aligned recommender could stress this fact, and recommend effective charities, as opposed to appealing ineffective ones. Thousands, if not hundreds of thousands of lives, could probably be saved by exposing potential donators to better quality information.

Why would an aligned recommender stress this fact? Is this something we could have much influence over?

Comment by michaelstjules on Short-Term AI Alignment as a Priority Cause · 2020-02-11T19:21:35.348Z · score: 11 (9 votes) · EA · GW

Thanks for writing this!

I will argue that short-term AI alignment should be viewed as today's greatest priority cause

I don't see much quantitative analysis in this post demonstrating this. You've shown that it's plausibly something worth working on and it can impact other priorities, but not that work on this is better than other work, either by direct comparison or by putting it on a common scale (e.g. Scale/Importance + Solvability/Tractability + Neglectedness).

I think in health and poverty in developing countries, there are well-known solutions that don't need AI or recommender alignment, although more powerful AI, more data and targeted ads might be useful in some cases (but generally not, in my view). Public health generally seems to be a lower priority than health and poverty in developing countries, but maybe the gains across many domains from better AI can help. Even then, is alignment the problem here, or is it just that we should collect more data, use more powerful algorithms and/or pay for targeted ads?

For animal welfare, I think more sophisticated targeted ads will probably go further than trying to align recommender systems, and it's not clear targeted ads are particularly cost-effective compared to, say, corporate outreach/campaigns, so tweaking them might have little value (I'm not sure how much of a role targeted ads play in corporate campaigns, though).

Comment by michaelstjules on Short-Term AI Alignment as a Priority Cause · 2020-02-11T18:44:24.835Z · score: 2 (2 votes) · EA · GW

Related: Aligning Recommender Systems as Cause Area by Ivan Vendrov and Jeremy Nixon

Comment by michaelstjules on Some (Rough) Thoughts on the Value of Campaign Contributions · 2020-02-10T19:12:04.335Z · score: 1 (1 votes) · EA · GW

FWIW, the difference in presidency score here between Biden and the Republican prior (with Biden better) is similar to that between Biden and the best Democratic candidate (although Trump is far worse than the Republican prior or Pence). I think the scores are intended to track expected utility roughly, and the differences between candidates seem pretty significant.

Check out the Excel sheets. They're still being updated.

Comment by michaelstjules on Should we minimize the suffering felt next year or speed up neglected welfare improvements? A simple model · 2020-02-08T08:54:41.106Z · score: 1 (1 votes) · EA · GW
In the first case, I examined a world in which a Random Altruist charity is already works on welfare asks. This charity chooses randomly from the set of welfare asks; this could be similar to (although probably worse than) choosing asks based on salience and emotional impact. In this case, I modeled a sample of 30 random welfare ask ranges and selections. I found that entering the space with the welfare increase method leads to more optimal outcomes ~17% of the time. The counterfactual speed-up approach leads to more optimal outcomes ~60% of the time. The two models were equally good ~23% of the time. Therefore, if we did not have perfect information and so cannot select asks which result in highest utility then maximizing counterfactual speed-up is a superior strategy. No surprise there.

I think the ranking of the two approaches could depend substantially on the distribution of magnitudes of welfare asks and the number of asks.

For example, consider a distribution which is constant in magnitude, except a few rare and very large outliers. Suppose specifically it's always positive, and constant except for exactly one large positive outlier. In this case, the optimal solution is to ensure the outlier comes as early as possible, so you choose the outlier first, and then choose any other asks after that. The welfare increase method does this, so it will always be optimal (but might tie with counterfactual speed-up). On the other hand, if the number of asks is high enough, the counterfactual speed-up approach will often choose the last ask the Random Altruist charity would have chosen so it could speed it up, which would be suboptimal.

To illustrate, consider the following sequence of asks (and their present value) that Random Altruist charity would have chosen:

1, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1

That's one 1, one 10 and then ten 1s.

Choosing the 10 has a value of 10 according to counterfactual speed-up, since it advances it by one year.

Choosing the very last 1 in the sequence has a value of 11 according to counterfactual speed-up, since it advances it by 11 years, but it wouldn't have made a real difference if you had chosen the 2nd 1 instead (the two sequences would be indistinguishable by welfare), and even choosing the very 1st 1 would have been better, since it would make the 10 come one year earlier.
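
A small sketch formalizing this example, under my reading of the setup (you do one ask in year 1 and the charity then does the remaining asks in order, one per year; this scheduling model is an assumption): it compares the credit the counterfactual speed-up metric assigns to each choice with the total welfare-years actually gained.

```python
# Asks in the order the Random Altruist charity would do them, one per year
# (0-indexed position i corresponds to calendar year i + 1).
asks = [1, 10] + [1] * 10  # one 1, one 10, then ten 1s

def speedup_credit(i):
    # Credit under the counterfactual speed-up metric for doing ask i yourself
    # in year 1: the ask's value times the number of years it is advanced.
    return asks[i] * i

def actual_gain(i):
    # Total welfare-years actually gained if the charity then does the remaining
    # asks in order, one per year, starting in year 1 (assumed scheduling model).
    gain = asks[i] * i  # the chosen ask, advanced from year i + 1 to year 1
    old_years = [y for y in range(len(asks)) if y != i]
    for new_year, old_year in enumerate(old_years):
        gain += asks[old_year] * (old_year - new_year)  # later asks each move up a year
    return gain

for label, i in [("first 1", 0), ("the 10", 1), ("second 1", 2), ("last 1", 11)]:
    print(label, "credit:", speedup_credit(i), "actual gain:", actual_gain(i))
# The speed-up metric prefers the last 1 (credit 11 vs 10 for the 10), but choosing
# the 10, or even the first 1, actually gains more welfare-years (20 vs 11).
```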


What this might suggest in general to me is that if most asks aren't very impactful or have similar impact, but there are some much more impactful outliers, we should use the welfare increase approach. This seems kind of intuitive if you thought most animal welfare charities aren't focused on farmed animals at all or the most numerous and worst treated ones (I'm not sure this is actually the case; most animal charity goes to shelters according to ACE, but I don't know if that counts as animal welfare asks). (EDIT: I suppose if there's still quite a lot of spread among the outliers, then the counterfactual speed-up approach could be better.) Of course, we could just ignore those charities, but once we do, we might be in a situation similar to the one you described as:

However, I then examined how varying the strategy of the existing charity would change the outcome. If the existing charity follows the simple welfare increase approach, the results change. Now following the simple welfare increase strategy is the optimal outcome 100% of the time! This would be the same if the actor in the space is trying maximize counterfactual speed-up, as the best way to do this when alone is to choose the biggest welfare asks.

Also, did you happen to estimate (via Monte Carlo) the expected value of each approach?

Comment by michaelstjules on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T04:26:40.938Z · score: 3 (2 votes) · EA · GW
Since we don't publicize rejections, or even who applied to the fund, I wasn't planning to write any COI statements for rejected applicants. That's a bit sad, since it kind of leaves a significant number of decisions without accountability, but I don't know what else to do.

You could give short write-ups (with COI statements) to rejected applicants, which they could then share themselves (publicly or privately). If someone asks you why a particular applicant didn't get funding, you could request permission from the applicant to share the write-up with them or direct them to the applicant.

Do you expect that publicizing rejections would deter the kinds of applicants that would actually get grants from the fund? It might be worth running an informal survey. You could publicize all applicants and rejections as a rule, but only publish reasoning and COI statements for rejections with consent from the applicants. Such a rule might even encourage some applicants, if they believe it improves transparency and accountability.

The charity evaluators GiveWell and ACE publicize the charities they consider.

Of course, I don't know how much extra work this would be.

The natural time for grantees to object to certain information to be included would be when we run our final writeup past them. They could then request that we change our writeup, or ask us to rerun the vote with certain members excluded, which would make the COI statements unnecessary.

Sounds good!

Comment by michaelstjules on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T01:17:47.764Z · score: 2 (2 votes) · EA · GW

For privacy, you could let potential grantees request the recusal of specific fund members to prevent the publication of certain COIs, since the COIs of recused fund members don't need to be published.

They could do this at two points:

1. At the very start of the process.

2. After they see the COI statements but before final decisions are made.

I'm not sure how early you would want to do 2 in the process, to avoid writing many unnecessary COI statements. You could do early screening by unanimous vote against funding specific potential grantees, and, in these cases, no COI statement would have to be written at all.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-05T23:05:41.783Z · score: 1 (1 votes) · EA · GW

Maybe this cuts to the chase: Should we expect AIs to be able to know or do anything in particular well "enough"? I.e., is there one thing in particular we can say AIs will be good at and only get wrong extremely rarely? Is solving this as hard as technical AI alignment in general?

How do you define "biological" and "brain"? Again, your input is a camera image, so you have to build this up starting from sentences of the form "the pixel in the top left corner is this shade of grey".

These are things it would be trained to learn. It would learn to read and could read biology textbooks and papers or things online, and it would also see pictures of people, brains, etc.

AIs do not usually come equipped with such functions, so you either have to say how to use the AI system to implement those functions, or you have to implement them yourself.

This could be an explicit output we train the AI to predict (possibly part of responses in language).

I mean, the existence part was not the main point -- my point was that if butterfly effects are real, then the AI system must always do nothing (even if it can't predict what the butterfly effects would be). If you want to avoid debates about population ethics, you could imagine butterfly effects that affect current people: e.g. you slightly change who talks to whom, which changes whether a person gets hit by a car later in the day or not.

I "named" a particular person in that sentence. The probability that what I do leads to an earlier death for John Doe is extremely small, and that's the probability that I'm constraining, for each person separately. This will also in practice prevent the AI from conducting murder lotteries up to a certain probability of being killed, but this probability might be too high, so you could have separate constraints for causing an earlier death for a random person or on the change in average life expectancy in the world to prevent, etc..

Comment by michaelstjules on Should Longtermists Mostly Think About Animals? · 2020-02-05T17:49:20.177Z · score: 2 (2 votes) · EA · GW
I don't think it's obvious that this is in expectation negative. I'm not at all confident that negative valence is easier to induce than positive valence today (though I think it's probably true), but conditional upon that being true, I also think it's a weird quirk of biology that negative valence may be more common than positive valence in evolved animals. Naively I would guess that the experiences of tool AI (that we may wrongly believe to not be sentient, or are otherwise callous towards) is in expectation zero. However, this may be enough for hedonic utilitarians with a moderate negative lean (3-10x, say) to believe that suffering overrides happiness in those cases.

It might be 0 in expectation to a classical utilitarian in the conditions for which they are adapted, but I expect it to go negative if the tools are initially developed through evolution (or some other optimization algorithm for design) and RL (for learning and individual behaviour optimization), and then used in different conditions. Think of "sweet spots": if you raise temperatures, that leads to more deaths by hyperthermia, but if you decrease temperatures, more deaths by hypothermia. Furry animals have been selected to have the right amount of fur for the temperatures they're exposed to, and sentient tools may be similarly adapted. I think optimization algorithms will tend towards local maxima like this (although by local maxima here, I mean with respect to conditions, while the optimization algorithm is optimizing genes; I don't have a rigorous proof connecting the two).

On the other hand, environmental conditions which are good to change in one direction and bad in the other should cancel in expectation when making a random change (with a uniform prior), and conditions that lead to improvement in each direction don't seem stable (or maybe I just can't even think of any), so are less likely than conditions which are bad to change in each direction. I.e. is there any kind of condition such that a change in each direction is positive? Like increasing the temperature and decreasing the temperature are both good?

This is also a (weak) theoretical argument that wild animal welfare is negative on average, because environmental conditions are constantly changing.

Fair enough on the rest.

Comment by michaelstjules on Should Longtermists Mostly Think About Animals? · 2020-02-05T09:48:23.353Z · score: 1 (1 votes) · EA · GW
moral patients that are themselves not moral patients

Did you mean not moral agents?

Comment by michaelstjules on Should Longtermists Mostly Think About Animals? · 2020-02-05T09:41:21.130Z · score: 5 (4 votes) · EA · GW
A5b. The argument that people may wish to directly optimize for positive utility, but nobody actively optimizes for negative utility, is in my mind some (and actually quite strong) evidence that total or negative-leaning *hedonic* utilitarians should focus more on avoiding extinction + ensuring positive outcomes than on avoiding negative outcomes.

I've argued against this point here (although I don't think my objection is very strong). Basically, we (or whoever) could be mistaken about which of our AI tools are sentient or matter, and end up putting them in conditions in which they suffer inadvertently or without concern for them, like factory farmed animals. If sentient tools are adapted to specific conditions (e.g. evolved), a random change in conditions is more likely to be detrimental than beneficial.

Also, individuals who are indifferent to or unaware of negative utility (generally or in certain things) may threaten you with creating a lot of negative utility to get what they want. EAF is doing research on this now.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-04T05:38:04.701Z · score: 1 (1 votes) · EA · GW
How do you intend to define "person" in terms of the inputs to an AI system (let's assume a camera image)?

Can we just define them as we normally do, e.g. biologically with a functioning brain? Is the concern that AIs won't be able to tell which inputs represent real things and which don't? Or that they just won't be able to apply the definitions correctly generally enough?

How do you compute the "probability" of an event?

The AI would do this. Are AIs that aren't good at estimating probabilities of events smart enough to worry about? I suppose they could be good at estimating probabilities in specific domains but not generally, or have some very specific failure cases that could be catastrophic.

What is "inaction"?

The AI waits for the next request, turns off or some other inconsequential default action.

(There's also the problem that all actions probably change who does and doesn't exists, so this law would require the AI system to always take inaction, making it useless.)

Maybe my wording didn't capture this well, but my intention was a presentist/necessitarian person-affecting approach (not that I agree with the ethical position). I'll try again:

"A particular person will have been born with action A and with inaction, and will die at least x earlier with probability > p with A than they would have with inaction."

Comment by michaelstjules on Please take the Reducing Wild-Animal Suffering Community Survey! · 2020-02-04T04:34:58.886Z · score: 1 (1 votes) · EA · GW

Good suggestion. We'll remember this for next time.

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-03T18:30:20.567Z · score: 2 (2 votes) · EA · GW
Huh? If I ask someone to manage my paperclip factory, I certainly do expect them to interpret that request to include "and also don't kill anyone".

That's what you want, but the sentence "Maximize paperclips" doesn't imply it through any literal interpretation, nor does "Maximize paperclips" imply "maximize paperclips while killing at least one person". What I'm looking for is logical equivalence, and adding qualifiers about whether or not people are killed breaks equivalence.

This is then also a problem of reasoning and understanding language: when I say "please help me write good education policy laws", if it understands language and reason, and acts based on that, that seems pretty aligned to me.

I think much more is hidden in "good", which is something people have a problem specifying fully and explicitly. The law is more specific and explicit, although it could be improved significantly.

I am not a law expert, but my impression is that there is a lot of common sense + human judgment in the application of laws, just as there is a lot of common sense + human judgment in interpreting requests.

That's true. I looked at the US Code's definition of manslaughter and it could, upon a literal interpretation, imply that helping someone procreate is manslaughter, because bringing someone into existence causes their death. That law would have to be rewritten, perhaps along the lines of "Any particular person dies at least x earlier with probability > p than they would have by inaction", or something closer to the definition of stochastic dominance for time of death (it could be a disjunction of statements). These are just first attempts, but I think they could be refined enough to capture a prohibition on killing humans to our satisfaction, and the AI wouldn't need to understand vague and underspecified words like "good".
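
As a sketch of how such a rewritten constraint might be checked, under big assumptions (that the AI can produce calibrated, paired per-person samples of death times under the action and under inaction; the function name, parameters and thresholds below are hypothetical):

```python
# Hypothetical check for: "Any particular person dies at least x earlier with
# probability > p with this action than they would have by inaction."

def violates_constraint(death_times_action, death_times_inaction,
                        x_years=1 / 365, p=1e-6):
    """death_times_*: dict person_id -> list of sampled death times (in years),
    e.g. from the AI's predictive model, with samples paired by index."""
    for person, action_samples in death_times_action.items():
        inaction_samples = death_times_inaction[person]
        earlier = sum(1 for a, b in zip(action_samples, inaction_samples)
                      if a <= b - x_years)
        if earlier / len(action_samples) > p:
            return True  # the action is ruled out because of this person
    return False
```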

We would then do this one by one for each law, but spend a disproportionate amount of time on the more important laws to get them right.

(Note that laws don't cover nonidentity cases, as far as I know.)

Comment by michaelstjules on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-03T02:12:49.434Z · score: 1 (1 votes) · EA · GW

I suspect current laws capture enough of what we care about that if an AGI followed them "properly", this wouldn't lead to worse outcomes than without AGI at all in expectation, but there could be holes to exploit and "properly" is where the challenge is, as you suggest. Many laws would have to be interpreted more broadly than before, perhaps.

You might say that we could train an AI system to learn what is and isn't breaking the law; but then you might as well train an AI system to learn what is and isn't the thing you want it to do.

Isn't interpreting statements (e.g. laws) and checking if they apply to a given action a narrower, more structured and better-defined problem than getting AI to do what we want it to do? If the AI can find an interpretation of a law according to which an action would break it with high enough probability, then that action would be ruled out. This seems like it could be a problem of reasoning and understanding language, instead of the problem of understanding and acting in line with human values.

To illustrate, "Maximize paperclips without killing anyone" is not an interpretation of "Maximize paperclips", but "Any particular person dies at least 1 day earlier with probability > p than they would have by inaction" could be an interpretation of "produce death" (although it might be better to rewrite laws in more specific numeric terms in the first place).

Defining a good search space (and search method) for interpretations of a given statement might still be a very difficult problem, though.

Comment by michaelstjules on What posts you are planning on writing? · 2020-02-02T20:59:19.570Z · score: 1 (1 votes) · EA · GW

Consider reaching out to Rethink Priorities, Charity Entrepreneurship and Good Policies (a CE-incubated charity). I think they'd be very interested, given that they're doing similar research (RP on ballot initiatives, CE did some on lobbying for animal welfare and has had interest in lobbying for tobacco taxation). Open Philanthropy Project and the managers of the EA Funds would also probably be interested in your findings.

Comment by michaelstjules on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-02T18:06:43.600Z · score: 1 (1 votes) · EA · GW
One objection here is that improving socioeconomic conditions can also broadly improve people's values. Generally speaking, increasing wealth and security promotes self-expression values, which correspond decently well to having a wide moral circle. So there's less general reason to single out moral issues like animal welfare as being a comparatively higher priority.
However, improving socioeconomic conditions also accelerates the date at which technological s-risks will present themselves. So in some cases, we are looking for differential moral progress. So this tells me to increase the weight of animal welfare for the long run. (It's overall slightly higher now than before.)

Good points. I think it's also important where these improvements (socioeconomic or moral) are happening in the world, although I'm not sure in which way. How much effect does further improvements in socioeconomic conditions in the US and China have on emerging tech and values in those countries compared to other countries?

Emerging tech is treated as an x-risk here, so s-risks from tech should be considered separately. In terms of determining weights and priorities I would sooner lump s-risks into growth and progress than into x-risks.

FWIW, s-risks are usually considered a type of x-risk, and generally involve new technologies (artificial sentience, AI).

I don't see climate change policy as promoting better moral values. Sure, better moral values can imply better climate change policy, but that doesn't mean there's a link the other way. One of the reasons animal welfare uniquely matters here is that we think there is a specific phenomenon where people care less about animals in order to justify their meat consumption.

Well, that's been observed in studies on attitudes towards animals and meat consumption, but I think similar phenomena could be plausible for climate change. Action on climate change may affect people's standards of living, and concern for future generations competes with concern for yourself.

I also don't see reducing cognitive dissonance or rationalization as the only way farm animal welfare improves values. One is just more attention to and discussion of the issue, and another could be that identifying with or looking up to people (the president, the party, the country) who care about animal welfare increases concern for animals. Possibly something similar could be the case for climate change and future generations.

Comment by michaelstjules on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-02T07:13:54.291Z · score: 1 (1 votes) · EA · GW

I apologize if you have and I missed it, but have you considered the impacts of the different candidates and policies on the EA community and those who contribute to EA causes? Policies on taxes and charity/philanthropy, for example, could have pretty important impacts. Here are Dylan Matthews and Tyler Cowen on wealth taxes and philanthropy.