Communicate your epistemic status shifts 2021-07-14T14:07:57.049Z
The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion 2021-07-06T13:17:03.393Z
Cost-effectiveness distributions, power laws and scale invariance 2021-03-18T19:27:52.086Z
Rational altruism and risk aversion 2021-02-17T12:33:21.935Z
Clean technology innovation as the most cost-effective climate action 2020-12-06T20:06:29.789Z
Towards zero harm: animal-free and land-free food 2020-10-23T13:19:14.247Z
EA's abstract moral epistemology 2020-10-20T14:11:08.100Z
Relativistic welfare, farm animal abolitionism and wild animal welfarism 2020-08-29T08:55:16.465Z
The extreme cost-effectiveness of cell-based meat R&D 2020-08-10T14:00:23.745Z
Asymmetric altruism 2020-06-27T17:49:06.082Z
Probability estimate for wild animal welfare prioritization 2019-10-23T20:47:21.236Z
Some solutions to utilitarian problems 2019-07-14T10:04:08.387Z


Comment by Stijn on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-07T07:24:28.552Z · EA · GW

Yes, my theory favours B, assuming that those 100 billion additional people have, in expectation, a welfare higher than the threshold, that the higher X-risk in world A does not in expectation decrease the welfare of existing people, and that the negative welfare (in absolute terms) of having a miserable life is less than ten times the positive welfare of currently existing people in world A. In that case, the added welfare of those additional people is higher than the loss of welfare of the current people. In other words: if there are so many extra future people who are so happy, we really should sacrifice a lot in order to generate that outcome.

However, the question is whether we would set the threshold lower than the welfare of those future people. It is possible that most current people are die-hard person-affecting utilitarians who care only about making people happy instead of making happy people. In that case, when facing a choice between worlds A and B, people may democratically decide to set a very high threshold, which means they prefer world A.

Comment by Stijn on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-07T07:11:14.475Z · EA · GW

Hi Kevin,

thanks for the comment. My theory mostly violates that neutrality principle: all else equal, adding a person to the world who has negative welfare is bad, adding a person who has a welfare higher than threshold T is good, and in its lexical extension, adding a person with welfare between 0 and threshold T is also good (the lexical extension says that if two states are equally good when it comes to the total welfare excluding the welfare of possible people between 0 and T, then the state with the highest total welfare, including that of all possible people, is the best).

There is indeed an apparent intransitivity in my theory, but it is not a real or serious intransitivity, as it is avoided in the same way that dynamic inconsistency is avoided: by considering the choice sets. So, worlds A, B and C are equally good when you consider the full choice set {A,B,C}, but once that extra person is added, the choice set reduces to {B,C}, and then C is better than B (the extra person becomes a necessary person in choice set {B,C}). The crucial thing is that the 'better than' relationship depends on the choice set, the set of all available states. This excludes the serious 'money pump' intransitivities. In the full choice set {A,B,C}, I am indifferent between A and B, so I'm willing to switch from A to B. Now I prefer C over B (because that extra person has a higher welfare in C), and hence I'm willing to pay to switch from B to C. But as the choice set is now reduced to {B,C}, after choosing C, I can no longer switch back to A, even if I was initially indifferent between C and A. In the lexical extension of my theory, I would end up with world C.

Comment by Stijn on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-06T19:17:01.867Z · EA · GW

My theory would be like critical level utilitarianism, where necessary people, possible people with negative welfare and possible people with high positive welfare have zero critical levels, and possible people with low positive welfare have a critical level equal to their own welfare. So people can have different critical levels, and the critical level might depend on the welfare of the person. 

The problem of identity could become difficult when we consider identity as something fluid or vague. If, for example, copying a person (a kind of teleportation, but without destroying the source person) were possible: which of the two copies is the necessary person and which is the possible person? I guess the two copies have to fight that out between themselves. In general: once person A in state X identifies herself with a unique person B in state Y, and B identifies herself with A, only then are persons A and B considered identical. A necessary person is a person who is able to identify herself with a unique person in each other available state.

Comment by Stijn on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-06T17:41:37.686Z · EA · GW

That's a good summary, except that the threshold is chosen democratically by those who definitely exist. If these people choose not to ignore those people who don't definitely exist and have welfare between 0 and T, then it reduces to total utilitarianism.

Comment by Stijn on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-06T12:55:32.664Z · EA · GW

Yep, in my new EA Fellowship group, one participant also mentioned that podcast as basic inspiration to join EA. Proof by anecdote.

Comment by Stijn on Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' · 2021-04-28T12:37:56.874Z · EA · GW

I think the beatpath method to avoid intransitivity still results in a sadistic repugnant conclusion. Consider three situations. In situation 1, one person exists with high welfare 100. In situation 2, that person gets welfare 400, and 1000 additional people are added with welfare 0. In situation 3, those thousand people have welfare 1, i.e. small but positive (lives barely worth living), and the first person now gets a negative welfare of -100. Total utilitarianism says that situation 3 is best, with total welfare 900. But comparing situations 1 and 3, I would strongly prefer situation 1, with one happy person. Choosing situation 3 is both sadistic (the one person gets a negative welfare) and repugnant (this welfare loss is compensated by a huge number of lives barely worth living). Looking at harms: in situation 1, the one person has 300 units of harm (welfare 400 in situation 2 compared to 100 in situation 1). In situation 2, the 1000 additional people each have one unit of harm, which totals 1000 units. In situation 3, the first person has 200 units of harm (-100 in situation 3 compared to +100 in situation 1). According to person-affecting views, we have an intransitivity. But Schulze's beatpath method, Tideman's ranked pairs method, the minimax Condorcet method, and other selection methods to avoid intransitivity select situation 3 if situation 2 is an option (and would select situation 1 if situation 2 were not an available option, violating independence of irrelevant alternatives).
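The welfare totals and pairwise harms in this example can be recomputed in a short script (a toy recomputation of the numbers in this comment, nothing more):

```python
# Welfare profiles of the three situations (toy recomputation).
situations = {
    1: [100],                 # one person with welfare 100
    2: [400] + [0] * 1000,    # same person at 400, plus 1000 people at 0
    3: [-100] + [1] * 1000,   # same person at -100, plus 1000 people at 1
}

# Total utilitarian ranking: situation 3 wins with total welfare 900.
totals = {s: sum(w) for s, w in situations.items()}

# Pairwise harms used in the person-affecting comparison:
harm_in_1 = 400 - 100        # person 1, situation 1 vs 2: 300 units
harm_in_2 = 1000 * (1 - 0)   # the 1000 extra people, situation 2 vs 3: 1000 units
harm_in_3 = 100 - (-100)     # person 1, situation 3 vs 1: 200 units

print(totals)                           # {1: 100, 2: 400, 3: 900}
print(harm_in_1, harm_in_2, harm_in_3)  # 300 1000 200
```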

Perhaps we can solve this issue by considering complaints instead of harms. In each situation X, a person can complain against choosing that situation X over another situation Y. That complaint is a value between zero and the harm that the person has in situation X compared to situation Y. A person can choose how much to complain. For example, if the first person would fully complain in situation 1, then situation 3 will be selected, and in that situation the first person is worse off. Hence, learning about this sadistic repugnant conclusion, the first person can decide not to complain in situation 1, as if that person is not harmed in that situation. Without the complaint, situation 1 will be selected. We have to let people freely choose how much they want to complain in the different situations.

Comment by Stijn on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-07T12:02:14.364Z · EA · GW

I wrote some counter-arguments explaining why we could prefer human lives from an impartial (antispeciesist) perspective:

Comment by Stijn on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-20T12:13:07.064Z · EA · GW

Good points, but I'm a tiny bit skeptical. Consider those people who join the group under the name PISE but would not have joined when it was called Effective Altruism Erasmus: I wonder if that is really due to the reasons that were mentioned (that the -ism suffix is reminiscent of something religious, makes the name too unfamiliar or too difficult, is associated with elitism...). If that were the case, I would be surprised if those people are potentially high-impact effective altruists. To put it overly simplistically: suppose someone would not join because of the word altruism in the name. The person does not like that word or does not even know what it means (like I don't know what "Marnaism" means). How can such a person (who has such a cognitive bias towards words, is so hypersensitive to the use of a single word, thinks that an -ism word is too difficult, makes strange associations with religion, or does not even know what altruism means) be expected to become a rational, intelligent, self-critical, scientifically literate, high-impact effective altruist? Are there members of the PISE group who would have to conclude that, if the name were different, they would not have joined? Do the group members realize that?

Comment by Stijn on Differences in the Intensity of Valenced Experience across Species · 2020-11-03T09:10:43.257Z · EA · GW

About split brain: those studies are about cognition (having beliefs about what is being seen). Does anyone know if the same happens with affect (valenced experience)? For example: the left brain sees a horrible picture, the right brain sees a picture of the most joyful vacation memory. Now ask the left and right brains how they feel. I imagine such experiments have already been done? My expectation is that if you ask the hemisphere that sees the vacation picture, that hemisphere will respond that the picture, strangely enough, gives the subject a weird, unexplainable, kind of horrible feeling instead of pure joy. As if feelings are still unified. Does anyone know of such studies?

Comment by Stijn on Differences in the Intensity of Valenced Experience across Species · 2020-11-03T09:01:46.254Z · EA · GW

That anti-proportionality argument seems tricky to me. It sounds comparable to the following example. You see a grey picture, composed of small black and white pixels. (The white pixels correspond to neuron firings in your example.) The greyness depends on the proportion of white pixels. Now, what happens when you remove the black pixels? That is undefined. It could be that only white pixels are left and you now see 100% whiteness. Or the absent black pixels are still seen as black, which means the same greyness as before. Or removing the black pixels corresponds to making them transparent, and then who knows what you'll see?

Comment by Stijn on EA's abstract moral epistemology · 2020-10-23T08:17:51.385Z · EA · GW

Thanks for the answers, really appreciate it

Comment by Stijn on An introduction to global priorities research for economists · 2020-08-30T12:29:53.903Z · EA · GW

Thanks, this is exactly what I needed. Now I also need a list of researchers who are interested in collaboration on one of these research topics :-)

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-16T18:14:33.108Z · EA · GW


About the 10,000 years assumption: that is only used to calculate a high estimate of clean meat R&D. I'm not so worried if that is an overestimate.

My calculation indeed assumes no diminishing returns for clean meat R&D. I don't expect diminishing returns in the short run, when so much still needs to be researched. In my model, the decreasing neglectedness accounts for diminishing returns: when funding and investments by others increase to 1 billion dollars, the cost-effectiveness decreases by a factor of 10. Anyway, the point is that clean meat R&D is a good opportunity in the short run, for the next 10 or 20 years.

ACE's CEA methodology is different indeed, but I'm not convinced that it is really incomparable to mine. A basic assumption is that ACE's top charities that are not involved in clean meat (i.e. all the charities except the Good Food Institute) are not capable of eliminating animal farming before clean meat can.

The CEA of leafleting could indeed be an overestimate. The study that I did was not a randomized controlled trial.

About missing the impact of animal advocacy: I'm skeptical about the possibility of attitudinal change. Just like the expectations of leafleting were too high (no strong evidence of behavioral change), the expectations for other animal rights advocacy could be too high as well. The case for clean meat is different: in the past we already have striking examples of animals being replaced by more than 90% within 50 years due to new technologies (e.g. horses -> cars, whale oil -> kerosene, messenger pigeons -> telephone/telegraph, sheep wool -> synthetic fibers, animal insulin -> human recombinant DNA insulin, rabbit skin tests for cosmetics -> human skin tissue, and perhaps now movie animals -> CGI animals). These transitions were independent of animal rights campaigning.

I do see much room left for attitudinal change, in particular moral circle expansion (see e.g.), but perhaps only after 10 or 20 years, when clean meat is already well on track and has lost its opportunity for more funding (and returns have diminished). Also, once people automatically decrease their animal meat consumption, they suffer less from cognitive dissonance, which means attitudinal change might become easier.

I'm skeptical about the welfare reform strategy. For me to be indifferent between the current welfare reforms and an X% reduction of animal farming, I think X is very low, probably lower than 10%. Take cage-free eggs, for example: I don't believe that, if all battery cages were abolished and chickens had free range, that would count as more than a 10% improvement in welfare, and probably a 0% improvement in animal rights. Given moral uncertainty, I put some probability on a rights-based ethic where animals should not be used merely as a means. Also, some of the possible future welfare reforms could be so difficult that clean meat (or animal-free eggs) will arrive sooner, making the welfare reform campaigns obsolete. Welfare campaigns are also much less neglected than clean meat R&D.

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-13T17:34:59.064Z · EA · GW

Sorry, I'm not following. The gain is independent of C, and hence (at given U and F) independent of the expected time period. Assume x is such that cell-based meat enters the market 1 year sooner (i.e. x = F). Accelerating cell-based meat by one year is equally good (it spares U = 0.1×10^11 animals), whether it is a reduction from 10 to 9 years or from 100 to 99 years. Only if C/F were smaller than a year would accelerating by 1 year not work.
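As a sketch of why the gain is independent of the baseline arrival time (using the U and F values from this thread; the function name and the min() clamp are my own framing):

```python
U = 0.1e11  # animals spared per year that cell-based meat arrives earlier (10^10)
F = 1e8     # dollars of funding assumed to buy one year of acceleration

def animals_spared(x, T):
    """Animals spared by extra funding x when market entry is T years away."""
    years_earlier = min(x / F, T)  # cannot accelerate past the present day
    return U * years_earlier

# The same x = F buys the same gain whether it shortens 10 -> 9 or 100 -> 99 years:
print(animals_spared(1e8, 10))   # 10000000000.0
print(animals_spared(1e8, 100))  # 10000000000.0
```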

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-13T13:47:13.936Z · EA · GW

Thanks! I indeed assumed a zero discount rate, because I believe the disutility of farm animal suffering in the future counts the same as the disutility today. Perhaps one could use a very small discount rate to account for a human extinction probability, but then again, when humans are extinct, there will be no more farm animal suffering. I guess a higher discount rate matters when utility measures greenhouse gas emissions saved. Reducing 1 ton of CO2 now is more important than 1 ton later (because in the future the carbon absorption capacity of forests, oceans and carbon capture and storage technologies will be bigger). However, I think cell-based meat will enter the market within 10 years, so I don't expect C/F to be very big.

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-13T10:15:25.884Z · EA · GW

The basic (in my opinion realistic) assumption is that other people invest in cell-based meat R&D anyway, and that in the business-as-usual scenario (where you do not fund anything) no other strategy (technology, intervention, vegan outreach campaign,...) will be able (even with more funding) to abolish animal farming before cell-based meat enters the market at competitive prices. Suppose cell-based meat arrives within a few decades and eliminates animal farming in, say, 50 years, whereas another, next-best strategy would eliminate animal farming in 100 years. Suppose that this other strategy was less costly, for example requiring only 10 million euro of funding per year over a period of 100 years to abolish animal farming, whereas cell-based meat would require 100 million euro of funding over 50 years. And suppose that other strategy was more neglected, for example receiving only 10 million euro of funding per year, compared to 100 million for cell-based meat. Even then, extra funding for that other strategy would not be effective if it is impossible to speed it up enough to eliminate animal farming within 50 years. When that other strategy takes more than 50 years anyway, it will become obsolete in the business-as-usual scenario where cell-based meat arrives earlier and eliminates animal farming earlier. Global coordination such that all cell-based meat funding goes to that other, less costly strategy is not effective (and not very feasible). Hence, the most effective thing for us to do is to accelerate cell-based meat research, such that it enters the market one year earlier. That saves an extra year of animal suffering and greenhouse gas emissions. If other strategies received more funding, there is a likelihood that they would make cell-based meat obsolete, and this consideration is included in the 10% probability of cell-based meat eliminating animal farming.

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T18:53:09.544Z · EA · GW

I quickly made a guesstimate: (you can also compare it with shaybenmoshe's guesstimate below)

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T18:51:28.428Z · EA · GW

I'm surprised by the level of agreement between our assumptions. In your model, $200M of funding is required to advance clean meat by 0.7 years, whereas I assumed $100M and 1 year. You assume a lower greenhouse gas saving: 50% of the current 7.8 Gton of CO2 emissions, whereas I assumed an increase in meat consumption in the business-as-usual scenario and a reduction of 1 ton of CO2 per vegan year, which means a reduction of around 10 Gton (assuming 10B people). You assumed a 25% probability of success, whereas I assumed 10%. But with more lognormal error distributions, you arrive at higher $/ton estimates. Here's my guesstimate

Comment by Stijn on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T17:53:45.455Z · EA · GW

I partially agree. In my second, high-estimate model, cell-based meat arrives in 100 years. However, it will more likely arrive sooner, e.g. in 2030. From then on, carbon offsetting starts to count. I agree that we should discount future emission reductions, due to the urgency of the climate problem and the possibility that early threshold values in the climate system are passed. But 10 years is not so long.

Comment by Stijn on The problem with person-affecting views · 2020-08-06T14:45:44.382Z · EA · GW

The intransitivity problem that you address is very similar to the problem of simultaneity or synchronicity in special relativity. Consider three space-time points (events) P1, P2 and P3. The point P1 has a future and a past light cone. Points in the future light cone are in the future of P1 (i.e. a later time according to all observers). Suppose P2 and P3 are outside of the future and past light cones of P1. Then it is possible to choose a reference frame (e.g. a non-accelerating rocket) such that P1 and P2 have the same time coordinate and hence are simultaneous space-time events: the person in the rocket sees the two events happening at the same time according to his personal clock. It is also possible to perform a Lorentz transformation towards another reference frame, e.g. a second rocket moving at constant speed relative to the first rocket, such that P1 and P3 are simultaneous (i.e. the person in the second rocket sees P1 and P3 at the same time according to her personal clock). But... it is possible that P3 is in the future light cone of P2, which means that all observers agree that event P3 happens after P2 (at a later time according to all clocks). So, special relativity involves a special kind of intransitivity: P2 is simultaneous to P1, P1 is simultaneous to P3, and P3 happens later than P2. This does not make space-time inconsistent or irrational, neither does it make the notion of time incomprehensible. The same goes for person-affecting views. In the analogy: the time coordinate corresponds to a person's utility level. A later time means a higher utility. You can formulate a person-affecting axiology that is 'Lorentz invariant' just like in special relativity.

My favorite population ethical theory is variable critical level utilitarianism (VCLU).

This theory is in many EA-relevant cases (e.g. dealing with X-risks) equal to total utilitarianism, except that it avoids the very repugnant conclusion: situation A involves N extremely happy people, situation B involves the same N people, now extremely miserable (very negative utility), plus a huge number M of extra people with lives barely worth living (small positive utility). According to total utilitarianism, situation B would be better if M is large enough. I'm willing to bite the bullet of the repugnant conclusion, but this very repugnant conclusion is for me one of the most counterintuitive conclusions in population ethics. VCLU can easily avoid this.
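With illustrative numbers (my own, not from the comment: N people at welfare +10 versus the same N people at −10 plus M extra people at +0.01), total utilitarianism flips to the very repugnant situation B once M is large enough, while VCLU can block this by letting the M extra people's critical level equal their own low welfare:

```python
N = 1000

def total_A():
    return N * 10                # N extremely happy people

def total_B(M):
    return N * (-10) + M * 0.01  # N miserable people + M lives barely worth living

# Total utilitarianism: B beats A once M exceeds 2,000,000.
print(total_B(1_999_999) > total_A())  # False
print(total_B(2_000_001) > total_A())  # True

# VCLU: give the M extra people a critical level equal to their welfare (+0.01),
# so each contributes 0.01 - 0.01 = 0 and B can never beat A.
def total_B_vclu(M):
    return N * (-10) + M * (0.01 - 0.01)

print(total_B_vclu(10**9) > total_A())  # False, however large M gets
```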

Comment by Stijn on Probability estimate for wild animal welfare prioritization · 2019-10-29T19:13:27.614Z · EA · GW

A small addendum: a simplified expected value estimate of reducing X-risks versus wild animal suffering:

Comment by Stijn on Probability estimate for wild animal welfare prioritization · 2019-10-27T18:57:17.268Z · EA · GW

As mentioned, those percentages were my own subjective estimates, and they were determined based on the considerations that I mentioned ("This estimate is based on"). Since I clearly state that these are my personal, subjective estimates, I don't think it is misleading: it does not give a veneer of objectivity.

The clarifying part is that you can now decide whether you agree or disagree with the probability estimates. Breaking the estimate into factors helps you to clarify the relevant considerations and improves your accuracy. It is better than simply guessing the overall estimate of the probability that wild animal suffering is the priority.

If you don't like the wide margins, perhaps you can improve the estimates? But knowing that we often have an overconfidence bias (our error estimates are often too narrow), we should a priori not expect narrow error margins, and we should correct this bias by taking wider margins.

Comment by Stijn on Probability estimate for wild animal welfare prioritization · 2019-10-26T18:21:27.081Z · EA · GW

The personal probability estimates are pulled out of my 'air' of intuitive judgments. You are allowed to play with the numbers according to your own intuitive judgments. Breaking down the total estimate into factors allows you to make more accurate estimates, because you reflect better on all the beliefs that are relevant to the estimate.

Comment by Stijn on Probability estimate for wild animal welfare prioritization · 2019-10-26T07:42:01.996Z · EA · GW

Suppose we can choose between A: adding one person with negative utility -100, versus B: adding a thousand people, each with small positive utility +1. If the critical level were fixed at, say, +10, then situation A decreases social welfare by 110, whereas B decreases it by 9000, so traditional critical level theory indeed implies the sadistic conclusion of choosing A. However, variable critical level utilitarianism can avoid this: the one person in A can choose a very high critical level for himself in A, and the thousand people in B can set their critical levels in B at, say, +1. Then B gets chosen. In general, people can choose their critical levels such that they can steer away from the most counterintuitive conclusions. The critical levels can depend on the situation and the choice set, which gives the flexibility. You can also model this with game theory, as in my draft article:
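The arithmetic, under the standard critical-level formula (change in social welfare = sum of u_i − c_i for the added people; the exact numbers are my own recomputation):

```python
def delta_welfare(utilities, critical_levels):
    """Change in social welfare from adding people, under critical level utilitarianism."""
    return sum(u - c for u, c in zip(utilities, critical_levels))

# Fixed critical level +10 for everyone (traditional CLU):
A_fixed = delta_welfare([-100], [10])             # -110
B_fixed = delta_welfare([1] * 1000, [10] * 1000)  # -9000
print(A_fixed > B_fixed)  # True: fixed CLU sadistically picks A

# Variable critical levels: the person in A picks a very high level (say 1000),
# the thousand people in B pick +1, their own welfare:
A_var = delta_welfare([-100], [1000])             # -1100
B_var = delta_welfare([1] * 1000, [1] * 1000)     # 0
print(B_var > A_var)      # True: variable CLU picks B
```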

Comment by Stijn on Probability estimate for wild animal welfare prioritization · 2019-10-24T13:27:14.127Z · EA · GW

Perhaps I'm too sloppy with the terminology. I've rewritten the part about suffering focused ethics in the main text. What I meant is that these theories are characterized by a (procreation) asymmetry. That allows the avoidance of the repugnant sadistic conclusion (which is indeed called the very repugnant conclusion by Arrhenius).

So the suffering focused ethic that I am proposing does not imply the sadistic conclusion that you mentioned (where the state with everyone experiencing extreme suffering is considered better). My personal favorite suffering focused ethic is variable critical level utilitarianism: a flexible version of critical level utilitarianism where everyone can freely choose their own non-negative critical level, which can be different for different persons, different situations and even different choice sets. This flexibility makes it possible to steer away from the most counterintuitive conclusions.

Comment by Stijn on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-23T20:45:33.132Z · EA · GW

got it! :-)

Comment by Stijn on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-14T14:48:13.379Z · EA · GW

I'm still not perfectly convinced: there still seems to be a symmetric formulation. You describe it in terms of pushing instead of pulling. But what about the symmetry between expressions "an existing individual in X pushes the situation from X to Y", versus "an existing individual in Y pulls the situation from X to Y"? Why would there be no money pump in pulling cases if there could be a money pump in a pushing case?

That being said, my gut feeling tells me that your reference to game-theoretic instability or money pumps is similar (analogous, or perhaps exactly the same?) to my reference to dynamic inconsistency (subgame-imperfect situations) that I described in my variable critical level utilitarianism draft paper. So in the end you could be pointing at a valid argument indeed.

Comment by Stijn on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-13T22:02:36.843Z · EA · GW

It seems that with the formulation of the Comparative Interest principle, you already assume an asymmetry. Consider the symmetric (equally reasonable) formulation, by writing ‘better’ instead of ‘worse’ and switching X and Y: An outcome X is in one way better than an outcome Y if, conditional on X, the individuals in X would have a stronger overall interest in outcome X than in Y and, conditional on Y, the individuals in Y would not have an even stronger overall interest in Y than in X.

With this formulation, the procreation asymmetry illustration looks different: there is an arrow from non-existence to positive existence (the top arrow, from right to left), but no arrow from negative existence to non-existence.

Your formulation of the comparative interest principle, means that you focus on the tails of the arrows in the figure: an arrow can only be drawn if someone exists (and has interests) at the position of the tail of the arrow. My formulation focuses on the arrowheads: an arrow can only be drawn if someone exists (and has interests) at the position of the head of the arrow. There is a symmetry in choosing heads or tails, so your comparative interest principle is not suitable for a good defense of the procreation asymmetry.

I have another defense, based on my theory of variable critical level utilitarianism ( This is a critical level utilitarianism, where now everyone is free to choose their own critical level. The condition is: everyone should be willing to accept a life at the chosen critical level. This means that no-one will choose a negative critical level. Critical levels always have to be positive. That introduces an asymmetry between the positive and the negative, and this asymmetry is at the root of the procreation asymmetry.

Comment by Stijn on Some solutions to utilitarian problems · 2019-07-19T08:46:20.463Z · EA · GW

Now that's a suggestion :-) My intention is to do academic economic research on the implications of such population ethical theories for cost-benefit analysis. My preliminary, highly uncertain guess is that variable critical level utilitarianism results in a higher priority for avoiding current suffering (e.g. livestock farming, wild animal suffering), because it is closer to negative utilitarianism or person-affecting views, compared to e.g. total utilitarianism, which prioritizes the far future (existential risk reduction). And my even more uncertain guess is that variable critical level utilitarianism is less vulnerable than total utilitarianism to counterintuitive sadistic repugnant conclusions. This means that future generations may also be inclined to be variable critical levellers instead of totalists, and that means we should discount future generations more (i.e. prioritize current generations more and focus less on existential risk reduction). But this conclusion will be very sensitive to the critical levels chosen by current and future generations.

Comment by Stijn on Some solutions to utilitarian problems · 2019-07-14T10:10:26.410Z · EA · GW

Thanks, I corrected the link

Comment by Stijn on When should an Effective Altruist be vegetarian? · 2015-04-18T19:45:25.906Z · EA · GW

I don't follow the logic of the argument, but at first sight it seems scary. Suppose I really hate my ex-girlfriend. In fact, I hate her so much that I want to kill her. I am even willing to pay $6000 to an assassin to do the job. But instead I kill her myself and give the 6000 dollars to SCI to save a life. (I could even steal all her money after killing her and give it away to effective charities.) "If you would happily pay this much (in my case, $6000) to kill someone, you probably shouldn't abstain from killing that person." If this is how effective altruists would defend their meat consumption, it will discredit the whole idea of effective altruism.