

Comment by JesseClifton on Launching the EAF Fund · 2021-05-13T16:08:51.314Z · EA · GW

We at CLR are now using a different definition of s-risks.

New definition:

S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.

Note that it may turn out that the amount of suffering that we can influence is dwarfed by suffering that we can’t influence. By “expectation of suffering in the future” we mean “expectation of action-relevant suffering in the future”.

Comment by JesseClifton on Against neutrality about creating happy lives · 2021-03-15T18:28:21.952Z · EA · GW

I found it surprising that you wrote: …

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.

+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.

Comment by JesseClifton on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-11-22T04:16:07.111Z · EA · GW

Very late here, but a brainstormy thought: maybe one way one could start to make a rigorous case for RDM is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited so you can’t. You can only write down a much simpler model and use that to make a decision.

We want to pick a policy which, in some sense, has low regret with respect to the Bayes-optimal policy under the true model. If we regard our simpler model as a random draw from a space of possible simplified models that we could’ve written down, then we can ask about the frequentist properties of the regret incurred by different decision rules applied to the simple models. And it may be that non-optimizing decision rules like RDM have a favorable bias-variance tradeoff, because they don’t overfit to the oversimplified model. Basically they help mitigate a certain kind of optimizer’s curse.
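A toy simulation of that last point (entirely hypothetical — the action set, the noise levels, and the particular non-naive rule are my own stand-ins, not anything from the comment): the "true model" assigns utilities to actions, the "simplified model" only sees noisy estimates, and some actions are modeled much more noisily than others. Naively optimizing the estimates falls prey to the optimizer's curse; a rule that discounts unreliable estimates incurs lower regret against the true model.

```python
# Hypothetical sketch of the bias-variance / optimizer's-curse point above.
import random

random.seed(0)

def simulate(n_trials=20000, sigmas=(0.1, 0.5, 1.0, 2.0, 4.0)):
    naive_total = 0.0
    robust_total = 0.0
    for _ in range(n_trials):
        true_u = [random.gauss(0, 1) for _ in sigmas]  # the "true model"
        # The "simplified model": noisy estimates, per-action noise sigmas[k]
        est_u = [u + random.gauss(0, s) for u, s in zip(true_u, sigmas)]
        best = max(true_u)
        # Naive rule: optimize the simplified model's estimates outright.
        naive = max(range(len(sigmas)), key=lambda k: est_u[k])
        # Non-naive rule: shrink each estimate in proportion to its noise
        # (here, the posterior mean under the N(0,1) prior on utilities).
        robust = max(range(len(sigmas)), key=lambda k: est_u[k] / (1 + sigmas[k] ** 2))
        naive_total += best - true_u[naive]    # regret vs. the truly best action
        robust_total += best - true_u[robust]
    return naive_total / n_trials, robust_total / n_trials

naive_regret, robust_regret = simulate()
print(naive_regret, robust_regret)
```

(The shrinkage rule here happens to be Bayes-optimal under this generative model; it stands in for whatever rule — RDM-style or otherwise — avoids overfitting to the noisy estimates. The naive argmax reliably ends up with higher average regret because it favors whichever action's estimate was most inflated by noise.)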

Comment by JesseClifton on some concerns with classical utilitarianism · 2020-11-17T19:29:56.274Z · EA · GW

nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.

Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.

Comment by JesseClifton on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T13:55:38.049Z · EA · GW

Some Bayesian statisticians put together prior choice recommendations. I guess what they call a "weakly informative prior" is similar to your "low-information prior".

Comment by JesseClifton on Problems with EA representativeness and how to solve it · 2018-08-05T02:20:35.581Z · EA · GW

Nice comment; I'd also like to see a top-level post.

One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2018-01-09T23:47:33.413Z · EA · GW

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

I’m having difficulty parsing the statement you’ve attributed to me, or mapping it onto what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2018-01-08T22:21:40.799Z · EA · GW

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2018-01-07T22:22:15.538Z · EA · GW

whether you are Bayesian or not, it means that the estimate is robust to unknown information

I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?

subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.

Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2017-12-27T20:38:33.155Z · EA · GW

For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.

Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.

I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2017-12-27T07:46:00.190Z · EA · GW

But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.

I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.

Comment by JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness · 2017-12-22T17:20:48.164Z · EA · GW

I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.

I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.

(People who hold this view might not find the usual Dutch book or representation theorem arguments compelling.)

Comment by JesseClifton on What consequences? · 2017-11-24T21:28:01.583Z · EA · GW

Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.

I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.

Comment by JesseClifton on An Argument for Why the Future May Be Good · 2017-07-20T22:02:22.886Z · EA · GW

Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.

Comment by JesseClifton on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-09T17:17:07.059Z · EA · GW

Right, I'm asking how useful or dangerous your (1) could be if it didn't have very good models of human psychology - and therefore didn't understand things like "humans don't want to be killed".

Comment by JesseClifton on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-07T22:13:46.548Z · EA · GW

Great piece, thank you.

Regarding "learning to reason from humans", to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?

Of course, the motivation to act on human preferences is another matter - but I wonder if at least the capability comes by default?

Comment by JesseClifton on Introducing Sentience Institute · 2017-06-09T17:53:20.617Z · EA · GW

Have animal advocacy organizations expressed interest in using SI's findings to inform strategic decisions? To what extent will your choices of research questions be guided by the questions animal advocacy orgs say they're interested in?

Comment by JesseClifton on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2017-06-05T16:32:21.824Z · EA · GW

Strong agreement. Considerations from cognitive science might also help us to get a handle on how difficult the problem of general intelligence is, and the limits of certain techniques (e.g. reinforcement learning). This could help clarify our thinking on AI timelines as well as the constraints which any AGI must satisfy. Misc. topics that jump to mind are the mental modularity debate, the frame problem, and insight problem solving.

This is a good article on AI from a cog sci perspective:

Comment by JesseClifton on Scenarios for cellular agriculture · 2017-04-12T11:57:37.058Z · EA · GW

Yes, I think you're right, at least when prices are comparable.

Comment by JesseClifton on Outcome of GWWC Outreach Experiment · 2017-02-09T20:11:47.199Z · EA · GW

More quick Bayes: Suppose we have a Beta(0.01, 0.32) prior on the proportion of people who will pledge. I choose this prior because it gives a point-estimate of a ~3% chance of pledging, and a probability of ~95% that the chance of pledging is less than 10%, which seems prima facie reasonable.

Updating on your data using a binomial model yields a Beta(0.01, 0.32 + 14) distribution, which gives a point estimate of < 0.1% and a ~99.9% probability that the true chance of pledging is less than 10%.
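The update above can be checked numerically. A minimal sketch — the prior parameters and the 14 observations with zero pledges are from the comment; everything else is just conjugate Beta-binomial arithmetic, with the tail probability checked by Monte Carlo:

```python
import random

random.seed(1)

a, b = 0.01, 0.32             # prior: point estimate a/(a+b) ~ 3%
pledges, non_pledges = 0, 14  # observed data from the outreach experiment
post_a, post_b = a + pledges, b + non_pledges  # conjugate Beta-binomial update

prior_mean = a / (a + b)                # ~0.030
post_mean = post_a / (post_a + post_b)  # ~0.0007, i.e. < 0.1%

# Monte Carlo check of the posterior tail probability quoted above
draws = [random.betavariate(post_a, post_b) for _ in range(100_000)]
p_below_10pct = sum(d < 0.10 for d in draws) / len(draws)
print(prior_mean, post_mean, p_below_10pct)
```

The posterior mean lands below 0.1% and essentially all posterior mass sits below a 10% pledge rate, matching the figures in the comment.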

Comment by JesseClifton on Thoughts on the Reducetarian Labs MTurk Study · 2016-12-02T22:09:07.712Z · EA · GW

Thanks for writing this up.

The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter and social desirability bias.

For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like "This study shows that $1 donations to newspaper ads save 3.1 chickens on average".

I continue to question whether these studies are worthwhile. Even if it had not found significant differences between the treatments and the control, it's not as if we would stop spreading pro-animal messages. And it was not powered to detect the treatment differences in which you are interested. So it seems it was unlikely to be action-guiding from the start. And of course there's no way to know how much of the effect is explained by social desirability bias.

Comment by JesseClifton on What does Trump mean for EA? · 2016-11-12T15:00:11.925Z · EA · GW

…such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

I never meant to say that spreading anti-speciesism is the most important thing, just that it's still very important and it's not obvious that its relative value has changed with the election.

Comment by JesseClifton on What does Trump mean for EA? · 2016-11-11T23:08:00.307Z · EA · GW

Trump may represent an increased threat to democratic norms and x-risk, but that doesn't mean the marginal value of working in those areas has changed. Perhaps it has. We'd need to see concrete examples of how EAs who previously had a comparative advantage in helping animals now can do better by working on these other things.

my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run

This may be true of massive systemic changes for animals like the abolition of factory farming or large-scale humanitarian intervention in nature. But the past few years have shown that we can reduce a lot of suffering through corporate reform. Animal product alternatives are also very promising.

Also, "having our shit together in the long run" surely includes anti-speciesism (or at least much higher moral consideration for animals). Since EAs are some of the only people strategically working to spread anti-speciesism, it seems that this remains highly valuable on the margin.

Edited to add: It's possible that helping animals has become more valuable on the margin, as many people (EA and otherwise) may think similarly to you and divert resources to politics. Many animal advocates still think humans come first. Just a speculation.

Comment by JesseClifton on What does Trump mean for EA? · 2016-11-11T03:03:01.149Z · EA · GW

Agreed that large updates about things like the prevalence of regressive attitudes and the fragility of democracy should have been made before the election. But Trump's election itself has changed many EA-relevant parameters - international cooperation, x-risk, probability of animal welfare legislation, environmental policy, etc. So there may be room for substantial updates on the fact that Trump and a Republican Congress will be governing.

That said, it's not immediately obvious to me how the marginal value of any EA effort has changed, and I worry about major updates being made out of a kneejerk reaction to the horribleness of someone like Trump being elected.

Comment by JesseClifton on What does Trump mean for EA? · 2016-11-10T23:59:10.255Z · EA · GW

I'd be interested to hear a case for moving from animal advocacy to politics. If your comparative advantage was in animal advocacy before the election, it's not immediately obvious to me that switching makes sense.

In the short term, animal welfare concerns dominate human concerns, and your marginal contribution to animal welfare via politics is unclear: welfare reform in the US is happening mostly through corporate reform, and it's dubious that progressive politics is even good for wild animals due to the possible harms of environmentalism.

Looking farther into the future, it's not clear that engaging in politics has become more effective on the margin than spreading anti-speciesism.

Politics is still a crowded space and it's looking like many other progressives have been galvanized by this result.

Comment by JesseClifton on [deleted post] 2016-11-10T00:20:39.858Z

Thank you for opening this discussion.

It’s not clear to me that animal advocacy in general gets downweighted:

-For the short term, wild and farmed animal welfare dominates human concerns. I'd be interested to hear a case that animals are better served by some EAs switching to progressive politics more generally. I'm doubtful that EA contributions to politics would indirectly benefit welfare reform and wild animal suffering efforts. Welfare reform in the United States is taking place largely through corporate reform. The impact of progressive vs conservative (or Trumpian) policy on WAS is unclear, and it’s not implausible that the latter will be net helpful to wild animals due to anti-environmentalist policies. And plenty of progressives will be galvanized to work on (human-centered) progressive politics; so it’s not clear to me that the marginal value of the EA community getting involved is high.

Animal liberation, however, looks (on the face of it) worse as a cause. The election makes any kind of legal status for animals, factory farming ban, etc. in the next few decades seem even less likely.

-Looking at the farther future…I am personally skeptical about the value of any efforts to affect the long-term development of human civilization, political or otherwise. But even conditional on one thinking that trying to influence the far-future is a good idea, it’s not obvious to me that marginal anti-speciesism efforts are less valuable than marginal progressive political efforts, esp. since the latter is fairly crowded.

That said, I imagine there are many variables I haven’t considered and I think this is a great time to deepen the conversation about the extent to which progress for animals depends on the broader political circumstances.

Finally, I am wary of major belief revisions being made due to System 1 reactions. Right now I want to join the Rebel Alliance as much as the next guy, but we have to keep in mind that the consequences of Trump's election for all sentient beings are highly complex and uncertain.

Comment by JesseClifton on Where I Am Donating in 2016 · 2016-11-02T16:04:50.350Z · EA · GW

What do you mean by "too speculative"? You mean the effects of agriculture on wildlife populations are speculative? The net value of wild animal experience is unclear? Why not quantify this uncertainty and include it in the model? And is this consideration that much more speculative than the many estimates re: the far future on which your model depends?

Also, "I thought it was unlikely that I'd change my mind" is a strange reason for not accounting for this consideration in the model. Don't we build models in the first place because we don't trust such intuitions?

Comment by JesseClifton on Where I Am Donating in 2016 · 2016-11-01T17:51:26.476Z · EA · GW

Thanks for writing this up! Have you taken into account the effects of reductions in animal agriculture on wildlife populations? I didn't see terms for such effects in your cause prioritization app.

Comment by JesseClifton on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-14T13:39:26.271Z · EA · GW

It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.

This article contains an argument for time-discounted utilitarianism: I'm sure there's a lot more literature on this; that's about as far as I've looked into it.

You could also reject maximizing expected utility as the proper method of practical reasoning. Weird things happen with subjective expected utility theory, after all - St. Petersburg paradox, Pascal's Mugging, anything with infinity, dependence on possibly meaningless subjective probabilities, etc. Of course, giving to poverty charities might still be suboptimal under your preferred decision theory.

FWIW, strict utilitarianism isn't concerned with "selfishness" or "moral narcissism", just maximizing utility.

Comment by JesseClifton on A technical note: Bayesianism is not logic, statistics is not rationality · 2016-09-08T01:46:33.261Z · EA · GW

Examining the foundations of the practical reasoning used (and seemingly taken for granted) by many EAs seems highly relevant. Wish we saw more of this kind of thing.

Comment by JesseClifton on Starting a conversation about Effective Environmentalism · 2016-08-08T15:33:17.207Z · EA · GW

Have you seen Brian Tomasik's work on 1) the potential harms of environmentalism for wild animals, and 2) the effects of climate change on wild animal suffering?


Comment by JesseClifton on EA != minimize suffering · 2016-07-13T23:14:28.182Z · EA · GW

You don't think directing thousands of dollars to effective animal charities has made any difference? Or spreading effectiveness-based thinking in the animal rights community (e.g. the importance of focusing on farm animals rather than, say, shelter animals)? Or promoting cellular agriculture and plant-based meats?

As for wild animal suffering: there are a few more than 5-10 people who care (the Reducing WAS FB group has 1813 members), but yes, the community is tiny. Why does that mean thinking about how to reduce WAS accomplishes nothing? Don't you think it's worth at least trying to see if there are tractable ways to help wild animals - if only through interventions like lawn-paving and humane insecticides?

May I ask which efforts to reduce suffering you do think are worthwhile?

Comment by JesseClifton on EA != minimize suffering · 2016-07-13T20:05:44.338Z · EA · GW

What do you think of the effort to end factory farming? Or Tomasik et al's work on wild animal suffering? Do you think these increase rather than decrease suffering?

Comment by JesseClifton on EA != minimize suffering · 2016-07-13T17:23:51.738Z · EA · GW

I agree that EA as a whole doesn't have coherent goals (I think many EAs already acknowledge that it's a shared set of tools rather than a shared set of values). But why are you so sure that "it's going to cause much more suffering than it prevents"?

Comment by JesseClifton on Scenarios for cellular agriculture · 2016-06-20T23:56:21.257Z · EA · GW

Thanks a lot! I've made the correction you pointed out.

Comment by JesseClifton on Global poverty could be more cost-effective than animal advocacy (even for non-speciesists) · 2016-05-31T17:48:45.601Z · EA · GW

I'm not objecting to having moral uncertainty about animals. I'm objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say "It depends on how much you value them" rather than discussing how much we should value them.

I didn't intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals "is likely to be emotionally charged and counterproductive" - an attitude I think is widespread given how little I've seen this issue discussed - strikes me as another example of EAs' inconsistency when it comes to animals. No EA hesitates to debate, say, someone's preference for Christians over Muslims. So why are we afraid to debate preference among species?

Comment by JesseClifton on Global poverty could be more cost-effective than animal advocacy (even for non-speciesists) · 2016-05-31T16:28:32.302Z · EA · GW

I take issue with the statement "it depends greatly on how much you value a human compared to a nonhuman animal". Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read "it depends greatly on how much we ought to value a human compared to a nonhuman".

Imagine if EAs went around saying "it depends on how much you value an African relative to an American". Maybe there is more reasonable uncertainty about between- as opposed to within-species comparisons, but still we demand good reasons for the value we assign to different kinds of humans. This idea is at the core of Effective Altruism. We ought to do the same with non-human sentients.

Comment by JesseClifton on More Thoughts (and Analysis) on the Mercy For Animals Online Ads Study · 2016-05-30T05:17:27.228Z · EA · GW

I'm not saying any experiment is necessarily useless, but if MFA is going to spend a bunch of resources on another study they should use methods that won't exaggerate effectiveness.

And it's not only that "one should attend to priors in interpretation" - one should specify priors beforehand and explicitly update conditional on the data.

Comment by JesseClifton on More Thoughts (and Analysis) on the Mercy For Animals Online Ads Study · 2016-05-29T17:27:46.695Z · EA · GW

Confidence intervals still don't incorporate prior information and so give undue weight to large effects.

Comment by JesseClifton on More Thoughts (and Analysis) on the Mercy For Animals Online Ads Study · 2016-05-29T02:58:39.530Z · EA · GW

I would be especially wary of conducting more studies if we plan on trying to "prove" or "disprove" the effectiveness of ads with so dubious a tool as null hypothesis significance tests.

Even if in a new study we were to reject the null hypothesis of no effect, this would arguably still be pretty weak evidence in favor of the effectiveness of ads.

Comment by JesseClifton on Lessons from the history of animal rights · 2016-05-28T17:25:52.046Z · EA · GW

As prohibitions on methods of animal exploitation - rather than just regulations which allow those forms of exploitation to persist if they're more "humane" - I think these are different from typical welfare reforms. As I say in the post, this is the position taken by abolitionist-in-chief Gary Francione in Rain Without Thunder.

Of course the line between welfare reform and prohibition is murky. You could argue that these are not, in fact, prohibitions on the relevant form of exploitation - namely, raising animals to be killed for food. But in trying to figure out whether welfare reforms delay progress, we have to go on what evidence we have...and the fact that we do have these prohibitions on certain practices, in many cases based on the explicit recognition of animal interests that shouldn't be violated (e.g. the Five Freedoms), seems to be about as good as it gets in terms of historical evidence bearing on the debate over welfarism.

Comment by JesseClifton on Lessons from the history of animal rights · 2016-05-26T17:56:05.852Z · EA · GW

I haven't seen much on welfare reforms in these industries in particular. In the 90s Sweden required that foxes on fur farms be able to express their natural behaviors, but this made fur farming economically unviable and it ended. I'm not sure what that tells us. Other than that, animals used in fur farming and cosmetics testing are/were subject to general EU animal welfare laws, and laws concerning farm and experimental animals, respectively.

I think welfare reform having no effect on abolition is a reasonable conclusion. I just want to argue that it isn't obviously counterproductive on the basis of this historical evidence.

Comment by JesseClifton on Lessons from the history of animal rights · 2016-05-24T18:07:04.791Z · EA · GW

Thanks for the comments!

"...we have evidence that welfare reforms lead to more welfare reforms, which might suggest someday they will get us to something close to animal rights, but I think Gary Francione's historical argument that we have had welfare reforms for two centuries without significant actual improvements is a bit stronger...."

My point is that welfare reforms have led not only to more welfare reforms, but prohibitions as well. Even if we disqualify bans on battery cages, veal crates, and gestation crates as prohibitions, there are still bans on fur farming and cosmetics testing. There are also what might be considered proto-rights in the Five Freedoms.

"...many movements historically have come to a similar conclusion that seeking a more dramatic shift (abolition or desegregation) was more valuable than improved conditions (slavery reform or improved segregated black schools)."

I think the success of incrementalist vs. abolitionist strategies is highly context dependent. A society may simply not be ready to even consider the abolition of a particular institution. This seems to have been the case with abolitionist anti-vivisectionism.

And there is bias in looking at cases like slavery and civil rights in which dramatic shifts were actually achieved. Of course it looks, in retrospect, like pursuing a dramatic shift was the best choice! But history is littered with people whose calls for dramatic change were not realized: socialists, libertarians, anarchists, fascists, adherents of all religions, radical environmentalists, anti-globalizationists, anti-nuclearists, pacifists, Bernie Bros, and 19th century anti-vivisectionists. Arguably, however, each of these groups has been able to advance some of their goals through small changes.

My point is not to advocate for welfarism over abolitionism, but to say we can't predict what will work in a given time and place, and therefore we should diversify our strategic portfolio. And I do think recognizing that welfarism does not seem to have prevented progress towards abolition is especially important in the case of developing countries, which seem particularly far from being receptive to animal liberation, but where animal welfare reforms could reduce the suffering of a lot of animals in the meantime.

Comment by JesseClifton on Lessons from the history of animal rights · 2016-05-20T15:10:28.643Z · EA · GW

In the EU, prohibitions on battery cages, gestation crates, veal crates, and cosmetics testing, and the adoption of the Five Freedoms as a basis for animal welfare policy. In the UK, Austria, Netherlands, Croatia, & Bosnia & Herzegovina, bans on fur farming.

Comment by JesseClifton on Looking for Wikipedia article writers (topics include many of interest to effective altruists) · 2016-04-27T04:45:02.126Z · EA · GW

Echo what Issa said. I've been working with Vipul to create articles on animal welfare and rights topics, and it's been a valuable experience. I've learned about Wikipedia, and more importantly I have learned a ton about the animal welfare/rights movement that will inform my own activism. I have already referred a lot to what I've learned and written about in conversations with other activists about what's effective. I think it's really good that now anyone will be able to easily access this information. Plus Vipul's great to work with.

Comment by JesseClifton on On Priors · 2016-04-26T22:52:50.376Z · EA · GW

Seems like you ought to conduct the analysis with all of the reasonable priors to see how robust your conclusions are, huh?
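A minimal sketch of what that robustness check might look like (the priors and data here are invented purely for illustration): rerun the same estimate under each candidate prior and see whether the conclusion flips.

```python
# Hypothetical example: estimate a success rate under several "reasonable"
# priors and compare the resulting Beta-binomial posterior means.
data_successes, data_trials = 3, 50

priors = {
    "skeptical": (1, 19),   # expects ~5% a priori
    "uniform": (1, 1),      # flat prior
    "optimistic": (5, 5),   # expects ~50% a priori
}

# Posterior mean under each prior: (a + successes) / (a + b + trials)
results = {
    name: (a + data_successes) / (a + b + data_trials)
    for name, (a, b) in priors.items()
}
for name, post_mean in results.items():
    print(f"{name}: posterior mean = {post_mean:.3f}")
```

If a decision would change across these rows, the conclusion isn't robust to the choice of prior and the prior deserves more scrutiny.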

Comment by JesseClifton on The Poor Meat Investor Problem · 2016-04-26T01:05:39.012Z · EA · GW

"That's not what's happening here, because the case in question is an abstract discussion of a huge policy question regarding what stance we should take in the future, with little time pressure. These are precisely the areas where we should be consequentialist if ever we should be."

Most people's thinking is not nearly as targeted and consequentialist as this. On my model of human psychology, supporting the exploitation of animals in service of third-world development reinforces the belief that animals are for human benefit in general (rather than in this one instance where the benefits to all sentient beings were found to outweigh the harms). Given that speciesism is responsible for the vast majority of human-caused suffering, I think we should be extremely careful about supporting animal exploitation, even when it looks net-positive at first blush.

And I'm not concerned about EA looking "heartless and crazy" by endorsing livestock as a development tool, I was just pointing out that there are certain things EA should take off the table for signalling and memetic reasons.

"I doubt that we are well-advised to insist that people in the developing world cannot or should not own animals as assets (regardless of the balance of cost and benefits)."

There's a difference between insisting that people in the developing world not own animals as assets, which I agree would be mistaken, and opposing the adoption of livestock ownership as a development strategy.

Comment by JesseClifton on The Poor Meat Investor Problem · 2016-04-24T17:06:28.136Z · EA · GW

I think adopting and spreading some deontic heuristics regarding the exploitation of animals is good from a consequentialist perspective. Presumably, EAs don't consider whether enslaving, murdering, and eating other humans "is for the greater good impartially considered". Even putting that on the table would make EA look much more heartless and crazy than it already does, and risk spreading some very dangerous memes. Likewise, not taking a firm stand against animal exploitation as a development tool makes EA seem less serious about helping animals, and reinforces the idea that animals are here to benefit humans.