Comments

Comment by vidur_kapur on What are the best arguments for an exclusively hedonistic view of value? · 2019-10-19T09:21:59.844Z · EA · GW

(Crossposted from FB)

Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure; in the process, suffering would be eliminated. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure and suffering directly, preferences are used as a proxy.

But I aver that we're not very good at considering these tradeoffs. Most are framed as thought experiments in which we are asked to imagine two 'real-world' situations. Some people may be willing to take five minutes of having a dust speck in the eye for ten minutes of eating delicious food, whereas others may only be willing to take 30 seconds of the dust speck. It's likely that, when we are asked to do this, we aren't considering the pleasure and suffering on their own, but taking other things into account too (perhaps our memories of similar situations in the past). The variance may also arise because a speck of dust in the eye *will* cause some people to suffer more than others.

Ideally, we'd be able to just consider the pleasure and the suffering on their own. That's very difficult to do, though. I think there are right answers to these tradeoff questions, but our brains aren't able to answer them precisely enough. In extreme cases, however, the hedonistic utilitarian could argue that anyone who would rather not have a blissful life at all, if it comes at the cost of being pricked by a pin, is simply wrong. It is the pleasure and the suffering that matter, no matter what people *say* they prefer. (See the 'Future Tuesday Indifference' argument promulgated by Parfit and Singer.)

Sidgwick's definition of pleasure is, after all, "a feeling which the sentient individual at the time of feeling it implicitly or explicitly apprehends to be desirable – desirable, that is, when considered merely as feeling". The feeling, as it were, cannot be unfelt, even if an individual later makes certain claims about the desirability (or lack thereof) of the feeling.

On that note, have you read Derek Parfit's 'On What Matters' (particularly Parts 1 and 6, in Volumes One and Two respectively)? In my view, he makes some convincing arguments against preference-based theories. Singer and de Lazari-Radek, in 'The Point of View of the Universe', build on his arguments to mount a defence of hedonistic utilitarianism against other normative theories, including preference utilitarianism.

Moral realists who endorse hedonistic utilitarianism, such as Singer, posit that the very nature of what Sidgwick describes as pleasure gives us reason to increase it, and that nothing else in the universe gives us similar reasons.

The experience machine is another case in which hedonistic utilitarians would postulate that people's preferences are plagued by bias. Joshua Greene and Peter Singer, for instance, have both argued that people's objections to entering the experience machine are the result of status quo bias.

See: https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.757889?journalCode=cphp20 and https://en.wikipedia.org/wiki/Experience_machine#Counterarguments

Comment by vidur_kapur on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T09:43:55.001Z · EA · GW

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know it's a weak consideration, but before reading this I hadn't considered the argument that the scale of values spreading is larger than the scale of AI alignment (perhaps because, as you point out, the numbers involved in both are huge), so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be the most important considerations when we're estimating the value of the far future, and from my perspective the higher probability of hedonium likely does make the far future robustly positive, despite the valid points you bring up. This doesn't necessarily mean that we should focus on AIA over MCE (I don't), but it does make it more likely that we should.

Another useful contribution, though others may disagree, was the biases section: the biases that could potentially favour AIA did resonate with me, and they are useful to keep in mind.

Comment by vidur_kapur on The marketing gap and a plea for moral inclusivity · 2017-07-09T15:37:50.859Z · EA · GW

Thank you for the interesting post; you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impression to fade, it seems to me.

While I do agree that it's likely that a marketing gap is perceived by a good number of newcomers (based solely on my intuition), do we have any solid evidence that such a marketing gap is perceived by newcomers in particular?

Or is it mainly perceived by more 'experienced' EAs (many of whom may prioritise causes other than global poverty) who feel as if sufficient weight isn't being given to other causes, or who feel guilty for giving a misleading impression relative to their own impressions (which are formed from being around others who think like them)? If the latter, then the marketing gap may be less problematic, and will be less likely to blow up in our faces.

Comment by vidur_kapur on The marketing gap and a plea for moral inclusivity · 2017-07-09T15:12:32.969Z · EA · GW

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

Comment by vidur_kapur on Why I left EA · 2017-02-25T10:29:40.498Z · EA · GW

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sentient beings in a speciesist manner or in a manner which discriminates against potential future beings. At least, that's the strong form of EA. This doesn't require one to be a moral realist, though it is very close to utilitarianism.

If I'm understanding this post correctly, the "weak form" of EA - donating more and donating more effectively to causes you already care about, or even just donating more effectively given the resources you're willing to commit - is not unique enough for Lila to stay. I suspect, though, that many EAs (particularly those who are only familiar with the global poverty aspect of EA) only endorse this weak form, but the more vocal EAs are the ones who endorse the strong form.

Comment by vidur_kapur on EAs are not perfect utilitarians · 2017-01-31T16:48:12.936Z · EA · GW

I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information they have about their own limits - they're human, as you put it. As someone who believes that utilitarianism is likely to be objectively true, then, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

Comment by vidur_kapur on A Different Take on President Trump · 2016-12-10T18:21:00.710Z · EA · GW

The main problem with this post, in my view, is that in places it's still trying to re-run the election debate. The relevant question is no longer who was the bigger risk, or who would cause more net suffering, out of Trump and Clinton, but how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.

I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia). However, he'll likely increase tensions in Asia, and his views on climate change seem to me to be a major risk.

In terms of values and opinion polls, immigrants to Western nations have better attitudes than people in their native countries. Furthermore, when immigrants return to their native countries, they often take back the values and norms of their host countries. I'm not saying this to make a judgement on whether immigration on this scale is good or bad, just to make the point that our aim is to make the world a better place, not to decrease crime rates in Europe.

That said, far-right extremists are on the rise in both the United States and Europe (thanks in part to irrational overreactions and hyperbolic claims that law and order is breaking down, which is patently false, as others have said, and in part to a number of false beliefs about immigration and immigrants themselves, Muslim or not). I think that one way to stop them from taking power in elections and from attacking immigrants, refugees and others is to give them the sense that they have control over 'their' borders; in other words, tactically retreating on the issue of immigration may well be a good thing. Did we need to elect Trump, with all of the risks that come with his Presidency, in order to do that?

I don't know. But Trump has been elected now, and many of his stated policies are terrible. If individual EAs think that trying to change the policies of the Trump administration from the inside would be an effective thing to do (as Peter Singer has suggested), then I'd say that's plausibly true for a small number of EAs.

I think, in general, it's true that a small number of EAs going into party politics would be an effective thing to do, over and above the policy-change focus which already exists in the EA community and some of its organisations, but that this should be done on an individual basis: EA-affiliated groups and organisations should not get involved in party politics.

Comment by vidur_kapur on What does Trump mean for EA? · 2016-11-14T19:04:13.259Z · EA · GW

Just a few thoughts.

Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.

In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organization, doing regular, everyday politics may turn some people off from the movement (depending on your view on whether EA is net-positive or negative overall, this may be bad or good.)

While Sentience Politics, the Open Philanthropy Project and some others I may have missed do take part in political activities, they focus on specific policies; I suspect that what some people are talking about would involve a systematic attempt to engage in party politics.

I think that, even without Trump, the idea of having a very small number of individual EAs (maybe 1 in 1,000) going into politics and trying to influence administrations or even become politicians was a good one.

But a systematic attempt to engage in party politics would not be a good idea, partly because, even in the EA community, focusing on party politics or on controversial policies seems to lead to less willingness to consider other points of view.

And partly because influencing administrations or becoming a politician on one's own is more likely to make a difference than engaging in regular party-political campaigning, even though it is harder to do.

Finally, I think that politics is very important, because you could potentially reduce existential risks as well as spread good values and ensure that humanity is on the right course in the future; there is therefore no tension between reducing existential risks and values spreading.

However, in order for any politicians or political advisors to be able to steer humanity in a positive direction, you need public and corporate support for it, which is why I believe that spreading anti-speciesism, working on farmed animal suffering, and so on, remains highly important too.

Overall, Trump's election has not influenced my beliefs significantly.

Comment by vidur_kapur on The need for convergence on an ethical theory · 2016-09-21T15:53:47.578Z · EA · GW

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially sought convergence among normative ethical theories and took a more zero-sum approach to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence in meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other theory he finds most plausible: the Act Utilitarianism of Singer and de Lazari-Radek.

Anyone trying to work on convergence should probably follow the fruitful debate surrounding 'On What Matters'.

Comment by vidur_kapur on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-18T12:02:13.621Z · EA · GW

It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).

If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.

Comment by vidur_kapur on EA != minimize suffering · 2016-07-21T12:15:53.868Z · EA · GW

I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.

Researchers have found, for instance, that many people's intuitive rejection of entering the Experience Machine is due to status quo bias: if people's current lives were being lived inside an Experience Machine, 50% would want to stay in the Machine even if they could instead live the lifestyle of a multi-millionaire in Monaco. Similarly, many people's intuitive rejection of the Repugnant Conclusion could be due to scope insensitivity.

And revising our principles to accommodate such intuitions may introduce inconsistencies into them. Also, if you're a moral realist, it rarely makes sense to change principles you believe to be true.

Comment by vidur_kapur on On Priors · 2016-04-27T16:00:02.920Z · EA · GW

I'm very interested in this sort of stuff, though a bit of the maths is beyond me at the moment!

Comment by vidur_kapur on Four free CFAR programs on applied rationality and AI safety · 2016-04-10T19:44:35.821Z · EA · GW

Thanks for the info! Yes, I'll give it a shot.

Comment by vidur_kapur on Four free CFAR programs on applied rationality and AI safety · 2016-04-10T16:03:20.897Z · EA · GW

I have a probably silly question about the EuroSPARC program: what if you're in the no man's land between high school and university, i.e. you've just left high school before the program starts?

I know of a couple of mathematically talented people who might be interested (and who would still be in high school), so I'll certainly try and contact them!

Comment by Vidur_Kapur on [deleted post] 2016-02-29T23:04:14.532Z

This essay by Brian Tomasik addresses this question further, looking at the overall impact of human activities on wild-animal suffering, including the effect of factory farming. Whilst human impact on the environment may lead to a net reduction in wild-animal suffering (if you think that the lives of wild animals are significantly net-negative), the people whose lives are saved by the Against Malaria Foundation have little impact on the environment, and so contribute little to that reduction in wild-animal suffering.

Comment by Vidur_Kapur on [deleted post] 2016-02-29T22:45:50.510Z

Thanks for the post. It has made me somewhat less confident that the meat-eater problem is a problem, though perhaps for different reasons than yours. I still think it is a problem overall, however. I'll just put my initial thoughts below.

> It’s also plausible that interventions that raise incomes, like deworming, have a lower impact on meat consumption because they don’t raise the overall number of humans that would be eating meat over their entire lifetime.

The effect of raising incomes itself will still tend to increase meat consumption, though. There was another helpful post on the forum recently which attempted to quantify the effect of economic growth on meat consumption. That said, it's plausible that interventions that raise incomes and contribute to increased education, such as deworming, could not only avoid raising the number of humans eating meat but also reduce the number of humans who would otherwise have existed, if education (particularly female education) does lead to lower fertility. I don't think this lowering of fertility would outweigh the increased amount of meat being eaten, though.

Also, while life-saving interventions may have no effect on fertility, or may even lower it, in the long run, there's some evidence that interventions against malaria, for instance, may raise incomes too, which would lead to more meat being eaten. Then again, more education, as a result of schooling being less disrupted once malaria is prevented, could lead to lower fertility in the long run.

> I’m less sure of other systems of animal agriculture where welfare standards are higher

Factory farming is the dominant method of animal agriculture in the UK too, though, and likely in Europe as a whole. I'm also not convinced that animal welfare standards in developing countries are significantly better even today, and I think the hypothesis that factory farming will only grow as incomes and populations grow is a strong one.

> Even if I’m wrong about the meat eater problem, we can improve the chances we’ll solve it with investments in animal organizations today

I agree with this, and I also agree with the conclusion that EA should be directing more resources towards animal advocacy, because it does appear to be quite human-centred despite the commitment to impartiality. The possibility that lab-grown meat could ensure that the meat-eater problem is not as big of a problem in the future is also an interesting one, and hopefully one which will be realised.

Again, I would agree that this makes the meat-eater problem somewhat less of a concern, but it also means that potential short-term increases in fertility, which are plausible results of global health interventions, as the report you cite states, matter more than long-run decreases in fertility: decreasing fertility in the long run is less likely to matter, because additional people in the long run will, in expectation, have less impact on meat-eating, given the ever-increasing probability of lab-grown meat becoming widely or near-fully adopted.

I also liked the idea of "working more in India" as a compromise solution.

However, I'd still disagree that we should split our donations. I would endorse the view that we should maximise expected utility and favour your option 1 of donating solely to animal charities (or to future animal suffering), and I wouldn't say that this relies on implausible causal chains either. While I have downshifted my confidence in the meat-eater problem being a thing, I still think it's more likely to be a thing than not. And I would say that the amount of suffering inflicted upon non-human animals as a result of meat-eating is greater than the amount of human suffering we could alleviate. So, if we're sufficiently worried about the meat-eater problem, chances are our donations would align best with our values if we donated solely to animal charities, and vice versa.

Comment by vidur_kapur on Effective Altruism and ethical science · 2016-01-27T18:21:50.628Z · EA · GW

I agree - it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.

Similarly, we could ask "why satisfy my own preferences?", but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.

Comment by vidur_kapur on Effective Altruism and ethical science · 2016-01-26T11:16:50.840Z · EA · GW

I agree with Squark - it's only when we've already decided that, say, saving lives is important that we create health systems to do just that.

But, I agree with the point that EA is not doing anything different to society as a whole - particularly healthcare - in terms of its philosophical assumptions. It would be fairly inconsistent to selectively look for the philosophical assumptions that underlie EA and not healthcare systems.

More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can't suddenly decide not to satisfy my own preferences, yet there's no justification for putting my own preferences above those of others. It seems to me, then, that if I am satisfying my own preferences - which it is impossible not to do - I'm obligated to maximise the preference-satisfaction of others too.

We could ask "why act in a logically consistent fashion?" or "why act as logic tells you to act?", but such questions presuppose the existence of logic, so I don't think they're valid questions to ask.

Comment by vidur_kapur on Doing Good Better - Book review and comments · 2015-12-26T10:07:08.771Z · EA · GW

I'm in agreement with you on the meat consumption issue: morality doesn't begin and end with meat consumption, but it's better to donate lots to effective animal charities and be vegan than to offset one's meat consumption, or to be vegan while spending on fancy vegan meals. This seems to be the standard utilitarian stance. That's without taking into account the benefits of being vegan in terms of flow-through effects, which have been discussed on this forum before. Personally, since I became essentially vegan, my family has had to reduce its meat consumption too, because it's no longer worth buying a lot of animal products when a third of the family is vegetarian/vegan.

In terms of the overall review, I agree that it's a good introduction for non-EAs. I enjoyed 'Doing Good Better' a lot, and I would highly recommend it too, though I doubt many people on here won't have read it.

Comment by vidur_kapur on Population ethics: In favour of total utilitarianism over average · 2015-12-23T16:38:02.907Z · EA · GW

I approach utilitarianism more from the position that, logically, I should be maximising the preference-satisfaction of others who exist or will exist, if I am doing the same for myself (which it is impossible not to do). So, in a sense, I don't believe that preference-satisfaction is good in itself, meaning that there's no obligation to make satisfied preferrers, just preferrers satisfied. I still assign some weight to the total view, though.

Comment by vidur_kapur on Population ethics: In favour of total utilitarianism over average · 2015-12-23T10:20:43.285Z · EA · GW

Interesting piece. I too reject the average view, but I'm currently in favour of prior-existence preference utilitarianism (the preferences of currently existing beings and beings who will exist in the future matter, but extinction, say, isn't bad because it prevents satisfied people from coming into existence) over the total view. I find it quite implausible that people can be harmed by not coming into existence, although I'm aware that this leads to an asymmetry: we're not obligated to bring satisfied beings into existence, but we are obligated not to bring lives not worth living into existence. One way to resolve that is some form of negative-leaning view, but that has problems too, so I'm content to live with the asymmetry for now.

Nonetheless, I agree that the Repugnant Conclusion is a fairly weak argument against the total view.

Comment by vidur_kapur on Quantifying the Impact of Economic Growth on Meat Consumption · 2015-12-22T20:20:18.304Z · EA · GW

Thank you for this - I found it to be very useful. While I recognise the PR issue, I think it's also very important to explore all areas when it comes to cause-prioritization.

Comment by vidur_kapur on Are GiveWell Top Charities Too Speculative? · 2015-12-21T22:53:58.899Z · EA · GW

Could it not plausibly be the case that supporting rigorous research explicitly into how best to reduce wild-animal suffering is robustly net positive? I say this because whenever I'm weighing cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether in global poverty or domesticated animal welfare) will have on it.

General promotion of anti-speciesism, with equal emphasis on wild animals, would also seem to be robustly net positive, although this general promotion would be difficult to do and may have a low success rate, and so would probably be outweighed in an expected-utility calculation by more speculative interventions, such as vegan advocacy, which have an unclear sign when it comes to wild-animal suffering.

Comment by vidur_kapur on EA's Image Problem · 2015-10-11T16:18:34.017Z · EA · GW

I think this is an excellent post. The point about unnecessary terminology from philosophy and economics is certainly one I've thought about before, and I like the suggestion about following Orwell's rules.

On the use of the term rational, I think it can be used in different ways. If we're celebrating Effective Altruism as a movement which proceeds from the assumption that reason and evidence should be used to put our moral beliefs into action, then I think the use of the term is fine, and indeed is one of the movement's strong points which will attract people to it.

But if we're saying something along the lines of "effective altruists are rational because they're donating to AMF (or other charities popular among EAs)", then I suppose it could be interpreted as saying we have all the answers already. So, perhaps it should be stressed that the fact that effective altruism is based on the principle that we should engage in rational inquiry does not mean that effective altruists will always be rational. From what I've read, the EA movement seems to be good at welcoming criticism, but it may not seem that way to others outside the movement.

On the point about narrow consequentialism, I agree with using other arguments, such as the drowning child argument, to counter this accusation. It may be harder to counter it with people you personally know, though: my non-EA friends know I assign a lot of weight to utilitarianism, so even if I am discussing it without using narrow consequentialist arguments, they may still see it through the lens of narrow consequentialism because they'll associate EA with me and therefore with utilitarianism. Hopefully, though, by focusing on the arguments for EA that don't rely on consequentialism, this association can be dealt with.

Comment by vidur_kapur on Political Debiasing and the Political Bias Test · 2015-09-11T16:16:39.952Z · EA · GW

Interesting test. I scored quite low in terms of political bias, but there's certainly a temptation to correct or over-correct for your biases when you're finding it very hard to choose between the options.

Comment by vidur_kapur on EA introduction course and YouTube playlists · 2015-08-13T15:45:37.710Z · EA · GW

A discussion of moral philosophy may be important not only because morality is integral to EA in general, but because it illustrates how the movement is suitable for people with wildly different views on morality, from utilitarians/consequentialists to deontologists to those who take a religious view.

I'd say that this video of Peter Singer is quite a good, short overview of cause prioritization.

Comment by vidur_kapur on Should I be vegan? · 2015-05-17T15:47:05.790Z · EA · GW

Very detailed!

I'm currently in between lacto-ovo vegetarianism and veganism in that I'm a lacto-vegetarian. This is only because I don't currently have a regular income (I'm still in high school), and attempting to replace dairy in particular has been quite an inconvenience.

So, my experience is that it's a lot less inconvenient to give up eggs than dairy products, so perhaps you could try lacto-vegetarianism; but seeing as you're willing to go "95% vegan" and potentially "100% vegan", those options are probably better in consequentialist terms overall.

Comment by vidur_kapur on Should altruism be selfless? · 2015-03-24T17:22:30.520Z · EA · GW

I've seen criticisms of effective altruism which accuse effective altruists of donating a large proportion of their income simply to improve their image and make themselves look better. On that basis, it could be argued that EA should have a closer relationship to Maximum Selflessness, but even then, people could still accuse EAs of 'being selfless' in order to improve their image.

On the other hand, if EA were centred around the concept of Maximum Selflessness, it could be perceived as too demanding. But, if the selfish reasons for being an effective altruist are promoted too much, a selfish person may simply get bored after a while and find something else which benefits him or her.

So, a balance should be struck, and I think this balance currently exists in effective altruism. From a utilitarian point of view, I wouldn't say that EA should ever be all about Maximum Selflessness: benefitting oneself along with benefitting others surely increases net happiness, or net preference-satisfaction, in the world to a greater extent than sacrificing everything for others and ending up unhappy.

Comment by vidur_kapur on Saving the World, and Healing the Sick · 2015-02-16T16:50:58.949Z · EA · GW

Thank you for giving a realistic account of what it's like to be a doctor.

I'm considering studying medicine, so this was very helpful!