Possible misconceptions about (strong) longtermism 2021-03-09T17:58:54.851Z
Important Between-Cause Considerations: things every EA should know about 2021-01-28T19:56:31.730Z
What is a book that genuinely changed your life for the better? 2020-10-21T19:33:15.175Z
jackmalde's Shortform 2020-10-05T21:53:33.811Z
The problem with person-affecting views 2020-08-05T18:37:00.768Z
Are we neglecting education? Philosophy in schools as a longtermist area 2020-07-30T16:31:37.847Z
The 80,000 Hours podcast should host debates 2020-07-10T16:42:06.387Z


Comment by jackmalde on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-07T18:52:44.633Z · EA · GW

I think that's fair but I also think that non-neglectedness is actually bad for two reasons:

  1. Diminishing returns (which may not be the case if people are solving the problem poorly)
  2. Crowdedness meaning it's harder to change direction even if people are solving the problem poorly (although this point is really about tractability, so one needs to be careful not to double-count when doing an ITN assessment).

I'm thinking number 2 could be quite relevant in this case. Admittedly it's quite relevant for any EA intervention that involves systemic change, but I get the impression that other systemic change interventions may be even higher in importance.

Comment by jackmalde on Concerns with ACE's Recent Behavior · 2021-04-20T05:39:34.272Z · EA · GW

The only thing of interest here is what sort of compromise ACE wanted. What CARE said in response is not of immediate interest, and there's certainly no need to actually share the messages themselves.

Perhaps you can understand why one might come away from this conversation thinking that ACE tried to deplatform the speaker? To me at least it feels hard to interpret "find a compromise" any other way.

Comment by jackmalde on Concerns with ACE's Recent Behavior · 2021-04-17T15:20:49.190Z · EA · GW

Thanks for writing this comment as I think you make some good points and I would like people who disagree with Hypatia to speak up rather than stay silent.

Having said that, I do have a few critical thoughts on your comment. 

Your main issue seems to be the claim that these harms are linked, but you just respond by only saying how you feel reading the quote, which isn't a particularly valuable approach.

I don’t think this was Hypatia’s main issue. Quoting Hypatia directly, they imply the following are the main issues:

  • The language used in the statement makes it hard to interpret and assess factually
  • It made bold claims with little evidence
  • It recommended readers spend time going through resources of questionable value

Someone called Encompass a hate group (which as a side note, it definitely is not). The Anima Executive Director in question liked this comment.

You bring this up a few times in your comment. Personally I give the ED the benefit of the doubt here because the comment in question also said “what does this have to do with helping animals", which is a point the ED makes elsewhere in the thread, so it’s possible that they were agreeing with this part of the comment as opposed to the ‘hate group’ part. I can’t be sure of course, but I highly doubt the ED genuinely agrees that Encompass is a hate group, given that their other comments in the thread seem fairly respectful of Encompass, including “it's not really about animal advocacy, it's about racial injustice and how animal advocates can help with that. That's admirable of course, I just don't think it's relevant to this group”.

This was a red-flag to ACE (and probably should have been to many people), since the ED had both liked some pretty inflammatory / harmful statements, and was speaking on a topic they clearly had both very strong and controversial views on, regarding which they had previously picked fights on.

You seem to imply that others should have withdrawn from the conference too, or at least that they should have considered it? This all gets to the heart of the issue about free speech and cancel culture. Who decides what’s acceptable and what isn’t? When is expressing a different point of view just that, versus "picking a fight"? Is it bad to hold "strong and controversial views"?

People were certainly affected by the ED’s comments, but people are affected by all sorts of comments that we don’t, and probably shouldn't, cancel people for. People will be affected by your comment, and people will be affected by my comment. When talking about contentious issues, people will be affected. It’s unavoidable unless we shut down debate altogether. You imply that the ED's actions were beyond the pale, but we need to realise that this is an inherently subjective viewpoint and it's clearly the case that not everyone agrees. So whilst ACE had the right to withdraw, I'm not sure we can imply that others should have too.

Comment by jackmalde on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T07:57:21.214Z · EA · GW

I don't find your comment to have much in the way of argument as to why it might be bad if papers like this one become more widespread. What are you actually worried would happen? This isn't super clear to me at the moment.

I agree a paper that just says "we should ignore the repugnant conclusion" without saying anything else isn't very helpful, but this paper does at least gather reasons why the repugnant conclusion may be on shaky ground, which seems somewhat useful to me.

Comment by jackmalde on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-11T17:14:40.264Z · EA · GW

My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.

Comment by jackmalde on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T09:38:01.936Z · EA · GW

Thanks for writing this Michael, I would love to see more research in this area. 

Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that the outcome would be better achieved by expanding moral circles along other dimensions (e.g., by doing concrete wild animal welfare work, advocating for caring about all sentient beings, or advocating for caring about future artificial sentient beings).[2] 

This is definitely an important point.

This is very speculative, but part of me wonders if the best thing to advocate for is (impartial) utilitarianism. This would, if done successfully, expand moral circles across all relevant boundaries including farm animals, wild animals and artificial sentience, and future beings. Advocacy for utilitarianism would naturally include "examples", such as ending factory farming, so it wouldn't have to be entirely removed from talk of farmed animals. I'm quite uncertain if such advocacy would be effective (or even be good in expectation), but it is perhaps an option to consider.

(Of course this all assumes that utilitarianism is true/the best moral theory we currently have).

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-06T18:40:49.831Z · EA · GW

To be honest I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was to realise there seems to be an issue of complex cluelessness in the first place - where we can't really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when they're trying to do the most good.

Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat on" when doing these things. In other words, I'm not having a child to do the most good, I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness because I'm trying to do the most good and really thinking hard about how to do so.

I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T11:04:55.956Z · EA · GW

So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).

I think simple cluelessness is a subjective state.  In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row on either side of the aisle, etc.).

I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other - otherwise they fall prey to paralysis. Admittedly I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.

EDIT: To be honest I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing from Greaves was to realise there seems to be an issue of complex cluelessness in the first place where we can't really form precise credences. 

FWIW, I also think other work of Greaves has been very useful. And I think most people - though not everyone - who've thought about the topic think the cluelessness stuff is much more useful than I think it is.

For me, Greaves' work on cluelessness just highlighted to me a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for shorttermists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to the problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T08:00:56.726Z · EA · GW

Your critique of the conception example might be fair actually. I do think it's possible to think up circumstances of genuine 'simple cluelessness' though where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative. 

For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF. 

However I think the reason why Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffer from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness for the reasons I give in the post, so I think Greaves' work has been useful.

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T07:32:48.956Z · EA · GW

Thanks for all your comments Michael, and thanks for recommending this post to others!

I have read through your comments and there is certainly a lot of interesting stuff to think about there. I hope to respond but I might not be able to do that in the very near future.  

I'd suggest editing the post to put the misconceptions in the headings in quote marks

Great suggestion thanks, I have done that.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:54:11.812Z · EA · GW

OK thanks I think that is clearer now.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:37:51.594Z · EA · GW

Thanks yeah, I saw this section of the paper after I posted my original comment. I might be wrong but I don't think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper. 

So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:24:31.239Z · EA · GW

OK that's clearer, although I'm not immediately sure why the paper would have prompted the following update:

I somewhat updated my views regarding: 

  • how likely such a lock-in is
    • and in particular how likely it is that a state that looks like it might be a lock-in would actually be a lock-in
      • ...

I think Tarsney implies that institutional reform is less likely to be a true lock-in, but he doesn't really back this up with much argument. He just implies that this point is somewhat obvious. Under this assumption, I can understand why his model would lead to the following update:

  • ...
    • ...
      • and in particular how much the epistemic challenge to longtermism might undermine a focus on this type of potential lock-in in particular

In other words, if Tarsney had engaged in a discussion about why institutional change isn't actually likely to be stable/persistent, providing object-level reasons why (which may involve disagreeing with Greaves and MacAskill's points), I think I too would update away from thinking institutional change is that important, but I don't think he really engages in this discussion.

I should say that I haven't properly read through the whole paper (I have mainly relied on watching the video and skimming through the paper), so it's possible I'm missing some things.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T16:23:45.129Z · EA · GW

In case anyone is interested, Rob Wiblin will be interviewing Tarsney on the 80,000 Hours podcast next week. Rob is accepting question suggestions on Facebook (I think you can submit questions to Rob on Twitter or by email too).

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T15:50:02.238Z · EA · GW

I agree with you that Tarsney hasn't been clear, but I think you've got it the wrong way around (please tell me if you think I'm wrong though). The abstract to the paper says:

But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

These two sentences seem to say different things, as you have outlined. The first implies that you need fanaticism, whilst the second implies you need either fanaticism or non-obvious but plausible empirical views. Contrary to your view, I think the former is actually correct.

Tarsney initially runs his model using point estimates for the parameters and concludes that the case for longtermism is "plausible-but-uncertain" if we assume that humanity will eventually spread to the stars, and "extremely demanding" if we don't make that assumption. Therefore longtermism doesn't really "survive the epistemic challenge" when using point estimates.

Tarsney says however that "The ideal Bayesian approach would be to treat all the model parameters as random variables rather than point estimates". So if we're Bayesians we can pretty much ignore the conclusions so far and everything is still to play for.

When Tarsney does incorporate uncertainty for all parameters, the expectational superiority of longtermism becomes clear because "the potential upside of longtermist interventions is so enormous". In other words the use of random variables allows for fanaticism to take over and demonstrates the superiority of longtermism. 

So it seems to me that it really is fanaticism that is doing the work here. Would be interested to hear your thoughts.

EDIT: On a closer look at his paper Tarsney does say that it isn't clear how Pascalian the superiority of longtermism is because of the "tremendous room for reasonable disagreement about the relevant probabilities". Perhaps this is what you're getting at Michael?

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T14:48:27.366Z · EA · GW

This indeed seems like an interesting implication of Tarsney's model, and indeed updates me towards placing a bit less emphasis on reducing non-extinction existential risks - e.g., reducing the chance of lock-in of a bad governmental system or set of values. 

Surely "lock-in" implies stability and persistence?

Greaves and MacAskill introduce the concept of the 'non-extinction attractor state' to capture interventions that can achieve the persistence Tarsney says is so important, but that don't rely on extinction to do so. 

This includes institutional reform:

But once such institutions were created, they might persist indefinitely. Political institutions often change as a result of conflict or competition with other states. For strong world governments, this consideration would not apply (Caplan 2008). In the past, governments have also often changed as a result of civil war or internal revolution. However, advancing technology might make that far less likely for a future world government: modern and future surveillance technologies could prevent insurrection, and AI-controlled police and armies could be controlled by the leaders of the government, thereby removing the possibility of a military coup (Caplan 2008; Smith 2014).

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:36:47.023Z · EA · GW

I haven't read that post but will definitely have a look, thanks.

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:35:28.665Z · EA · GW

Yeah this all makes sense, thanks.

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:33:56.047Z · EA · GW

Thanks for this Michael, I'd be very interested to read this post when you publish it. Especially as my career has taken a (potentially temporary) turn in the general direction of speeding up progress, rather than towards safety. I still feel that Ben Todd and co are probably right, but I want to read more.

Also, relevant part from Greaves and MacAskill's paper:

Just how much of an improvement [speeding up progress] amounts to depends, however, on the shape of the progress curve. In a discrete-time model, the benefit of advancing progress by one time period (assuming that at the end of history, one thereby gets one additional time period spent in the “end state”) is equal to the duration of that period multiplied by the difference between the amounts of value that are contained in the first and last periods. Therefore, if value per unit time is set to plateau off at a relatively modest level, then the gains from advancing progress are correspondingly modest. Similarly, if value per unit time eventually rises to a level enormously higher than that of today, then the gains from advancing progress are correspondingly enormous.
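The quoted claim, that in a discrete-time model advancing progress by one period is worth the difference between last- and first-period value, can be sketched with a toy model (the trajectories below are entirely hypothetical numbers, chosen only to illustrate the plateau vs. steep-growth contrast):

```python
# Toy illustration of the Greaves-MacAskill point quoted above: advancing a
# progress trajectory by one period (with the end state enjoyed for one extra
# period) yields a total benefit equal to the difference between the value of
# the last and first periods. All numbers are hypothetical.

def total_value(values):
    return sum(values)

def advance_one_period(values):
    # Shift the whole trajectory forward by one period; the end state is then
    # enjoyed for one additional period at the end of history.
    return values[1:] + [values[-1]]

# Plateauing trajectory: gains from speeding up progress are modest.
plateau = [1, 2, 3, 4, 4, 4, 4, 4]
gain_plateau = total_value(advance_one_period(plateau)) - total_value(plateau)

# Steeply rising trajectory: gains from speeding up progress are enormous.
rising = [1, 2, 4, 8, 16, 32, 64, 128]
gain_rising = total_value(advance_one_period(rising)) - total_value(rising)

print(gain_plateau)  # 3   == plateau[-1] - plateau[0]
print(gain_rising)   # 127 == rising[-1] - rising[0]
```

In both cases the gain reduces to (value of last period) minus (value of first period), which is exactly why the shape of the progress curve does all the work in the quoted passage.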

Comment by jackmalde on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T09:51:32.348Z · EA · GW

I haven't actually read the Dasgupta review, only that first link you shared. Overall I think EAs probably don't disagree that much with what Dasgupta is saying but probably focus on other things due to neglectedness. Even if economics doesn't account for nature enough, there are still loads of people shouting about the negative effect we have on nature, and this review was actually commissioned by the UK Government so they are clearly aware of the problem. It's also hardly news that GDP isn't a perfect measure. Compare this to things like biorisk and risk from unaligned AI which important people generally don't think about.

Otherwise a few things jumped out to me from that first link:

Biological diversity is, in fact, declining faster now than at any time in our history. Since 1970, there has been on average almost a 70% drop in the populations of mammals, birds, fish, reptiles, and amphibians. Around one million animal and plant species – almost a quarter of the global total – are believed to be threatened with extinction.

Beyond its intrinsic – and incalculable – worth, biodiversity provides fundamental natural “dividends” that nourish and protect us: from basic sustenance through fish stocks or insects that pollinate crops, to soil regeneration, and water and flooding regulation. Not to mention the cultural and spiritual values that enrich our lives.

Dasgupta doesn't appear to have factored in animal welfare. Fish "sustaining" us probably isn't a great thing (unless perhaps some people literally don't have any other options) and reduction in wild animal populations could actually be good if they live net negative lives (which is quite possible).

The review also refers to the 'intrinsic' value of biodiversity. I'd imagine EAs mostly reject this, thinking biodiversity has only instrumental value.

Thank you for raising this though, I'm hoping to read the report (or maybe a good summary!) and it's possible that the EA community should too. If natural capital is indeed important in sustaining economic development then it is an important consideration from a long-term perspective.

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T14:00:09.195Z · EA · GW

OK thanks that makes sense

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T17:27:01.657Z · EA · GW

It's worth noting that I only believe this under the assumption that the individual donors know about some specific opportunities that the fund managers are unaware of, or perhaps have significant worldview differences with the fund managers.

The long-term future fund can only give to people who apply for funding though (right?) whereas someone who wins a donor lottery can give literally anywhere. This seems another reason why a donor lottery winner might give better?

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:07:45.286Z · EA · GW

Thanks! I'm actually not surprised that the quality of grant applications might be increasing e.g. due to people learning more about what makes for a good grant.

I have a follow-on question. Do you think that the increase in the size of the grant requests is justified? Is this because people are being more ambitious in what they want to do?

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T14:46:57.093Z · EA · GW

We’ve recently changed parts of the fund’s infrastructure and composition, and it’s possible that these changes have caused us to unintentionally lower our standards for funding. My personal sense is that this isn’t the case

Can you say more about why the changes might have led to lower standards for funding? It sounds like you think there are some at least somewhat plausible reasons why this might be the case.

Can you also say more about why you actually don't think the standards have fallen despite these possible reasons?

Comment by jackmalde on FAQ: UK Civil Service Job Applications · 2021-03-27T21:33:01.331Z · EA · GW

Thanks for writing this!

Consider working outside of your preferred policy area at first in order to build up your policy experience. After a few years of relevant experience, you may find it easier to find a role in a more competitive policy area.

Do you know if this works for someone coming from outside the civil service? E.g. if someone works in say economic policy at a think tank and they want to shift into say an emerging technology role at the civil service, would they be able to easily do so? Or would the job description likely say something like "experience in emerging technology policy required"?

Comment by jackmalde on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T17:37:03.039Z · EA · GW

If you look back at Jonas' post, a name change was just a "potential implication", alongside other steps to "de-emphasize the EA brand". I wouldn't say therefore that he is advocating a name change, just putting the idea out there.

Also, he certainly doesn't advocate changing it to "Global Priorities" specifically as you have claimed. It was just one very tentative idea he had (the clue is in the use of "e.g.").

EDIT: retracted, as I thought AllAmericanBreakfast still thought Jonas was advocating for a name change but I misread

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-03-26T16:38:18.167Z · EA · GW

I'm unsure how many proposed longtermist interventions don't  rely on the concept of attractor states. For example, in Greaves and MacAskill's The Case for Strong Longtermism, they class mitigating (fairly extreme) climate change as an intervention that steers away from a "non-extinction" attractor state:

A sufficiently warmer climate could result in a slower long-run growth rate (Pindyck 2013, Stern 2006), making future civilisation poorer indefinitely; or it could mean that the planet cannot in the future sustain as large a human population (Aral 2014); or it could cause unrecoverable ecosystem losses, such as species extinction and destruction of coral reefs (IPCC 2014, pp.1052-54)

Perhaps Nick Beckstead's work deviates from the concept of attractor states? I haven't looked at his work very closely so am not too sure. Do you feel that "ordinary" (non-attractor state) longtermist interventions are commonly put forward in the longtermist community?

The only intervention in Greaves and MacAskill's paper that doesn't rely on an attractor state is "speeding up progress":

Suppose, for instance, we bring it about that the progress level that would otherwise have been realised in 2030 is instead realised in 2029 (say, by hastening the advent of some beneficial new technology), and that progress then continues from that point on just as it would have if the point in question had been reached one year later. Then, for as long as the progress curve retains a positive slope, people living at every future time will be a little bit better off than they would have been without the intervention. In principle, these small benefits at each of an enormous number of future times could add up to a very large aggregate benefit.

I'd be interested to hear your thoughts on what you think the forecasting error function would be in this case. My (very immediate and perhaps incorrect) thought is that speeding up progress doesn't fall prey to a noisier signal over time. Instead I'm thinking it would be constant noisiness, although I'm struggling to articulate why. I guess it's something along the lines of "progress is predictable, and we're just bringing it forward in time which makes it no less predictable".

Overall thanks for writing this post, I found it interesting!

Comment by jackmalde on Proposed Longtermist Flag · 2021-03-24T16:09:38.918Z · EA · GW

I like this. Ryan's original example, whilst a pretty good suggestion overall, gives the impression of insignificance, whereas this one gives the impression of insignificance mixed with vast potential and hope for something more.

The only reservation I have is that this flag might imply that longtermism is only valid if we can spread to the stars. I think the jury is still out on whether or not this is actually the case? It has been suggested that existential security may only be possible if we spread out in the universe, but I'm not sure if this is generally accepted?

Perhaps I'm being overly nitpicky though.

Comment by jackmalde on Why do so few EAs and Rationalists have children? · 2021-03-24T07:05:36.417Z · EA · GW

By the way, Toby Ord weighs in on this at 24:33 in his Global Reconnect interview

He basically agrees with Michael that having children and raising them as EAs is unlikely to be as cost-effective as spreading EA to existing adults. He also seems to feel somewhat uncomfortable about the idea of raising children as EAs.

Comment by jackmalde on On future people, looking back at 21st century longtermism · 2021-03-23T20:08:39.635Z · EA · GW

I think it is almost nonsensical to ask, say, whether it would be better for a pig to be a human, or for a human to be a dog

To clarify I'm not asking that question. I class myself as a hedonistic utilitarian, which just means that I want to maximise the balance of positive over negative experiences. So I'm not saying that it would be better for a pig to be a human, just that if we were to replace a pig with a human we may increase total welfare (if the human has greater capacity for welfare than the pig). I agree that determining if humans have greater capacity for welfare than pigs isn't particularly tractable though - I too haven't really read up much on this.

whether longtermist considerations shouldn't cause us to worry about orangutan extinction risks, too, given that orangutans are not so dissimilar from what we were some few millions of years ago. So that in a very distant future they might have the potential to be something like human, or more?

That's an interesting possibility! I don't know enough biology to comment on the likelihood. 

I should mention that I think your argument for species extinction is reasonable & I'm glad there's someone out there making it

To be honest I'm actually quite unsure if we should be trying to make all non-human animals go extinct. I don't know how tractable that is or what the indirect effects would be. I'm saying, putting those considerations aside, that it would probably be good from a longtermist point of view.

The exception is of course factory-farmed animals. I do hope they go extinct and I support tangible efforts to achieve this e.g. plant-based and clean meat.

Comment by jackmalde on On future people, looking back at 21st century longtermism · 2021-03-22T23:40:23.760Z · EA · GW

This is where I ought to point out that I'm not a utilitarian or even a consequentialist, so if we disagree, that's probably why.

Yes I would say that I am a consequentialist and, more specifically a utilitarian, so that may be doing a lot of work in determining where we disagree. 

That humans have a higher capacity of welfare seems questionable to me, but I guess we'd have to define well-being before proceeding. Why do you think so? Is it because we are more intelligent & therefore have access to "higher" pleasures?

I do have a strong intuition that humans are simply more capable of having wonderful lives than other species, and this is probably down to higher intelligence. Therefore, given that I see no intrinsic value and little instrumental value in species diversity, if I could play god I would just make loads of humans (assuming total utilitarianism is true). I could possibly be wrong that humans are more capable of wonderful lives though.

It seems unfair to make a group's existence miserable & then to make them go extinct because they are so miserable!

Life is not fair. The simple point is that non-human animals are very prone to exploitation (factory farming is a case in point). There are risks of astronomical suffering that could be locked in in the future. I just don't think it's worth the risk, so, as a utilitarian, it just makes sense to me to have humans over chickens. You could argue that getting rid of all humans gets rid of exploitation too, but ultimately I do think maximising welfare just means having loads of humans, so I lean towards being averse to human extinction.

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct.

Absolutely I care about orangutans, and the death of orangutans that are living good lives is a bad thing. I was just making the point that if one puts their longtermist hat on, these deaths are very insignificant compared to other issues (in reality I have some moral uncertainty and so would wear my short-termist cap too, making me want to save an orangutan if it were easy to do so).

Most of that sounds like a great future for humans.

Yes indeed. My utilitarian philosophy doesn't care that we would have loads of humans and no non-human animals. Again, this is justified due to lower risks of exploitation for humans and (possibly) greater capacities for welfare. I just want to maximise welfare and I don't care who or what holds that welfare.

Comment by jackmalde on On future people, looking back at 21st century longtermism · 2021-03-22T21:59:53.329Z · EA · GW

Thank you for raising non-human animals. I believe that longtermists don't talk about non-human animals enough. That is one reason I wrote that post that you have linked to.

In the post I actually argue that non-human animal extinction would be good. This is because it isn't at all clear that non-human animals live good lives. Even if some or many of them do live good lives, if they go extinct we can simply replace them with more humans, which seems preferable because humans probably have a higher capacity for welfare and are less prone to being exploited (I'm assuming here that there is no/little value in having species diversity). There are realistic possibilities of terrible animal suffering occurring in the future, and possibly even getting locked in to some extent, so I think non-human animal extinction would be a good thing.

Similarly (from a longtermist point of view), who really cares if orangutans go extinct? The space they inhabit could just be taken over by a different species. The reason longtermists really care if humans go extinct is not down to speciesism, but because humans really do have the potential to make an amazing future. We could spread to the stars. We could enhance ourselves to experience amazing lives beyond what we can now imagine. We may be able to solve wild animal suffering. Also, to return to my original point, we tend to have good lives (at least this is what most people think). These arguments don't necessarily hold for other species that are far less intelligent than humans, so such species are, in my humble opinion, mainly a liability from a longtermist's point of view.

Comment by jackmalde on Please stand with the Asian diaspora · 2021-03-21T11:04:42.190Z · EA · GW

Thanks for this, I think that all makes a lot of sense.

FWIW I wasn't necessarily asking you to provide this feedback to Dale. I was just noting that such feedback hadn't yet been provided. I interpreted your earlier comment as implying that it had.

Comment by jackmalde on Please stand with the Asian diaspora · 2021-03-21T06:33:21.130Z · EA · GW

As someone who often gives criticism, sometimes unpopular criticism, I both appreciate when people point out ways I could phrase it better

Neither you nor Khorton appear to have done this for Dale, at least not very clearly.

Comment by jackmalde on Please stand with the Asian diaspora · 2021-03-20T21:01:48.323Z · EA · GW

Fair enough. I agree that Dale perhaps could have included words along the lines of "I understand the hurt the Asian community must be feeling but...".

Comment by jackmalde on Please stand with the Asian diaspora · 2021-03-20T20:47:33.698Z · EA · GW

In theory at least, pointing out that someone may be mistaken about something that has been troubling them can be comforting.

Whether or not this works in practice I'm less sure about.

Comment by jackmalde on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-20T15:21:14.457Z · EA · GW

Sounds good!

Comment by jackmalde on EA capital allocation is an inner ring · 2021-03-20T09:50:52.763Z · EA · GW

Even if your intentions are good, surely it should be clear at this point that your approach is proving completely ineffective?

Comment by jackmalde on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-20T06:34:09.406Z · EA · GW

Could you provide some possible examples of AI safety interventions that could be carried out? I'm unclear on what these might look like.

Comment by jackmalde on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-19T11:15:10.514Z · EA · GW

Part of me wonders if a better model than the one outlined in this post would be for Nonlinear to collaborate with well-established AI research organisations, who can advise on high-impact interventions, with Nonlinear then doing the grunt work to turn those interventions into reality.

Even in this alternative model I agree that Nonlinear would probably benefit from someone with in-depth knowledge of AI safety as a full-time employee.

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-18T19:02:40.140Z · EA · GW

Definitely a useful discussion and I look forward to seeing you write more on all of this!

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-18T17:48:28.217Z · EA · GW

Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all!

Hah, perhaps I deserved this. I was just trying to indicate that there are people who both 'understand the theory' and hold that the <A, B1, B2> argument is important, which was a response to your "I find people do tend to very easily dismiss the view, but usually without really understanding how it works!" comment. I concede though that you weren't saying that of everyone.

All views in pop ethics have bonkers results, something that is widely agreed by population ethicists.

Yes, I understand that it's a matter of accepting the least bonkers result. Personally, I find the idea that it might be neutral to bring miserable lives into this world to be up there with some of the more bonkers results.

You may just write me off as a monster, but I quite like symmetries and I'm minded to accept a symmetrical person-affecting view

I don't write you off as a monster! We all have different intuitions about what is repugnant. It is useful to have (I think) reached a better understanding of both of our views.

My view goes something like:

  • I am not willing to concede that it might be neutral to bring terrible lives into this world, which means I reject necessitarianism and therefore feel the force of the <A, B1, B2> argument (as I also hold transitivity to be an important axiom). I'm not sure if I'm convinced by your argument that necessitarianism gets you out of the quandary (maybe it does, I would have to think about it more), but ultimately it doesn't matter to me as I reject necessitarianism anyway.
  • I note that MichaelStJules says that you can hold onto transitivity at the expense of IIA, but I don't think this does a whole lot for me. I am also concerned by the non-identity problem. Ultimately I'm not really convinced by arguably the least objectionable person-affecting view out there (you can see my top-level comment on this post), and this all leads me to having more credence in total utilitarianism than person-affecting views (which certainly wasn't always the case).
  • The 'bonkers result' with total utilitarianism is the repugnant conclusion which I don't find to be repugnant as I think "lives barely worth living" are actually pretty decent - they are worth living after all! But then there's the "very repugnant conclusion" which still somewhat bothers me. (EDIT: I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven't actually read through the paper yet to understand it completely).
  • So overall I'm still somewhat morally uncertain about population axiology, but probably have highest credence in total utilitarianism. In any case it is interesting to note that it has been argued that even minimal credence in total utilitarianism can justify acting as a total utilitarian, if one resolves moral uncertainty by maximising expected moral value.
  • So all in all I'm content to act as a total utilitarian, at least for now.

It was actually fairly useful to write that out.

Comment by jackmalde on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T13:08:20.219Z · EA · GW

I'm not sure what exact valuation research agendas should get, but I would argue that well-thought-through research agendas can be hugely beneficial in that they can reorient many researchers in high-impact directions, leading them to write papers on topics that are vastly more important than those they might otherwise have chosen.

I would argue an 'ingenious' paper written on an unimportant topic isn't anywhere near as good as a 'pretty good' paper written on a hugely important topic.

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-18T09:22:03.592Z · EA · GW

FWIW, I find people do tend to very easily dismiss the view, but usually without really understanding how it works!

I would just point out that Greaves and Broome probably understand how person-affecting views work and seem to find this <A, B1, B2> argument highly problematic. I used to hold a person-affecting view (genuinely I did) and when I came across this argument I found my faith in it severely tested. I haven't really been convinced by your defence (partly because I still find necessitarianism a bit bonkers - more on this below), but I may need to think about it more.

Should you create a child? Well, on necessitarianism, that depends solely on the effects this has on other, necessary people (and thus not the child). Okay, once you've had/are going to have a child, should you torture it. Um, what do you think?

Perhaps I'm misunderstanding necessitarianism but it doesn't seem hard to find bizarre implications of it (my first one was sloppy). What about the choice between:

A) Not having a child

B) Having a child you know, or have very strong reasons to expect, will live a dreadful life for reasons other than what you will do to the child once it is born (a fairly realistic scenario in my opinion, e.g. a terrible genetic defect)

Necessitarianism would seem to imply the two are equally permissible and I'm pretty comfortable in saying that they are not.

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-17T18:58:20.981Z · EA · GW

All told, I doubt the choice-set <A, B1, B2> is (metaphysically?) possible. This is important because its existence is taken as a strong objection to person-affecting views. I don't think the existence of choice sets like <A, B1, C1> - which is the ordinary non-identity problem - are nearly so problematic.

I think I agree with MichaelStJules here. I don't think that the "practical" possibility of a choice set like <A, B1, B2> is in fact important. The important thing I think is that we can conceive of such a choice set - it's not difficult to consider a scenario where I don't exist, a scenario where I exist with happiness +5, and a scenario where I exist with happiness +10. Broome's example is essentially a thought experiment, and thought experiments can be weird and unrealistic whilst still being very powerful (that's what makes them so fun!).

A standard person-affecting route is to say the only persons who matter are those who exist necessarily (i.e. under all circumstances under consideration)

I find this bizarre. So if you have a choice between (A) not having a baby or (B) having a baby and then immediately torturing it to death, we can ignore the "torture to death" aspect when making this decision because the child isn't "existing necessarily"? Maybe I'm misunderstanding, but I find any such person-affecting view very easy to dismiss.

Comment by jackmalde on Why do so few EAs and Rationalists have children? · 2021-03-17T06:04:54.735Z · EA · GW

As a deeper aside, it's odd that he defines meaning pretty much as life satisfaction / evaluation which is normally "how you evaluate your whole life". They obviously aren't the same to people if they give opposite rankings of countries. 

Yeah I think he may actually be referring to life satisfaction, but calling it meaning as a sort of informal short-hand. I'm not sure "meaning" is a very common wellbeing metric anyway.

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-17T05:59:17.879Z · EA · GW

I think about comparing pairs of outcomes by comparing how much better/worse they are for each person who exists in both, then adding up the individual differences

If you do this (I think) the problem remains? B1 and B2 have the same people but one of the people is better off in B2.

Therefore focusing on personal value, and adopting neutrality, we have:

  • A is as good as B1
  • A is as good as B2
  • By transitivity, B1 is as good as B2 (but this is clearly wrong from both a personal and impersonal point of view)

We've been talking about comparativism vs non-comparativism.

I think the non-comparativist has to adopt some sort of principle of neutrality (right?), and Greaves's (well, originally Broome's) example shows why neutrality violates some important constraint. Therefore this example should undermine non-comparativism. Joe actually mentions this argument briefly in his post (search for "Broome").

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-03-16T22:09:50.785Z · EA · GW

I think I agree with this (at least intuitively agree, I haven't given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.

I just don't really see a meaningful/important distinction between AL and CL, to be honest. Let's consider that AL is true, and also that cultivated meat happens to be the best intervention from both a short-termist and longtermist perspective.

A short-termist might say: I want cultivated meat so that people stop eating animals, reducing animal suffering now.

A longtermist might say: I want cultivated meat so that people stop eating animals and therefore develop moral concern for all animals. This will reduce the risk of us locking in persistent animal suffering in the future.

In this case, if AL is true, I think we should also be colloquial longtermists and justify cultivated meat in the way the longtermist does, as that would be the main reason cultivated meat is good. If evidence were to come out that stopping eating meat doesn't improve moral concern for animals, cultivated meat may no longer be great from a longtermist point of view - and it would be important to reorient based on this fact. In other words, I think AL should push us to strive to be colloquial longtermists.

Otherwise, thanks for the reading, I will have a look at some point!

Comment by jackmalde on Why do so few EAs and Rationalists have children? · 2021-03-16T21:49:15.575Z · EA · GW

To be fair to Kaj, they only said that one may rationally trade off happiness for meaning, not that meaning intrinsically matters more.

For example you could theoretically have both meaning and happiness as components of wellbeing, with both having diminishing marginal contribution to wellbeing. In this case it would likely be best to have some meaning and some happiness. If one was very happy, but with no meaning, one could rationally trade off happiness for meaning to improve overall wellbeing - and this wouldn't require thinking that meaning is intrinsically better than happiness.

Comment by jackmalde on Against neutrality about creating happy lives · 2021-03-16T21:35:13.532Z · EA · GW

If it's the same theme as in the slides you linked, then I don't think it responds to the claims above. Bader supposes 'better for' is a dyadic (two-place) relation between the two lives. Hilary is responding to arguments that suppose 'better for' is a triadic (three-place) relation: between two worlds and the person. I don't think I understand why one would want to formulate it the latter way. I'll take a look at Hilary's paper when it's available.

OK fair enough!

Re your last point: I'm not 100% sure what you're claiming in the other post because I found the diagrams hard to follow. You're stating a standard version of the non-identity problem, right? I don't think person-affecting views do face intransitivity, but that's a promissory note that, if I'm honest, I don't expect to get around to writing up until maybe 2022 at the earliest.

No it's not the non-identity problem. Disappointed my diagrams didn't work haha. Let me copy what Greaves says about this in section 5.2 of this paper:

5.2 The ‘Principle of equal existence’

If adding an extra person makes a state of affairs neither better nor worse, perhaps it results in a state of affairs that is equally as good as the original state of affairs. That is, one might try to capture the intuition of neutrality via the following principle:

The Principle of Equal Existence: Let A be any state of affairs. Let B be a state of affairs that is just like A, except that an additional person exists who does not exist in A. (In particular, all the people who exist in A also exist in B, and have the same well-being level in B as in A.) Then A and B are equally good.

As Broome (1994, 2004, pp.146-9) points out, however, this principle is all but self-contradictory. This is because there is more than one way of adding an extra person to A — one might add an extra person with well-being level 5, say (leading to state of affairs B1), or (instead) add the same extra person with well-being level 100 (leading to state of affairs B2) — and these ways are not all equally as good as one another. In our example, B2 is clearly better than B1; but the Principle of Equal Existence would require that B1 and A are equally good, and that A and B2 are equally good, in which case (by transitivity of ‘equally as good as’) B1 and B2 would have to be equally as good as one another. The Principle of Equal Existence therefore cannot be correct.
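For what it's worth, the logical structure of Broome's objection can be put in symbols. This is just my own sketch of the quoted argument, using ∼ for "equally as good as" and ≻ for "better than" (notation mine, not Greaves's or Broome's):

```latex
\begin{align}
A &\sim B_1 && \text{(Principle of Equal Existence: extra person added at well-being 5)} \\
A &\sim B_2 && \text{(Principle of Equal Existence: same person added at well-being 100)} \\
B_1 &\sim B_2 && \text{(from the two lines above, by transitivity of } \sim\text{)} \\
B_2 &\succ B_1 && \text{(well-being 100 is clearly better than 5 for that person)}
\end{align}
% The last two lines contradict each other, so the Principle of
% Equal Existence cannot be correct.
```

The contradiction between the third and fourth lines is exactly the "all but self-contradictory" point Broome makes.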