Posts

The case for strong longtermism - June 2021 update 2021-06-21T21:30:16.365Z
Possible misconceptions about (strong) longtermism 2021-03-09T17:58:54.851Z
Important Between-Cause Considerations: things every EA should know about 2021-01-28T19:56:31.730Z
What is a book that genuinely changed your life for the better? 2020-10-21T19:33:15.175Z
jackmalde's Shortform 2020-10-05T21:53:33.811Z
The problem with person-affecting views 2020-08-05T18:37:00.768Z
Are we neglecting education? Philosophy in schools as a longtermist area 2020-07-30T16:31:37.847Z
The 80,000 Hours podcast should host debates 2020-07-10T16:42:06.387Z

Comments

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-22T05:03:00.724Z · EA · GW

Thanks, I understand all that. I was confused when Khorton said:

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I wouldn't say the lottery increases the number of grantmakers who have spent significant time thinking; I think it in fact reduces it.

I do agree with you, however, when you say:

The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.


 

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-21T19:59:34.453Z · EA · GW

I think perhaps we agree then - if after significant research, you realize you can't beat an EA Fund, that seems like a reasonable fallback, but that should not be plan A.

Yeah that sounds about right to me.

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I still don't understand this. The lottery means that one grantmaker, or a small number of grantmakers, gets all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers, and indeed the number who spend time thinking about where to donate.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-20T18:33:58.678Z · EA · GW

I'm not sure I understand how the lottery increases the diversity of funding sources / increases the number of grantmakers if one or a small number of people end up winning the lottery. Wouldn't it actually reduce diversity / number of grantmakers? I might be missing something quite obvious here...

Reading this, it seems the justification for lotteries is that they not only save research time for the EA community as a whole, but also improve the allocation of the money in expectation. Basically, if you don't win you don't have to bother doing any research (so this time is saved for lots of people), and if you do win you have a strong incentive to do lots of research because you're giving away quite a lot of money (so the money should be given away with a great deal of careful thought behind it).

Of course if everyone in the EA community just gives to an EA Fund and knows that they would do so if they won the lottery, that would render both of the benefits of the lottery redundant. This shouldn't be the case however as A) not everyone gives to EA Funds - some people really research where they give, and B) people playing donor lotteries shouldn't be certain of where they would give the money if they won - the idea is that they would have to research. I see no reason why this research shouldn't lead to giving to an EA Fund.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-20T02:52:21.525Z · EA · GW

Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian.

Yes, that is true. For what it's worth, most people who have looked into population ethics reject average utilitarianism, as it has some extremely unintuitive implications, such as the "sadistic conclusion": one can make things better by bringing into existence people with terrible lives, as long as doing so raises the average wellbeing level, i.e. if existing people have even worse lives.
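
To make this concrete, here's a quick toy illustration of that point with made-up welfare numbers (my own example, not taken from the literature):

```python
# Toy example with invented welfare numbers: the average view can say that
# adding a person with a bad (but less bad) life makes things better, while
# the total view says it makes things worse.
existing = [-10, -10, -10]      # three existing people with terrible lives
new_person = -5                 # a new person whose life is also bad, but less so

def average(pop):
    return sum(pop) / len(pop)

def total(pop):
    return sum(pop)

with_new = existing + [new_person]

print(average(existing), average(with_new))  # -10.0 -> -8.75: better on the average view
print(total(existing), total(with_new))      # -30 -> -35: worse on the total view
```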

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-20T02:48:07.938Z · EA · GW

I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices.

To clarify, it's not that I don't think they would be "longtermist"; it's more that I think they may have to give to longtermist options that "seem intuitively good to a non-EA", e.g. an established organisation like MIRI or CHAI, rather than to longtermist options that may be better on the margin but seem a bit weirder at first glance, like "buying out some clever person so they have more time to do some research".

That pretty much gets to the heart of my suspected difference between Longview and LTFF: I think LTFF funds a lot of individuals who may struggle to get funding from elsewhere, whereas Longview tends to fund organisations that may struggle a lot less - although I do see on their website that they funded Paul Slovic (but he seems a distinguished academic, so may have been able to get funding elsewhere).

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-20T02:32:26.473Z · EA · GW

Yeah, you probably should - unless perhaps you think there are scale effects to giving, which make you want to punt on being able to give far more.

Worth noting that Patrick didn’t know he was going to give to a capital allocator when he entered the lottery, and of course still doesn’t. Ideally all donor lottery winners would examine the LTFF very carefully and honestly consider whether they think they can do better than it. People may be able to beat LTFF, but if someone isn’t giving to LTFF I would expect a clear justification as to why they think they can beat it.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-19T20:19:36.985Z · EA · GW

Would you mind linking some posts or articles assessing the expected value of the long-term future?

You're right to question this as it is an important consideration. The Global Priorities Institute has highlighted "The value of the future of humanity" in their research agenda (pages 10-13). Have a look at the "existing informal discussion" on pages 12 and 13, some of which argues that the expected value of the future is positive.

Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point

I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-19T20:00:59.120Z · EA · GW

I'm not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience so it's certainly an important consideration.

It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best.

This is a fair point to be honest!

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-18T03:51:53.243Z · EA · GW

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator.

If you enter a donor lottery, your expected donation amount is the same as if you hadn't entered. If you win the lottery, it will be worth the time to think more carefully about where to allocate the money than if you had never entered, as you're giving away a much larger amount. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same. More on all of this here.

So the point of the lottery really is just to think very carefully about where to give if you win, allowing you to have more expected impact than if you hadn't entered. It seems quite possible (and in my opinion highly likely) that such careful thinking would lead one to give to a capital allocator, as they have a great deal of expertise.
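
To make the expected-value claim concrete, here's a minimal sketch with purely illustrative numbers (the 30% "research uplift" is an assumption for the sake of the example, not a measured figure):

```python
# Purely illustrative numbers: a $5,000 donor considering a $500,000 donor lottery.
donation = 5_000
pot = 500_000
p_win = donation / pot                    # 0.01 chance of directing the whole pot

# Expected dollars directed are unchanged by entering:
expected_dollars = p_win * pot            # = 5,000, same as donating directly

# But a winner directing $500k can justify much more research, so assume
# (purely for illustration) that the research improves impact per dollar by 30%.
impact_per_dollar_default = 1.0
impact_per_dollar_researched = 1.3

ev_no_lottery = donation * impact_per_dollar_default       # 5,000 impact units
ev_lottery = p_win * pot * impact_per_dollar_researched    # 6,500 impact units

print(expected_dollars, ev_no_lottery, ev_lottery)
```

Same expected donation size either way, but more expected impact if the extra research really does improve the allocation.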

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-18T03:40:58.823Z · EA · GW

There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving.

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community expect a future with more happiness than suffering.

happiness levels in general should be roughly stable in the long run regardless of life circumstances.

Maybe, but if we can't make people happier we can always just make more happy people. This would be highly desirable if you hold a total view of population ethics.

Regarding averting extinction and option value, deciding to go extinct is far easier said than done.

This is a fair point. What I would say though is that extinction risk is only a very small subset of existential risk so desiring extinction doesn't necessarily mean you shouldn't want to reduce most forms of existential risk.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-17T20:09:53.574Z · EA · GW

B) the far future can be reasonably expected to have significantly more happiness than suffering

I think EAs who want to reduce x-risk generally do believe that the future will have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend will continue (e.g. Steven Pinker's The Better Angels of Our Nature). Of course life for farmed animals has got worse... but I think people believe we will eventually render factory farming redundant on account of cultivated meat.

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically, even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option. In the meantime it makes sense to reduce existential risk if we are uncertain about the sign of the value of the future, to leave open the possibility of an amazing future.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-17T19:43:43.737Z · EA · GW

I have to say I'm pretty glad you won the lottery as I like the way you’re thinking! I have a few thoughts which I put below. I’m posting here so others can respond, but I will also fill out your survey to provide my details as I would be happy to help further if you are interested in having my assistance!

TLDR: I think LTFF and PPF are the best options, but it’s very hard to say which is the better of the two.

  • Longview Philanthropy: it’s hard to judge this option without knowing more about their general-purpose fund - I didn’t see anything on this on their website at first glance. With my current knowledge, I would say this option isn’t as good as giving to LTFF. Longview is trying to attract existing philanthropists who may not identify as Effective Altruists, which will to some extent constrain what they can grant to, as granting to something too “weird” might put off philanthropists. Meanwhile LTFF isn’t as constrained in this way, so in theory giving to LTFF should be better, as LTFF can grant to really great opportunities that Longview would be afraid to. Also, LTFF appears to have more vetting resources than Longview and a very clear funding gap.
  • Effective Altruism Infrastructure Fund: it seems to me that if your goal is to maximise your positive impact on the long-term future, then giving to LTFF would be better. This is simply because EA is wider in scope than longtermism, so the Infrastructure Fund will naturally fund some things targeted at ‘global health and wellbeing’ opportunities which don’t have a long-term focus. If you look at LTFF’s Fund Scope, you will see that LTFF funds opportunities to directly reduce existential risks, but also opportunities to build infrastructure for people working on longtermist projects and promoting long-term thinking - so LTFF also has a “growth” mindset if that's what you're interested in.
  • Patient Philanthropy Fund: personally I’m super excited about this but it’s very difficult to say which is better out of PPF or LTFF. Founders Pledge’s report is very positive about investing to give, but even they say in their report that “giving to investment-like giving opportunities could be a good alternative to investing to give”. I think that which is better of investment-like giving opportunities or investing to give is very much an open, and difficult, question. You do say that “even if the general idea of investing to give later isn’t the best use of these funds, donating to help get the PPF off the ground could still be”. I agree with this and like your idea of “supporting it with at least a portion of the donor-lottery funds”. How much exactly to give is hard to say.
  • Invest the money and wait a few years: do you have good reason to believe that you/the EA community will be in a much better position in a few years? Why? If it’s just generally “we learn more over time” then why would 'in a few years' be the golden period? If 'learning over time' is your motivation, PPF would perhaps be a better option as the fund managers will very carefully think about when this golden period is, as well as probably invest better than CEA.
  • Pay someone to help me decide: doubtful this would be the best option. LTFF basically does this for free. If you find someone / a team who you think is better than the LTFF grant team then fine, but I’m sceptical you will. LTFF has been doing this for a while which has let them develop a track record, develop processes, learn from mistakes etc. so I would think LTFF is a safer and better option.

So overall my view would be that LTFF and PPF are the best options, but it's very hard to say which is the better of the two. I like the idea of giving a portion to each - but I don't really think diversification like this has much philosophical backing, so if you do have a hunch that one option is better than the other, and won't be subject to significant diminishing returns, then you may want to just give it all to that option.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-17T18:37:45.766Z · EA · GW

I'm not saying that reducing S-risks isn't a great thing to do, nor that it would reduce happiness; I'm just saying that it isn't clear that a focus on reducing S-risks rather than on reducing existential risk is justified if one values reducing suffering and increasing happiness equally.

Comment by jackmalde on What would you do if you had half a million dollars? · 2021-07-17T18:08:28.103Z · EA · GW

My understanding is that Brian Tomasik has a suffering-focused view of ethics, in that he sees reducing suffering as inherently more important than increasing happiness - even if the 'magnitude' of the happiness and suffering is the same.

If one holds a more symmetric view where suffering and happiness are both equally important it isn't clear how useful his donation recommendations are.

Comment by jackmalde on What are the long term consequences of poverty alleviation? · 2021-07-12T22:14:25.857Z · EA · GW

Probably depends on how you're reducing poverty...and how long-term your "long-term" is. Something like removing trade restrictions is likely to have very different long-term effects than distributing bednets. Even then I really don't have good answers for you on the nature of these differences.

You might want to check out the persistence studies literature, for example work by Nathan Nunn, whom Will MacAskill references in this talk. This may not precisely align with what you're asking for, but Nunn has studies finding, for example, that:

  • Countries that adopted the plough have lower female labour force participation today
  • Greater raiding for slaves in Africa led to lower social trust and GDP per capita today
  • Medieval trading in Indian towns led to lower levels of Hindu-Muslim conflict today

Persistence studies seems a very interesting field and one I want to delve into more.

Comment by jackmalde on What should we call the other problem of cluelessness? · 2021-07-03T18:24:00.551Z · EA · GW

Could also go for tractable and intractable cluelessness?

Also I wonder if we should be distinguishing between empirical and moral cluelessness - with the former being about claims about consequences and the latter about fundamental ethical claims.

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-27T10:13:07.712Z · EA · GW

Thanks Robert. I've never seen this breakdown of cluelessness before, and it could be a useful way for further research to define the issue.

The Global Priorities Institute raised the modelling of cluelessness in their research agenda and I'm looking forward to further work on this. If interested, see below for the two research questions related to cluelessness in the GPI research agenda. I have a feeling that there is still quite a bit of research that could be conducted in this area.

------------------

Forecasting the long-term effects of our actions often requires us to make difficult comparisons between complex and messy bodies of competing evidence, a situation Greaves (2016) calls “complex cluelessness”. We must also reckon with our own incomplete awareness, that is, the likelihood that the long-run future will be shaped by events we’ve never considered and perhaps can’t fully imagine. What is the appropriate response to this sort of epistemic situation? For instance, does rationality require us to adopt precise subjective probabilities concerning the very-long-run effects of our actions, imprecise probabilities (and if so, how imprecise?), or some other sort of doxastic state entirely?

Faced with the task of comparing actions in terms of expected value, it often seems that the agent is ‘clueless’: that is, that the available empirical and theoretical evidence simply supplies too thin a basis for guiding decisions in any principled way (Lenman 2000; Greaves 2016; Mogensen 2020) (INFORMAL: Tomasik 2013; Askell 2018). How is this situation best modelled, and what is the rational way of making decisions when in this predicament? Does cluelessness systematically favour some types of action over others?

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-23T03:35:15.941Z · EA · GW

Yeah I meant ruling out negative EV in a representor may be slightly extreme, but I’m not really sure - I need to read more.

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-22T20:23:43.819Z · EA · GW

Thanks, I really haven't given sufficient thought to the cluelessness section which seems the most novel and tricky. Fanaticism is probably just as important, if not more so, but is also easier to get one's head around.  

I agree with you in your other comment though that the following seems to imply that the authors are not "complexly clueless" about AI safety:

For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.

I guess it is probably the case that if you’re saying it’s unreasonable for a probability function associated with a very small positive expected value to be contained in your representor, you’ll also say that a probability function associated with a negative expected value isn't contained in it either. This does seem to me to be a slightly extreme view.
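
For what it's worth, here's a rough sketch of how I understand the representor point (toy numbers of my own, not the authors'): a representor is a set of probability functions, and the quoted claim is that no admissible member assigns AI-risk work an expected value as low as 0.001 lives per $100 - which, if granted, rules out negative expected values a fortiori.

```python
# Toy sketch of imprecise credences: represent each admissible probability
# function just by the expected "lives saved per $100" it implies. Invented numbers.
representor = [0.05, 0.5, 2.0, 10.0]

def ruled_out(ev_value, rep):
    # The quoted claim: every member of the representor assigns a strictly higher EV.
    return all(ev > ev_value for ev in rep)

print(ruled_out(0.001, representor))   # True: 0.001 lives per $100 is excluded
print(ruled_out(-1.0, representor))    # True: so any negative EV is excluded too
```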

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-21T21:33:10.316Z · EA · GW

On their (new) view on what objections against strong longtermism are strongest - I think that this may be the most useful update in the paper. I think it is very important to pinpoint the strongest objections to a thesis, to focus further research. 

It is interesting that the authors essentially appear to have dismissed the intractability objection. It isn’t clear if they no longer think this is a valid objection, or if they just don’t think it is as strong as the other objections they highlight this time around. I would like to ask them about this in an AMA.

The authors concede that there needs to be further research to tackle these new objections. Overall, I got the impression that the authors are still “strong longtermists”, but are perhaps less confident in the longtermist thesis than they were when they wrote the first version of the paper - something else I would like to ask them about.

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-21T21:31:29.920Z · EA · GW

On addressing cluelessness - for the most part I agree with the authors’ views, which includes the view that there needs to be further research in this area.

I do find it odd, however, that they attempt to counter the worry of ‘simple cluelessness’ but not that of ‘complex cluelessness’, i.e. the possibility that there could be semi-foreseeable unintended consequences of longtermist interventions that leave us ultimately uncertain about the sign of the expected-value assessment of these interventions. Maybe they see this as obviously not an issue... but I would have appreciated some thoughts on this.

Comment by jackmalde on The case for strong longtermism - June 2021 update · 2021-06-21T21:31:09.891Z · EA · GW

On the new definition - as far as I can tell it does pretty much the same job as the old definition, but is clearer and more precise, bar a small nitpick I have...

One deviation is from “a wide class of decision situations” to “the most important decision situations facing agents today”. As far as I can tell, Greaves and MacAskill don’t actually narrow the set of decision situations they argue ASL applies to in the new paper. Instead, I suspect the motivation for this change in wording was because “wide” is quite imprecise and subjective (Greaves concedes this in her 80,000 Hours interview). Therefore, instead of categorising the set of decision situations as wide, which was supposed to communicate the important decision-relevance of ASL, the authors instead describe these same decision situations as “the most important faced by agents today” on account of the fact that they have particularly great significance for the well-being of both present and future sentient beings. In doing so they still communicate the important decision-relevance of ASL, whilst being slightly more precise.

The authors also change from “fairly small subset of options whose ex ante effects on the very long-run future are best” to “options that are near-best for the far future”. It is interesting that they don’t specify “ex ante” best - as if it were simply obvious that that is what they mean by “best”… (maybe you can tell I’m not super impressed by this change… unless I’m missing something?).

Otherwise, splitting the definition into two conditions seems to have just made it easier to understand.

Comment by jackmalde on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T21:25:24.372Z · EA · GW

You mention that some EAs oppose progress / think that it is bad. I might be wrong, but I think these people only "oppose" progress insofar as they think x-risk reduction from safety-based investment is even better value on the margin. So it's not that they think progress is bad in itself; it's just that they think speeding up progress incurs a very large opportunity cost. Bostrom's 2003 paper outlines the general reasoning why many EAs think x-risk reduction is more important than quick technological development.

Also, I think most EAs interested in x-risk reduction would say that they're not really in a Pascal's mugging as the reduction in probability of an existential catastrophe occurring that can be achieved isn't astronomically small. This is partly because x-risk reduction is so neglected that there's still a lot of low-hanging fruit.

I'm not super certain on either of the points above but it's the sense I've gotten from the community.

Comment by jackmalde on EA Survey 2020: Demographics · 2021-05-15T21:08:13.312Z · EA · GW

I would absolutely expect EAs to differ in various ways to the general population. The fact that a greater proportion of EAs are vegan is totally expected, and I can understand the computer science stat as well given how important AI is in EA at the moment. 

However when it comes to sexuality it isn't clear to me why the EA population should differ. It may not be very important to understand why, but then again the reason why could be quite interesting and help us understand what draws people to EA in the first place. For example perhaps LGBTQ+ people are more prone to activism/trying to improve the world because they find themselves to be discriminated against, and this means they are more open to EA. If so, this might indicate that outreach to existing activists might be high value. Of course this is complete conjecture and I'm not actually sure if it's worth digging further (I asked the question mostly out of curiosity). 

Comment by jackmalde on EA Survey 2020: Demographics · 2021-05-13T16:33:46.341Z · EA · GW

Thanks. Any particular reason why you decided to do unguided self-description?

You could include the regular options and an "other (please specify)" option too. That might give people choice, reduce time required for analysis, and make comparisons to general population surveys easier.

Comment by jackmalde on EA Survey 2020: Demographics · 2021-05-13T07:26:48.158Z · EA · GW

We observed extremely strong divergence across gender categories. 76.9% of responses from male participants identified as straight/heterosexual, while only 48.6% of female responses identified as such. 

The majority of females don't identify as heterosexual? Am I the only one who finds this super interesting? I mean in the UK around 2% of females in the wider population identify as LGB.

Even the male heterosexual figure is surprisingly low. Any sociologists or others want to chime in here?

Comment by jackmalde on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-07T18:52:44.633Z · EA · GW

I think that's fair but I also think that non-neglectedness is actually bad for two reasons:

  1. Diminishing returns (which may not be the case if people are solving the problem poorly)
  2. Crowdedness, meaning it's harder to change direction even if people are solving the problem poorly (although this point is really about tractability, so one needs to be careful about not double-counting when doing ITN).

I'm thinking number 2 could be quite relevant in this case. Admittedly it's quite relevant for any EA intervention that involves systemic change, but I get the impression that other systemic change interventions may be even higher in importance.

Comment by jackmalde on Concerns with ACE's Recent Behavior · 2021-04-20T05:39:34.272Z · EA · GW

The only thing of interest here is what sort of compromise ACE wanted. What CARE said in response is not of immediate interest, and there's certainly no need to actually share the messages themselves.

Perhaps you can understand why one might come away from this conversation thinking that ACE tried to deplatform the speaker? To me at least it feels hard to interpret "find a compromise" any other way.

Comment by jackmalde on Concerns with ACE's Recent Behavior · 2021-04-17T15:20:49.190Z · EA · GW

Thanks for writing this comment as I think you make some good points and I would like people who disagree with Hypatia to speak up rather than stay silent.

Having said that, I do have a few critical thoughts on your comment. 

Your main issue seems to be the claim that these harms are linked, but you just respond by only saying how you feel reading the quote, which isn't a particularly valuable approach.

I don’t think this was Hypatia’s main issue. Quoting Hypatia directly, they imply the following are the main issues:

  • The language used in the statement makes it hard to interpret and assess factually
  • It made bold claims with little evidence
  • It recommended readers spend time going through resources of questionable value

Someone called Encompass a hate group (which as a side note, it definitely is not). The Anima Executive Director in question liked this comment.

You bring this up a few times in your comment. Personally I give the ED the benefit of the doubt here because the comment in question also said “what does this have to do with helping animals" which is a point the ED makes elsewhere in the thread, so it’s possible that they were agreeing with this part of the comment as opposed to the ‘hate group’ part. I can’t be sure of course, but I highly doubt the ED genuinely agrees that Encompass is a hate group given their other comments in the thread seeming fairly respectful of Encompass including “it's not really about animal advocacy, it's about racial injustice and how animal advocates can help with that. That's admirable of course, I just don't think it's relevant to this group”.

This was a red-flag to ACE (and probably should have been to many people), since the ED had both liked some pretty inflammatory / harmful statements, and was speaking on a topic they clearly had both very strong and controversial views on, regarding which they had previously picked fights on.

You seem to imply that others should have withdrawn from the conference too, or at least that they should have considered it? This all gets to the heart of the issue about free speech and cancel culture. Who decides what’s acceptable and what isn’t? When is expressing a different point of view just that, vs. "picking a fight"? Is it bad to hold "strong and controversial views"?

People were certainly affected by the ED’s comments, but people are affected by all sorts of comments that we don’t, and probably shouldn't, cancel people for. People will be affected by your comment, and people will be affected by my comment. When talking about contentious issues, people will be affected. It’s unavoidable unless we shut down debate altogether. You imply that the ED's actions were beyond the pale, but we need to realise that this is an inherently subjective viewpoint and it's clearly the case that not everyone agrees. So whilst ACE had the right to withdraw, I'm not sure we can imply that others should have too.

Comment by jackmalde on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T07:57:21.214Z · EA · GW

I don't find your comment to have much in the way of argument as to why it might be bad if papers like this one become more widespread. What are you actually worried would happen? This isn't super clear to me at the moment.

I agree a paper that just says "we should ignore the repugnant conclusion" without saying anything else isn't very helpful, but this paper does at least gather reasons why the repugnant conclusion may be on shaky ground which seems somewhat useful to me.

Comment by jackmalde on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-11T17:14:40.264Z · EA · GW

My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.

Comment by jackmalde on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T09:38:01.936Z · EA · GW

Thanks for writing this Michael, I would love to see more research in this area. 

Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that that the outcome would be better achieved by expanding moral circles along other dimensions (e.g., by doing concrete wild animal welfare work, advocating for caring about all sentient beings, or advocating for caring about future artificial sentient beings).[2] 

This is definitely an important point.

This is very speculative, but part of me wonders if the best thing to advocate for is (impartial) utilitarianism. This would, if done successfully, expand moral circles across all relevant boundaries including farm animals, wild animals and artificial sentience, and future beings. Advocacy for utilitarianism would naturally include "examples", such as ending factory farming, so it wouldn't have to be entirely removed from talk of farmed animals. I'm quite uncertain if such advocacy would be effective (or even be good in expectation), but it is perhaps an option to consider.

(Of course this all assumes that utilitarianism is true/the best moral theory we currently have).

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-06T18:40:49.831Z · EA · GW

To be honest, I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was realising there seems to be an issue of complex cluelessness in the first place - that we can't really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when they're trying to do the most good.

Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat" on when doing these things. In other words, I'm not having a child to do the most good, I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness because I'm trying to do the most good and really thinking hard about how to do so.

I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T11:04:55.956Z · EA · GW

So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).

I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row, on either side of the aisle, etc.).

I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other - otherwise they fall prey to paralysis. Admittedly I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.

EDIT: To be honest I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was realising there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences.

FWIW, I also think other work of Greaves has been very useful. And I think most people - though not everyone - who've thought about the topic think the cluelessness stuff is much more useful than I think it is.

For me, Greaves' work on cluelessness just highlighted a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for shorttermists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to this problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T08:00:56.726Z · EA · GW

Your critique of the conception example might be fair actually. I do think it's possible to think up circumstances of genuine 'simple cluelessness' though where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative. 

For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF. 

However I think the reason why Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffer from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness for the reasons I give in the post, so I think Greaves' work has been useful.

Comment by jackmalde on Possible misconceptions about (strong) longtermism · 2021-04-05T07:32:48.956Z · EA · GW

Thanks for all your comments Michael, and thanks for recommending this post to others!

I have read through your comments and there is certainly a lot of interesting stuff to think about there. I hope to respond but I might not be able to do that in the very near future.  

I'd suggest editing the post to put the misconceptions in the headings in quote marks

Great suggestion thanks, I have done that.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:54:11.812Z · EA · GW

OK thanks I think that is clearer now.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:37:51.594Z · EA · GW

Thanks yeah, I saw this section of the paper after I posted my original comment. I might be wrong but I don't think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper. 

So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T01:24:31.239Z · EA · GW

OK, that's clearer, although I'm not immediately sure why the paper would have led to the following update:

I somewhat updated my views regarding: 

  • how likely such a lock-in is
    • and in particular how likely it is that a state that looks like it might be a lock-in would actually be a lock-in
      • ...

I think Tarsney implies that institutional reform is less likely to be a true lock-in, but he doesn't really back this up with much argument. He just implies that this point is somewhat obvious. Under this assumption, I can understand why his model would lead to the following update:

  • ...
    • ...
      • and in particular how much the epistemic challenge to longtermism might undermine a focus on this type of potential lock-in in particular

In other words, if Tarsney had engaged in a discussion about why institutional change isn't actually likely to be stable/persistent, providing object-level reasons for why (which may involve disagreeing with Greaves and MacAskill's points), I think I too would update away from thinking institutional change is that important, but I don't think he really engages in this discussion.

I should say that I haven't properly read through the whole paper (I have mainly relied on watching the video and skimming through the paper), so it's possible I'm missing some things.

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T16:23:45.129Z · EA · GW

In case anyone is interested, Rob Wiblin will be interviewing Tarsney on the 80,000 Hours podcast next week. Rob is accepting question suggestions on Facebook (I think you can submit questions to Rob on Twitter or by email too).

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T15:50:02.238Z · EA · GW

I agree with you that Tarsney hasn't been clear, but I think you've got it the wrong way around (please tell me if you think I'm wrong though). The abstract to the paper says:

But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

These two sentences seem to say different things, as you have outlined. The first implies that you need fanaticism, whilst the second implies you need either fanaticism or non-obvious but plausible empirical views. Counter to you I think the former is actually correct.

Tarsney initially runs his model using point estimates for the parameters and concludes that the case for longtermism is "plausible-but-uncertain" if we assume that humanity will eventually spread to the stars, and "extremely demanding" if we don't make that assumption. Therefore longtermism doesn't really "survive the epistemic challenge" when using point estimates.

Tarsney says however that "The ideal Bayesian approach would be to treat all the model parameters as random variables rather than point estimates". So if we're Bayesians we can pretty much ignore the conclusions so far and everything is still to play for.

When Tarsney does incorporate uncertainty for all parameters, the expectational superiority of longtermism becomes clear because "the potential upside of longtermist interventions is so enormous". In other words the use of random variables allows for fanaticism to take over and demonstrates the superiority of longtermism. 

So it seems to me that it really is fanaticism that is doing the work here. Would be interested to hear your thoughts.
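
To illustrate what I mean by fanaticism doing the work, here's a toy sketch (emphatically not Tarsney's actual model - the distribution and payoff are invented) of how, once a parameter is treated as a random variable, a low-probability optimistic worldview can supply almost all of the expected value:

```python
# Toy illustration (not Tarsney's model): with parameter uncertainty, a rare
# optimistic scenario can dominate the expectation.
import random

random.seed(0)
value_if_intervention_matters = 1e9        # invented payoff, in arbitrary value units

def sample_probability():
    # 99% of worldviews: the intervention almost certainly doesn't matter.
    # 1% of worldviews: it has a much higher chance of mattering.
    return 1e-4 if random.random() < 0.01 else 1e-8

samples = [sample_probability() * value_if_intervention_matters for _ in range(200_000)]
expected_value = sum(samples) / len(samples)
tail_share = sum(s for s in samples if s > 1_000) / sum(samples)

print(f"expected value: ~{expected_value:.0f} units")
print(f"share of the EV coming from the optimistic 1% of worldviews: {tail_share:.0%}")
```

Nearly all of the expected value comes from the small-probability tail, which is the sense in which fanaticism seems to be doing the work here.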

EDIT: On a closer look at his paper, Tarsney does say that it isn't clear how Pascalian the superiority of longtermism is, because of the "tremendous room for reasonable disagreement about the relevant probabilities". Perhaps this is what you're getting at, Michael?

Comment by jackmalde on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T14:48:27.366Z · EA · GW

This indeed seems like an interesting implication of Tarsney's model, and indeed updates me towards placing a bit less emphasis on reducing non-extinction existential risks - e.g., reducing the chance of lock-in of a bad governmental system or set of values. 

Surely "lock-in" implies stability and persistence?

Greaves and MacAskill introduce the concept of the 'non-extinction attractor state' to capture interventions that can achieve the persistence Tarsney says is so important, but that don't rely on extinction to do so. 

This includes institutional reform:

But once such institutions were created, they might persist indefinitely. Political institutions often change as a result of conflict or competition with other states. For strong world governments, this consideration would not apply (Caplan 2008). In the past, governments have also often changed as a result of civil war or internal revolution. However, advancing technology might make that far less likely for a future world government: modern and future surveillance technologies could prevent insurrection, and AI-controlled police and armies could be controlled by the leaders of the government, thereby removing the possibility of a military coup (Caplan 2008; Smith 2014).

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:36:47.023Z · EA · GW

I haven't read that post but will definitely have a look, thanks.

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:35:28.665Z · EA · GW

Yeah this all makes sense, thanks.

Comment by jackmalde on Formalising the "Washing Out Hypothesis" · 2021-04-03T08:33:56.047Z · EA · GW

Thanks for this Michael, I'd be very interested to read this post when you publish it. Especially as my career has taken a (potentially temporary) turn in the general direction of speeding up progress, rather than towards safety. I still feel that Ben Todd and co are probably right, but I want to read more.

Also, relevant part from Greaves and MacAskill's paper:

Just how much of an improvement [speeding up progress] amounts to depends, however, on the shape of the progress curve. In a discrete-time model, the benefit of advancing progress by one time period (assuming that at the end of history, one thereby gets one additional time period spent in the “end state”) is equal to the duration of that period multiplied by the difference between the amounts of value that are contained in the first and last periods. Therefore, if value per unit time is set to plateau off at a relatively modest level, then the gains from advancing progress are correspondingly modest. Similarly, if value per unit time eventually rises to a level enormously higher than that of today, then the gains from advancing progress are correspondingly enormous.
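
If I'm reading that passage right, the claim can be written as follows (my notation, not the authors'): letting $\Delta t$ be the duration of one period, and $\dot{v}_{\text{first}}$ and $\dot{v}_{\text{last}}$ be the value per unit time in the first and last periods respectively,

$$\text{benefit of advancing progress by one period} \;\approx\; \Delta t \times \left(\dot{v}_{\text{last}} - \dot{v}_{\text{first}}\right)$$

so the gains are modest if value per unit time plateaus near today's level, and enormous if it eventually rises far above it.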

Comment by jackmalde on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T09:51:32.348Z · EA · GW

I haven't actually read the Dasgupta review, only that first link you shared. Overall I think EAs probably don't disagree that much with what Dasgupta is saying, but probably focus on other things due to neglectedness. Even if economics doesn't account for nature enough, there are still loads of people shouting about the negative effect we have on nature, and this review was actually commissioned by the UK Government, so they are clearly aware of the problem. It's also hardly news that GDP isn't a perfect measure. Compare this to things like biorisk and risk from unaligned AI, which important people generally don't think about.

Otherwise a few things jumped out to me from that first link:

Biological diversity is, in fact, declining faster now than at any time in our history. Since 1970, there has been on average almost a 70% drop in the populations of mammals, birds, fish, reptiles, and amphibians. Around one million animal and plant species – almost a quarter of the global total – are believed to be threatened with extinction.

Beyond its intrinsic – and incalculable – worth, biodiversity provides fundamental natural “dividends” that nourish and protect us: from basic sustenance through fish stocks or insects that pollinate crops, to soil regeneration, and water and flooding regulation. Not to mention the cultural and spiritual values that enrich our lives.

Dasgupta doesn't appear to have factored in animal welfare. Fish "sustaining" us probably isn't a great thing (unless perhaps some people literally don't have any other options), and a reduction in wild animal populations could actually be good if those animals live net-negative lives (which is quite possible).

The review also refers to the 'intrinsic' value of biodiversity. I'd imagine EAs mostly reject this, thinking biodiversity has only instrumental value.

Thank you for raising this though, I'm hoping to read the report (or maybe a good summary!) and it's possible that the EA community should too. If natural capital is indeed important in sustaining economic development then it is an important consideration from a long-term perspective.

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T14:00:09.195Z · EA · GW

OK thanks that makes sense

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T17:27:01.657Z · EA · GW

It's worth noting that I only believe this under the assumption that the individual donors know about some specific opportunities that the fund managers are unaware of, or perhaps have significant worldview differences with the fund managers.

The Long-Term Future Fund can only give to people who apply for funding though (right?), whereas someone who wins a donor lottery can give literally anywhere. This seems like another reason why a donor lottery winner might give better?

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:07:45.286Z · EA · GW

Thanks! I'm actually not surprised that the quality of grant applications might be increasing e.g. due to people learning more about what makes for a good grant.

I have a follow-on question. Do you think that the increase in the size of the grant requests is justified? Is this because people are being more ambitious in what they want to do?

Comment by jackmalde on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T14:46:57.093Z · EA · GW

We’ve recently changed parts of the fund’s infrastructure and composition, and it’s possible that these changes have caused us to unintentionally lower our standards for funding. My personal sense is that this isn’t the case

Can you say more about why the changes might have led to lower standards for funding? It sounds like you think there are some at least somewhat plausible reasons why this might be the case.

Can you also say more about why you actually don't think the standards have fallen despite these possible reasons?