Posts

To fund research, or not to fund research, that is the question 2022-04-24T18:46:04.144Z
An uncomfortable thought experiment for anti-speciesist non-vegans 2022-04-18T09:17:01.831Z
What is a neutral life like? 2022-04-16T15:57:45.486Z
New research programme: The Leverhulme Centre for Life in the Universe 2022-03-22T12:56:29.332Z
A guided cause prioritisation flowchart 2022-01-03T20:32:42.526Z
Earning to give may be the best option for patient EAs 2021-12-26T10:26:41.011Z
The case for strong longtermism - June 2021 update 2021-06-21T21:30:16.365Z
Possible misconceptions about (strong) longtermism 2021-03-09T17:58:54.851Z
Important Between-Cause Considerations: things every EA should know about 2021-01-28T19:56:31.730Z
What is a book that genuinely changed your life for the better? 2020-10-21T19:33:15.175Z
JackM's Shortform 2020-10-05T21:53:33.811Z
The problem with person-affecting views 2020-08-05T18:37:00.768Z
Are we neglecting education? Philosophy in schools as a longtermist area 2020-07-30T16:31:37.847Z
The 80,000 Hours podcast should host debates 2020-07-10T16:42:06.387Z

Comments

Comment by Jack Malde (jackmalde) on What reason is there NOT to accept Pascal's Wager? · 2022-08-05T15:25:54.787Z · EA · GW

It does seem to me, if you think the general reasoning of the wager is sound, that the most rational thing to do is to pick one of the cards and hope for the best, as opposed to not picking any of them.

You could, for example, pick Christianity or Islam, but also regularly pray to the “one true god”, whoever he may be, and respectfully ask for forgiveness if your faith is misplaced. This might be a way of minimising the chances of going to hell, although there could be even better ways on further reflection.

Having said all that, I'm an atheist and never pray. But I'm not sure that's necessarily the best way to be…

Comment by Jack Malde (jackmalde) on On the Vulnerable World Hypothesis · 2022-08-01T15:25:18.375Z · EA · GW

I looked through your post very quickly (and wrote this very quickly) so I may have missed things, but my main critical thoughts are around the “costs probably outweigh the benefits” argument as I don’t think you have adequately considered the benefits.

Surveillance is really shit, most people would accept that, but perhaps even more shit is the destruction of humanity or humanity entering a really bad persistent state (e.g. AI torturing humans for the rest of time). If we really want to avoid these existential catastrophes a solution that limits free thought may easily be worth it.

You do briefly cover that surveillance could lead to an existential catastrophe in itself, and I'd like to see a more in-depth exploration of this. But even so (and this might sound very weird) there are better and worse existential catastrophes. For example, a 1984-type scenario, whilst really shit, is probably better than AI torturing us for the rest of time. So I do think some weighing up of risks and their badness is warranted here.

This criticism doesn’t cover your other points e.g. that there may be more effective ways of reducing risks. I actually think there are a lot of valid points here that need more exploration. I’m just saying that I think your CBA is incomplete.

Comment by Jack Malde (jackmalde) on Longtermism as Effective Altruism · 2022-07-29T05:02:46.493Z · EA · GW

I'm not sure who is saying longtermism is an alternative to EA, but that seems a bit nonsensical to me: longtermism is essentially the view that we should focus on positively influencing the long-term future to do the most good. It's therefore quite clearly a school of thought within EA.

Also, I have a minor(ish) bone to pick with your claim that "Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations." Will MacAskill defines longtermism as follows:

Longtermism is the view that positively influencing the longterm future is a key moral priority of our time.

There's nothing in this definition about expected value or discounting. I'll plug a post I wrote which explains how one can reach a longtermist conclusion using a decision theory other than maximising expected value, just as one may still reach a longtermist conclusion while discounting future lives.
 

Comment by Jack Malde (jackmalde) on Confused about "making people happy" vs. "making happy people" · 2022-07-17T10:34:17.188Z · EA · GW
  • Presumably you're not neutral about creating someone who you know will live a dreadful life? If you're not, it seems there's no fundamental barrier to comparing existence and non-existence, and analogously it seems you should not be neutral about creating someone you know will live a great life. You can get around this by introducing an asymmetry, but that seems ad hoc.
  • I used to hold a person-affecting view, but I found the transitivity argument against being neutral about making happy people quite compelling. It's similar to the money pump argument, I think. Worth noting that you can get around breaking transitivity by giving up the independence of irrelevant alternatives instead, but that may not be much of an improvement.
  • If it's a person-affecting intuition that makes you neutral about creating happy lives, you can run into some problems, most famously the non-identity problem. The non-identity problem implies, for example, that there's nothing wrong with climate change making our planet a hellscape, because this won't make lives worse for anyone in particular, as climate change itself will change the identities of who comes into existence. Most people agree the non-identity problem is just that...a problem, because not caring about climate change seems a bit silly.
Comment by Jack Malde (jackmalde) on Why Effective Altruists Should Put a Higher Priority on Funding Academic Research · 2022-07-01T19:01:01.660Z · EA · GW

Not sure how useful this is but I tried to develop a model to help us decide between carrying out our best existing interventions and carrying out research into potentially better interventions: https://forum.effectivealtruism.org/posts/jp3yaQczFWk7yiNXz/to-fund-research-or-not-to-fund-research-that-is-the

My key takeaway was that the longer the timescale we care about doing good over, the better research is relative to carrying out existing interventions. This is because there is a longer period over which we would benefit from a better intervention.

As someone with long timescales I’m therefore very on board with more research!

Comment by Jack Malde (jackmalde) on What should I ask Alan Hájek, philosopher of probability, Bayesianism, expected value and counterfactuals? · 2022-07-01T18:29:48.336Z · EA · GW
  • Would he pay the mugger in a Pascal's mugging? Generally does he think acting fanatically is an issue?
  • How does he think we should set a prior for the question of whether or not we are living at the most influential time? Uniform prior or otherwise?
  • What are his key heuristics for doing good philosophy, and how does he spot bad philosophical arguments?


 

Comment by Jack Malde (jackmalde) on What should I ask Alan Hájek, philosopher of probability, Bayesianism, expected value and counterfactuals? · 2022-07-01T18:16:38.225Z · EA · GW

As well as the St. Petersburg Paradox, I'd be interested in his thoughts on the Pasadena Game.

Comment by Jack Malde (jackmalde) on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-30T07:23:35.745Z · EA · GW

Thanks for writing this, Michael! I think economists are too quick to jump to the conclusion that economic growth will mean more happiness. This is a really clear and useful summary of where we currently are. I do have a few half-baked critical thoughts:

  • Easterlin's long-run view is still much too short: most people in EA, and I assume the progress studies community, don't discount the future much, if at all. This means they will care about timescales of millions and even billions of years. The compounding nature of economic growth means that increased growth could mean people far in the future become much richer than they otherwise would have been. So instead of considering whether growth has made us happier over the past few decades, it might be more helpful to compare our happiness now to what it likely was like hundreds or even thousands of years ago. Obviously we don't have subjective wellbeing data stretching back that far, but we can still consider that other metrics, such as infant mortality and disease burden, have dropped significantly over this time, and that part of the reason for this is economic progress. Further growth could reap similar benefits in the future. Generally I'm worried that interventions such as those carried out by StrongMinds are simply far too shorttermist, only improving the wellbeing of current generations. I'd like to see more research on how to improve wellbeing over an undiscounted future. I have already shared this with you, but I think that further research into innovative mental health treatments could be high impact.
  • Scale effects over long time periods are very plausible: it seems quite plausible to me that a hunter-gatherer would rate themselves as an 8/10 because they simply don't know how good life can be. As Fin said, I'm not sure your argument against scale effects is convincing, and I'd like to see more work on this. Also, I used to be on the side of life satisfaction data, but issues such as scale effects have tilted me towards thinking that more affective measures such as happiness might be more useful, as they could be less prone to such biases. More research is likely needed here though.
  • You cannot ignore population ethics when it comes to these questions: you mention at the end of your post that the Easterlin paradox is only relevant for average wellbeing. This means it's most useful for those who subscribe to an average utilitarian population axiology, or similar. There are serious problems with average utilitarianism (e.g. the sadistic conclusion). I suppose the Easterlin paradox may be relevant for those with a person-affecting view, but it is unlikely to be that informative to a total utilitarian, as economic growth likely increases the number of people who will live. I think one should be explicit if a paradox only bites under certain population axiologies.
Comment by Jack Malde (jackmalde) on Critiques of EA that I want to read · 2022-06-26T13:00:18.510Z · EA · GW

When it comes to comparisons of values between PAVs and total views, I don't really see much of a problem, as I'm not sure the comparison is actually inter-theoretic. Both PAVs and total views are additive, consequentialist views in which welfare is what has intrinsic value. It's just that some things count under a total view that don't under (many) PAVs, i.e. the value of a new life. So accounting for both PAVs and a total view in a moral uncertainty framework doesn't seem too much of a problem to me.

What about genuine inter-theoretic comparisons, e.g. between deontology and consequentialism? Here I'm less sure, but generally I'm inclined to say there still isn't a big issue. Instead of choosing specific values, we can choose 'categories' of value. Consider a meteor hurtling towards Earth, destined to wipe us all out. Under a total view we might say it would be "astronomically bad" to let the meteor wipe us out. Under a deontological view we might say it is "neutral", as we aren't actually doing anything wrong by letting the meteor wipe us out (if you have a view that invokes an act/omission distinction). So what I'm doing here is assigning categories such as "astronomically bad", "very bad", "bad", "neutral", "good" etc. to acts under different ethical views - which seems easy enough. We can then use these categories in our moral uncertainty reasoning. This doesn't seem that arbitrary to me, although I accept it may still run into issues.

Comment by Jack Malde (jackmalde) on Critiques of EA that I want to read · 2022-06-24T17:44:40.294Z · EA · GW

I'm looking forward to reading these critiques! A few thoughts from me on the person-affecting views critique:

  1. Most people, myself included, find existence non-comparativism a bit bonkers. This is because most people accept that if you could create someone who you knew with certainty would live a dreadful life, you shouldn't create them, or at least that it would be better if you didn't (all other things equal). So when you say that existence non-comparativism is highly plausible, I'm not so sure that is true...
  2. Arguing that existence non-comparativism and the person-affecting principle (PAP) are plausible isn't enough to argue for a person-affecting view (PAV), because many people reject PAVs on account of their unpalatable conclusions (which can signal that the underlying motivations for PAVs are flawed). My understanding is that the most common objection to PAVs is that they run into the non-identity problem, implying, for example, that there's nothing wrong with climate change making our planet a hellscape, because this won't make lives worse for anyone in particular, as climate change itself will change the identities of who comes into existence. Most people agree the non-identity problem is just that...a problem, because not caring about climate change seems a bit stupid. This acts against the plausibility of narrow person-affecting views.
    • Similarly, if we know people are going to exist in the future, it just seems obvious to most that it would be a good thing, as opposed to a neutral thing, to take measures to improve the future (conditional on the fact that people will exist).
  3. It has been argued that moral uncertainty over population axiology pushes one towards actions endorsed by a total view even if one's credence in these theories is low. This assumes one uses an expected moral value approach to dealing with moral uncertainty (a toy illustration follows this list). This would in turn imply that having non-trivial credence in a narrow PAV isn't really a problem for longtermists. So I think you have to do one of the following:
    • Argue why this Greaves/Ord paper has flawed reasoning
    • Argue that we can have zero or virtually zero credence in total views
    • Argue why an expected moral value approach isn't appropriate for dealing with moral uncertainty (this is probably your best shot...)
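
To illustrate the expected moral value reasoning with a toy calculation (the credences and values below are made up purely for intuition, not taken from the Greaves/Ord paper): suppose you have credence $p = 0.1$ in a total view, under which an extinction-risk-reducing action is worth $V_{\text{total}} = 10^9$ (it safeguards a vast number of future lives), and credence $1 - p = 0.9$ in a narrow PAV, under which the same action is worth only $V_{\text{PAV}} = 10^2$ (only benefits to presently existing people count). Then

$$\mathbb{E}[\text{moral value}] = p \, V_{\text{total}} + (1 - p) \, V_{\text{PAV}} = 0.1 \times 10^{9} + 0.9 \times 10^{2} \approx 10^{8}.$$

Even with only 10% credence in the total view, the astronomically large stakes under that view dominate the expectation - which is why this approach is said to push one towards totalist conclusions, and why attacking the approach itself (the last bullet above) is probably the most promising line.
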
Comment by Jack Malde (jackmalde) on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-22T07:23:42.499Z · EA · GW

I'm confused by the fact that Eliezer's post was posted on April Fools' Day. To what extent does that contribute to conscious exaggeration on his part?

Comment by Jack Malde (jackmalde) on Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism) · 2022-06-21T20:08:43.183Z · EA · GW

My comment on your previous post should have been saved for this one. I copy the questions below:

  • What do you think is the best approach to achieving existential security and how confident are you on this?
  • Which chapter/part of "What We Owe The Future" do you think most deviates from the EA mainstream?
  • In what way(s) would you change the focus of the EA longtermist community if you could?
  • Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
  • Would you rather see marginal EA resources go towards reducing specific existential risks or boosting economic growth/tech progress?
  • The Future Fund website highlights immigration reform, slowing down demographic decline, and innovative educational experiments to empower young people with exceptional potential as effective ways to boost economic growth. How confident are you that these are the most effective ways to boost growth?
  • Where would you donate to most improve the long-term future?
    • Would you rather give to the Long-Term Future Fund or the Patient Philanthropy Fund?
  • Do you think you differ from most longtermist EAs on the "most influential century" debate and, if so, why?
  • How important do you think Moral Circle Expansion (MCE) is and what do you think are the most promising ways to achieve it?
  • What do you think is the best objection to longtermism/strong longtermism?
    • Fanaticism? Cluelessness? Arbitrariness?
  • How do you think most human lives today compare to the zero wellbeing level?
Comment by Jack Malde (jackmalde) on Longtermist slogans that need to be retired · 2022-06-21T20:00:42.856Z · EA · GW

Well, I'd say that funding lead elimination isn't longtermist, all other things equal. It sounds as if FTX's motivation for funding it was community health / PR reasons, in which case it may have longtermist benefits through those channels.

Whether longtermists should be patient or not is a tricky, nuanced question which I am unsure about, but I would say I’m more open to patience than most.

Comment by Jack Malde (jackmalde) on Critiques of EA that I want to read · 2022-06-20T20:22:58.398Z · EA · GW

Broad longtermist interventions don't seem so robustly positive to me, in case the additional future capacity is used to do things that are in expectation bad or of deeply uncertain value according to person-affecting views, which is plausible if these views have relatively low representation in the future.

Fair enough. I shouldn't really have said these broad interventions are robust to person-affecting views because that is admittedly very unclear. I do find these broad interventions to be robustly positive overall, though, as I think we will get closer to the 'correct' population axiology over time.

I'm admittedly unsure if a "correct" axiology even exists, but I do think that continued research can uncover potential objections to different axiologies, allowing us to make a more 'informed' decision.
 

Comment by Jack Malde (jackmalde) on Critiques of EA that I want to read · 2022-06-20T20:13:12.406Z · EA · GW

AI safety's focus would probably shift significantly, too, and some of it may already be of questionable value on person-affecting views today. I'm not an expert here, though.

I've heard the claim that optimal approaches to AI safety may depend on one's ethical views, but I've never really seen a clear explanation of how or why. I'd like to see a write-up of this.

Granted, I'm not as read up on AI safety as many, but I've always had the impression that the AI safety problem really is "how can we make sure AI is aligned to human interests?", which seems pretty robust to any ethical view. The only argument against this that I can think of is that human interests themselves could be flawed. If humans don't care about, say, animals or artificial sentience, then it wouldn't be good enough to have AI aligned to human interests - we would also need to expand humanity's moral circle or ensure that those who create AGI have an expanded moral circle.

Comment by Jack Malde (jackmalde) on Critiques of EA that I want to read · 2022-06-20T12:05:48.075Z · EA · GW

And, if there was a convincing version of a person-affecting view, it probably would change a fair amount of longtermist prioritization.

This is an interesting question in itself that I would love someone to explore in more detail. I don't think it's an obviously true statement. To give a few counterpoints:

  • People have justified work on x-risk by thinking only about the effects an existential catastrophe would have on people alive today (see here, here and here).
  • The EA longtermist movement has a significant focus on AI risk, which I think stands up under a person-affecting view, given that it is a significant s-risk.
  • Broad longtermist approaches such as investing for the future, global priorities research and movement building seem pretty robust to plausible person-affecting views.

I’d really love to see a strong defense of person-affecting views, or a formulation of a person-affecting view that tries to address critiques made of them.

I'd point out this attempt, which was well explained in a forum post. There is also this, which I haven't really engaged with much but seems relevant. My sense is that the philosophical community has been trying to formulate a convincing person-affecting view and has, in the eyes of most EAs, failed. Maybe there is more work to be done though.

Comment by Jack Malde (jackmalde) on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T14:56:09.118Z · EA · GW

Ok, although it's probably worth noting that climate change is generally not considered to be an existential risk, so I'm not sure considerations of emissions/net zero are all that relevant here. I think population change is more relevant in terms of impacts on economic growth / tech stagnation, which in turn should have an impact on existential risk.

Comment by Jack Malde (jackmalde) on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T11:59:18.550Z · EA · GW

To a donor who would like to save lives in the present without worsening the long-term future, however, we may just have reduced moral cluelessness enough for them to feel comfortable donating bednets.

I have to admit I find this slightly bizarre. Such a person would accept that we can improve/worsen the far future in expectation and that the future has moral value. At the same time, such a person wouldn't actually care about improving the far future; they would simply not want to worsen it. I struggle to understand the logic of such a view.

Comment by Jack Malde (jackmalde) on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T11:29:20.686Z · EA · GW

I appreciate this attempt - I do think trying to understand the impact of reduced mortality on population sizes is pretty key (considering this paper and this paper together implies that population size could be quite crucial from a longtermist perspective). I'm not quite sure you've given this specific point enough attention though. You seem to acknowledge that whilst population should increase in the short term, it could cause a population decline in several generations - but you don't really discuss how to weigh these two points against each other, unless I missed it?

Comment by Jack Malde (jackmalde) on What YouTube channels do you watch? · 2022-06-02T20:30:15.710Z · EA · GW

I guess you can easily collect answers to multiple questions through a form. You can also see correlations, e.g. whether people who watch a certain YT channel are also more likely to listen to a certain podcast. Plus, upvotes on the Forum can be strong/weak, which you may not want, and people may simply upvote existing options rather than adding new ones, biasing towards what was put up early.

Comment by Jack Malde (jackmalde) on Longtermist slogans that need to be retired · 2022-05-16T21:32:28.427Z · EA · GW

Founders Pledge's Investing to Give report is an accessible resource on this.

I wrote a short overview here.

Comment by Jack Malde (jackmalde) on Longtermist slogans that need to be retired · 2022-05-09T08:10:45.568Z · EA · GW

I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won't hit diminishing returns anytime soon. I think of it as the “GiveDirectly of longtermism”.

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-04T18:19:07.931Z · EA · GW

Certainly agree there is something weird there! 

Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!

Comment by Jack Malde (jackmalde) on Should we buy coal mines? · 2022-05-04T08:11:50.098Z · EA · GW

I’ve read your overview and skimmed the rest. You say there will probably be better ways to limit coal production or consumption, but I was under the impression this wasn’t the main motivation for buying a coal mine. I thought the main motivation was to ensure we have the energy resources to be able to rebuild society in case we hit some sort of catastrophe. Limiting coal production and consumption was just an added bonus. Am I wrong?

EDIT: I appreciate you do argue the coal may stay in the ground even if we don't buy the mine, which is very relevant to my question

EDIT2: just realised limiting consumption is important to preserve energy stores, but limiting production perhaps not

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-03T16:33:51.735Z · EA · GW

Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?

A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.

but why should the locus of agency be the individual? Seems pretty arbitrary.

Hmm, well, aren't we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical.

If you agree that voting is fanatical, do you also agree that activism is fanatical?

Pretty much, yes. To clarify: I have never said I'm against acting fanatically. I think the arguments for acting fanatically, particularly the one in this paper, are very strong. That said, something like a Pascal's mugging does seem a bit ridiculous to me (but I'm open to the possibility that I should hand over the money!).

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-03T08:27:42.220Z · EA · GW

That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!

When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life, I think one can probably have a greater than 50% belief that they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have a far lower probability, but it seems fair to consider doing these things over a longer period of time, as that is typically what people do (and what someone who chooses a longtermist career essentially does).

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-02T21:12:52.046Z · EA · GW

Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-02T20:13:01.497Z · EA · GW

Hmm I do think it's fairly fanatical. To quote this summary:

For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.

The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.

Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the reduction in x-risk probability that any one person can achieve is very small. I raised this point on Neel's post.

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-02T18:57:24.274Z · EA · GW

Yeah that's fair. As I said I'm not entirely sure on the motivation point. 

I think in practice EAs are quite fanatical, but only up to a point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regard to the most extreme conclusion...

Comment by Jack Malde (jackmalde) on To fund research, or not to fund research, that is the question · 2022-05-02T15:42:18.886Z · EA · GW

Hi Michael, thanks for your reply! I apologise I didn’t check with you before saying that you have ruled out research a priori. I will put a note to say that this is inaccurate. Prioritising based on self-reports of wellbeing does preclude funding research, but I’m glad to hear that you may be open to assessing research in the future.

Sorry to hear you struggled to follow my analysis. I think I may have overcomplicated things, but it did help me to work through things in my own head! I haven't really looked at the value of information (VOI) literature.

In a nutshell, my model implies that the longer the time period you are willing to consider, the better further research is (all other things equal). This is because if you find a better intervention, you can fund it for the rest of time. So even a very slightly better intervention can deliver vastly more good than funding our best existing intervention. This effect is likely to dominate the opportunity cost of research (i.e. not improving mental health now), provided you're considering a long enough time period.
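
To make the logic concrete, here is a minimal toy sketch in Python (the numbers and function names are illustrative assumptions of my own, not the actual parameters of the model in my post):

```python
# Toy comparison: fund the best existing intervention every period, or spend
# the first period's budget on research that might find a slightly better
# intervention to fund thereafter. All numbers are illustrative assumptions.

def value_of_funding_existing(benefit_per_period: float, horizon: int) -> float:
    """Total good from funding the current best intervention in every period."""
    return benefit_per_period * horizon

def value_of_funding_research(benefit_per_period: float, horizon: int,
                              p_success: float, improvement: float) -> float:
    """Spend period 0 on research; from period 1 onwards, fund the improved
    intervention if research succeeded, otherwise the existing one."""
    expected_per_period = (p_success * benefit_per_period * (1 + improvement)
                           + (1 - p_success) * benefit_per_period)
    return expected_per_period * (horizon - 1)

for horizon in [5, 50, 500, 5000]:
    existing = value_of_funding_existing(1.0, horizon)
    research = value_of_funding_research(1.0, horizon, p_success=0.2, improvement=0.1)
    print(f"horizon={horizon}: existing={existing:.1f}, research={research:.1f}")

# With these made-up numbers (a 20% chance of a 10% better intervention, i.e. a
# 2% expected boost), research loses at short horizons, breaks even at around 51
# periods, and wins by an ever-growing margin after that: the expected boost
# accrues in every remaining period, while the cost is one period's forgone output.
```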

My tentative view is that someone who doesn't discount the future should almost definitely prefer funding research to funding existing interventions. So I personally would give to top research institutes over giving to StrongMinds. One might ask when one would ever want to stop giving to research. My model implies this might be the case when we're very sceptical we can do better than our best intervention, when we think the likely improvement we can achieve is negligible, when for some reason we're only interested in considering a short time period (e.g. perhaps we're near heat death), or some constellation of these factors. I don't think any of these are likely to be the case now, so I would fund research.

Hopefully that makes some sense! I doubt I’m saying anything ground-breaking here though…

Comment by Jack Malde (jackmalde) on Consider Changing Your Forum Username to Your Real Name · 2022-05-02T12:08:08.154Z · EA · GW

FYI you can contact the EA Forum team to get your profile hidden from search engines (see here).

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-02T10:55:41.385Z · EA · GW

Yes I disagree with b) although it's a nuanced disagreement.

I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.

What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.

My point about the long reflection was that when we reach this period it will be easier to tell the fanatics from the non-fanatics.

Comment by Jack Malde (jackmalde) on Consider Changing Your Forum Username to Your Real Name · 2022-05-01T16:02:32.370Z · EA · GW

I’ve reversed an earlier decision and have settled on using my real name. Wish me luck!

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-01T15:56:23.658Z · EA · GW

I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.

I also enjoy the video summaries. It would be great if video and written summaries were made as standard for GPI papers. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries, and I'm sure you could get funding to pay people to do them.

As a non-academic I don't think I can assist with writing any summaries, but if a bottleneck is administrative resource, let me know and I may be happy to volunteer some time to help with this.

Comment by Jack Malde (jackmalde) on Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill) · 2022-05-01T15:42:39.716Z · EA · GW

you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)

As you said in your previous comment, we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all, Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he says he is setting aside the possibility of infinitely many people). So reducing x-risk and trying to achieve existential security seems to me to be consistent with fanatical reasoning.

It's interesting to consider what we would do if we actually achieved existential security and entered the long reflection. If we take fanaticism seriously at that point (and I think we will) we may well go for infinite value. It's worth noting though that certain approaches to going for infinite value will probably dominate other approaches by having a higher probability of success. So we'd probably decide on the most promising possibility and run with that. If I had to guess I'd say we'd look into creating infinitely many digital people with extremely high levels of utility.

Comment by Jack Malde (jackmalde) on Effective altruism’s odd attitude to mental health · 2022-04-29T22:19:54.617Z · EA · GW

I think the point Caleb is making is that your EAG London story doesn't necessarily show the tension that you think it does. And for what it's worth I'm sceptical this tension is very widespread.

Comment by Jack Malde (jackmalde) on Effective altruism’s odd attitude to mental health · 2022-04-29T11:33:26.786Z · EA · GW

I don't know for sure that we have prioritised mental health over other productivity interventions, although we may have. Effective Altruism Coaching doesn't have a sole mental health focus (also see here for its 2020 annual review), but I think that is just one person doing the coaching, so it may not be representative of wider productivity work in EA.

It's worth noting that it's plausible that mental health may be proportionally more of a problem within EA than outside it, as EAs may worry more about the state of the world and whether they're having an impact, etc. - which may in turn require novel resources/approaches to treating mental health problems that aren't necessarily widely available elsewhere.

Other things like how best to work productively may be well covered by existing resources and so may not need EA-specific materials.

Comment by Jack Malde (jackmalde) on Effective altruism’s odd attitude to mental health · 2022-04-29T10:35:35.133Z · EA · GW

Pretty much this. I don’t think discussions on improving mental health in the EA community are motivated by improving wellbeing, but instead by allowing us to be as effective as a community as possible. Poor mental health is a huge drain on productivity.

If the focus on EA community mental health were based on direct wellbeing benefits I would be quite shocked. We're a fairly small community and it's likely to be far more cost-effective to improve the mental health of people living in lower-income countries (as HLI's StrongMinds recommendation suggests).

Comment by Jack Malde (jackmalde) on Why the expected numbers of farmed animals in the far future might be huge · 2022-04-25T19:57:47.117Z · EA · GW

Seems relevant: SpaceX: Can meat be grown in space?

A test to see if we can grow cultivated meat in space.

Comment by Jack Malde (jackmalde) on My GWWC donations: Switching from long- to near-termist opportunities? · 2022-04-24T05:35:27.627Z · EA · GW

Sorry it’s not entirely clear to me if you think good longtermist giving opportunities have dried up, or if you think good opportunities remain but your concern is solely about the optics of giving to them.

On the optics point, I would note that you don't have to give all of your donations to the same thing. If you're worried about having to tell people about your giving to the LTFF, you can also give a portion of your donations (even if small) to global health, allowing you to tell them about that instead, or tell them about both.

You could even just give everything to longtermism yet still choose to talk to people about how great it can be to give to global health. This may feel a bit dishonest to you, though, so you may not want to.

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-22T23:14:13.850Z · EA · GW

I'm just making an observation that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future. They will want to use AI to help fulfil this purpose.

Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-22T23:02:52.247Z · EA · GW

Am I missing something basic here?

No you're not missing anything that I can see. When OP says:

Does longtermism mean ignoring current suffering until the heat death of the universe?

I think they're really asking:

Does longtermism mean ignoring current suffering until near the heat death of the universe?

Certainly the closer an impartial altruist is to heat death the less forward-looking the altruist needs to be.

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-22T22:55:26.786Z · EA · GW

I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-22T20:57:16.301Z · EA · GW

I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?

Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects this suffering to be swamped by increases in total welfare. Remember, one of the founding texts of longtermism says we should be maximising the probability that space colonisation will occur. Space colonisation will probably increase total suffering over the future simply because there will be so many more beings in total.

When OP says:

D. Does longtermism mean ignoring current suffering until the heat death of the universe?

My answer is "pretty much yes". (Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation. Of course a (strong) longtermist can simply say "So what? I'm still maximising undiscounted utility over time" (see my comment here).

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-22T04:52:54.347Z · EA · GW

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely.

I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost". 

Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-21T19:13:57.555Z · EA · GW

even though I just about entirely buy the longtermist thesis

If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare?

Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-21T12:25:12.580Z · EA · GW

Thanks for writing this! I like the analogy to humans. I did something like this recently with respect to dietary choice. My thought experiment specified that these humans had to be mentally challenged so that they have similar capacities for welfare to non-human animals, which isn't something you have done here but which I think is probably important. I do note that you have been conservative in terms of the number of humans, however.

Your analogy has given me pause for thought!

Comment by Jack Malde (jackmalde) on How much current animal suffering does longtermism let us ignore? · 2022-04-21T11:57:03.957Z · EA · GW

There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.

I would just note that, if this happens, we’ve done longtermism very badly. Remember longtermism is (usually) motivated by maximising expected undiscounted welfare over the rest of time.

Right now, longtermists think they are improving the far future in expectation. When we actually get to this far future it should (in expectation) be better than it otherwise would have been if we hadn't done what we're doing now, i.e. reducing existential risk. So yes, longtermists may always think about the future rather than the present (until the future is no longer vast in expectation), but that doesn't mean we will never reap the gains of having done so.

EDIT: I may have misunderstood your point here

Comment by Jack Malde (jackmalde) on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T18:03:53.686Z · EA · GW

Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.

Comment by Jack Malde (jackmalde) on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T08:49:23.713Z · EA · GW

I don't think it's negative either, although, as has been pointed out, many interpret it as meaning that one has a high discount rate, which can be misleading.