Posts

How much more important is work in USA over UK? 2021-01-02T21:21:43.867Z
What is a book that genuinely changed your life for the better? 2020-10-21T19:33:15.175Z
jackmalde's Shortform 2020-10-05T21:53:33.811Z
The problem with person-affecting views 2020-08-05T18:37:00.768Z
Are we neglecting education? Philosophy in schools as a longtermist area 2020-07-30T16:31:37.847Z
The 80,000 Hours podcast should host debates 2020-07-10T16:42:06.387Z

Comments

Comment by jackmalde on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T08:31:37.349Z · EA · GW

I haven't finished listening to the podcast episode yet, but I picked up on a few of these inaccuracies and was disappointed to hear them. As you say, I would be surprised if Ajeya isn't aware of these things. Anyone who has read Greaves and MacAskill's paper The Case for Strong Longtermism should know that longtermism doesn't necessarily mean a focus on reducing x-risk, and that it is at least plausible that longtermism is not conditional on a total utilitarian population axiology*.

However, given that many people listening to the show might not have read that paper, I feel these inaccuracies matter and might mislead people. If longtermism is robust to different views (or at least if this is plausible), then it is very important for EAs to be aware of this. More generally, I think EAs should be aware of anything that might bear on the choice between cause areas, given the potentially vast differences in value between them.

*Even the importance of reducing extinction risk isn't conditional on total utilitarianism. For example, it could be vastly important under average utilitarianism if we expect the future to be good, conditional on humans not going extinct. That said, I'm not sure how many people take average utilitarianism seriously.

Comment by jackmalde on What is going on in the world? · 2021-01-20T11:06:21.696Z · EA · GW

Thanks for doing this! 

One suggestion - I think it would be cool to have more links included so that people can read more if they're interested. 

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-19T22:42:13.934Z · EA · GW

OK, I think that's probably fine as long as you are very clear on the scope and the fact that some cause areas that you 'funnel out' may in fact still be very important through other lenses.

It sounds like you might be doing something quite similar to Charity Entrepreneurship so you may (or may not) want to collaborate with them in some way. At the very least they might be interested in the outcome of your research.

Speaking of CE, they are looking to incubate a non-profit that will work full-time on evaluating cause areas. I actually think it might be good if you have a somewhat narrow focus, because I'd imagine their organisation will inevitably end up taking quite a wide focus.

Comment by jackmalde on Big List of Cause Candidates · 2021-01-19T19:45:45.564Z · EA · GW

Hey, fair enough. I think overall you and Nuno are right. I did write in my original post that it was all pretty speculative anyway. I regret if I was too defensive.

I think those proposals sound good. I think they aim to achieve something different from what I was going for, though: I was mostly going for a 'broadly promote positive values' angle at a societal level, which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I think the latter approach could also be high value.

Comment by jackmalde on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-17T21:25:01.776Z · EA · GW

My moral intuitions say that there isn't really an objective way that I should act; however, I do think there are states of the world that are objectively better than others, and that this betterness ordering is determined by whatever the best version of utilitarianism is.

So it is indeed better if I don’t give my family special treatment, but I’m not actually obligated to. There’s no rule in my opinion which says “you must make the world as good as possible”.

This is how I have always interpreted utilitarianism. Not having studied philosophy formally I’m not sure if this is a common view or if it is seen as stupid, but I feel it allows me to give my family some special treatment whilst also thinking utilitarianism is in some way “right”.

Comment by jackmalde on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-17T21:00:23.889Z · EA · GW

Thanks for this, an interesting proposal.

Do you have a view on how this approach might compare with having a strong credence in utilitarianism and smaller but non-zero credences in other moral theories, and then acting in a way that factors in moral uncertainty, perhaps by maximising expected choiceworthiness (MEC)?

I might be off the mark, but it seems there are some similarities in that MEC can avoid extreme situations and be pluralist, although it might be a bit more prescriptive than you would like.
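
For what it's worth, here is a minimal sketch of the MEC calculation I have in mind; the credences and choiceworthiness values are made up purely for illustration:

```latex
% Expected choiceworthiness of an action A over moral theories T_1,...,T_n,
% where C(T_i) is one's credence in theory T_i and CW_i(A) is the
% choiceworthiness that T_i assigns to A:
EC(A) = \sum_{i=1}^{n} C(T_i) \, CW_i(A)

% e.g. with credence 0.7 in utilitarianism and 0.3 in a deontological
% theory, an action scoring 10 on the first and -20 on the second gets:
EC(A) = 0.7 \times 10 + 0.3 \times (-20) = 1
```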

Comment by jackmalde on RISC at UChicago is Seeking Ideas for Improving Animal Welfare · 2021-01-16T16:23:09.858Z · EA · GW

Hey Jennifer, that’s great, fish welfare is very neglected so it might be quite interesting to them. 

I don’t know of others planning to submit and to be honest I wasn’t planning to submit one myself. I’m not really very deep into EAA research myself. One idea could be to  set up a google sheet to collect submission ideas, including submitted wording and who is submitting the idea. This could prevent duplications of submissions. I’m unsure if it would definitely be worth the effort, but it could be.

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-16T11:39:52.362Z · EA · GW

Thanks for this, that’s an interesting idea. It certainly seems like a useful approach to bracket possibly confounding intuitions!

Comment by jackmalde on The problem with person-affecting views · 2021-01-16T11:26:26.457Z · EA · GW

Thanks, this is an interesting example!

I think if you are a pure consequentialist then it is just a fact of the matter that there is a goodness ordering of the three options, and IIA seems compelling again. Perhaps IIA breaks down a bit when one strays from pure consequentialism; I'd like to think about that a bit more.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-16T11:18:40.595Z · EA · GW

This cube approach is interesting, but my instinctive response is to agree with MichaelA: if someone doesn't think influencing the long-run future is tractable, then they will probably just want to filter out longtermist cause areas entirely from the very start and focus on shorttermist areas. I'm not sure comparing areas/volumes between shorttermist and longtermist areas will be something they will be that interested in doing. My feeling is the cube approach may be overcomplicating things.

If I were doing this myself, or starting an ’exploratory altruism’ organisation similar to the one Charity Entrepreneurship is thinking about starting, I would probably take one of the following two approaches:

  1. Similar to 80,000 Hours, just decide what the most important class of cause areas is to focus on at the current margin and ignore everything else. 80K has decided to focus on longtermist cause areas and has outlined clearly why they are doing this (their key ideas page has a decent overview). So people know what they are getting from 80K, and 80K can freely assume totalism, the vastness of the future etc. when carrying out their research. The drawback of this approach is that it alienates a lot of people, as evidenced by the founding of a new careers org, 'Probably Good'.
     
  2. Try to please everyone by carrying out multiple distinct funnelling exercises, one for each class of cause area (say near-term human welfare, near-term animal welfare, x-risk, non-x-risk longtermist areas). Each funnelling exercise would make different foundational assumptions according to that cause area. People could then just choose which funnelling exercise to pay attention to and, in theory, everybody wins. The drawback to this approach, which 80K would likely point out, is that it probably means spending a lot of time focusing on cause areas that you don't think are actually that high value, which may just be very inefficient.

I think this decision is tough, but on balance I would probably go for option 1 and would focus on longtermist cause areas, in part because shorttermist areas have historically been given much more thought so there is probably less meaningful progress that can be made there.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-14T17:43:52.698Z · EA · GW

Absolutely, every stage is important.

And reading back what I wrote, it was perhaps a little too strong. I would quite happily adopt MichaelA's suggested paragraph in place of my penultimate one!

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-14T08:40:18.562Z · EA · GW

Yes, I agree. I actually think this model could work well if we do multiple funnelling exercises, one for each type of cause area.

The only reason I was perhaps slightly forceful in my comment is that, from this post and the previous post (Big List of Cause Candidates), I have got the impression that there is going to be a single funnelling exercise that aims to directly compare shorttermist vs longtermist areas, including on their 'scale'.

Nuno - I don't want to give the impression that I fundamentally dislike your idea, because I don't; I just think some care has to be taken.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-14T08:11:53.918Z · EA · GW

You're right. "Personal" wasn't the best choice of word, I'm going to blame my 11pm brain again.

I sort of think you've restated my position, but worded it somewhat better, so thanks for that.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-14T08:05:37.026Z · EA · GW

Yeah absolutely, this was my tired 11pm brain. I meant to refer to extinction risk whenever I said x-risk. I'll edit.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-14T08:03:46.459Z · EA · GW

That's all fair. I would endorse that rewording (and potential change of approach).

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-13T23:13:15.025Z · EA · GW

My shorter (and less strong) comment concerns this:

Implementation stage: It has been determined that it's most likely a good idea to start a charity to work in a specific cause. Now the hard work begins of actually doing it.

I don't believe that every cause area naturally lends itself to starting a charity. In fact, many don't. For example, if one wants to estimate the philanthropic discount rate more accurately, one probably doesn't need to start a charity to do so. Instead, one may want to do an Econ PhD. 

So I think viewing the end goal as charity incubation may not be helpful, and in fact may be harmful if it results in EA dismissing particular cause areas that don't perform well within this lens, but may be high-impact through other lenses.

Comment by jackmalde on A Funnel for Cause Candidates · 2021-01-13T23:05:00.853Z · EA · GW

Thanks for this. I have two initial thoughts which I'll separate into different comments (one long, one short - guess which one this is).

OK so firstly, I think in your evaluation phase things get really tricky, and more tricky than you've indicated. Basically, comparing a shorttermist cause area to a longtermist cause area in terms of scale seems to me to be insanely hard, and I don't think heuristics or CEAs are going to help much, if at all. I think it really depends on which side you fall on with regards to some tough, and often contentious, foundational questions that organisations such as GPI are trying to tackle. To give just a few examples:

  • How seriously do you take the problem of complex cluelessness and how should one respond to it? If you think it's a problem you might then give every 'GiveWell-type' cause area a scale rating of "Who the hell knows", funnel them all out immediately, and then simply consider cause areas that arguably don't run into the cluelessness problem - perhaps longtermist cause areas such as values spreading or x-risk reduction (I acknowledge this is just one way to respond to the problem of cluelessness)
  • Do you think influencing the long-run future is intractable? If so you may want to funnel out all longtermist cause areas (possibly not including extinction-risk cause areas)
  • Are you convinced by strong longtermism? If so you may just want to funnel out all 'short-termist' cause areas because they're just a distraction
  • Do you hold a totalist view of population ethics? If you don't you may want to funnel out all extinction-risk reduction charities

Basically my point is, depending on answers to questions such as the above, you may think a longtermist cause area is WAY better than a shorttermist cause area, or vice versa, and we haven't even gone near a CEA (which I'm not sure would help matters). I can't emphasise that 'WAY' enough.

To some significant extent, I just think choice of cause area is quite personal. Some people are longtermists, some aren't. Some people think it's good to reduce x-risk, some don't etc. The question for you, if you're trying to apply a funnel to all cause areas, is how do you deal with this issue?

Most research organisations deal with this issue by not trying to apply a funnel to all cause areas in the first place. Instead they focus on a particular type of cause area and prioritise within that e.g. ACE focuses on near-term animal suffering, and GiveWell focuses on disease. Therefore, for example, GiveWell can make certain assumptions about those who are interested in their work - that they aren't worried by complex cluelessness, that they probably aren't (strong) longtermists etc. They can then proceed on this basis. A notable exception may be 80,000 Hours, which has funnelled from all cause areas, landing on just longtermist ones and being clear about why.

So part of me thinks your project may be doomed from the start unless you're very clear about where you stand on these key foundational questions. Even in that case there's a difficulty, in that anyone who disagrees with your stance on these foundational questions would then have the right to throw out all of your funnelling work and just do their own. (EDIT: I no longer really endorse this paragraph, see comments below).

I would be interested to hear your thoughts on all of this.

Comment by jackmalde on RISC at UChicago is Seeking Ideas for Improving Animal Welfare · 2021-01-13T07:58:30.200Z · EA · GW

I wonder, is it worth cooperating with each other to ensure a decent number of the most promising 'EA approved' ideas get submitted? 

It isn't really clear if a single person/team is allowed to submit more than one idea. If not, then cooperation could be particularly useful.

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-12T18:41:09.280Z · EA · GW

Chris Meacham? I'm starstruck!

In all seriousness, good point, I think you're right but I would be interested to see what Arden/Michelle say in response.

"Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory. I can see why one might worry about this."

I thought I'd take this opportunity to ask you: do you hold the person-affecting view you outlined in the paper and, if so, do you then in fact see ensuring extinction as obligatory?

Comment by jackmalde on Big List of Cause Candidates · 2021-01-11T13:15:22.722Z · EA · GW

Thanks for the clarifications in your previous two comments. Helpful to get more of an insight into your thought process.

Just a few comments:

  • I strongly doubt that a charity working on philosophy in schools would be helpful, and I don't like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, and more research into values-based education (to name a few).
  • This is a whole separate conversation that I'm not sure we have to get into right now too deeply (I think I'd rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example see Eva Vivalt's research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.

I'm hoping we're nearing a good enough understanding of each other's views that we don't need to keep discussing for much longer, but I'm happy to continue a bit if helpful.

Comment by jackmalde on Big List of Cause Candidates · 2021-01-11T11:07:53.636Z · EA · GW

OK, I mean you can obviously do what you want, and I appreciate that you've got a lot of causes to get through.

I don't place that much stock in S1 when evaluating things as complex as how to do the most good in the world. Especially when your S1 leads to comments such as:

  • "Philosophy seems like a terrible field" - I'd imagine you're in the firm minority here, and when that is the case I'd imagine it's reasonable to question your S1 and investigate further. Perhaps you should do a critique of philosophy on the forum (I'd certainly be interested to read it). There are people who have argued that philosophy does make progress and that this may not be obvious because philosophical progress tends to spawn other disciplines that then don't call themselves philosophy. See here for a write-up of philosophical success stories. In any case, what I really care about in a philosophical education is teaching people how to think (e.g. Socratic questioning, Bayesian updating etc.), not getting people to become philosophers.
  • "I also studied philosophy at university and overall came away with a mostly negative impression" - I mean, what about all the people who don't come away with a negative impression? They seem fairly abundant in EA.
  • "I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea" - I still don't get this comment to be honest. In my opinion the EA you speak of isn't doing something similar to what I propose, and even if they were, why would the fact that they don't see philosophy as central to what they're doing mean that teaching philosophy would obviously fail?

Anyway, I won't labour the point much more. 43 karma on my philosophy in schools post is a sign it isn't going to be revolutionary in EA and I've accepted that, so it's not that I want you to rate it highly; it's just that I'm sceptical of the process by which you rated it.

Comment by jackmalde on Big List of Cause Candidates · 2021-01-11T06:33:44.891Z · EA · GW

I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach.

I would go by the available empirical evidence, whilst noting any likely weaknesses in the studies. The weaknesses brought up by Khorton (and which you referenced in your comment) were actually noted in the original empirical review paper, which said the following regarding the P4C process:

  • “Many of the studies could be criticized on grounds of methodological rigour, but the quality and quantity of evidence nevertheless bears favourable comparison with that on many other methods in education.”
  • “It is not possible to assert that any use of the P4C process will always lead to positive outcomes, since implementation integrity may be highly variable. However, a wide range of evidence has been reported suggesting that, given certain conditions, children can gain significantly in measurable terms both academically and socially through this type of interactive process.”
  • “further investigation is needed of wider generalization within and beyond school, and of longer term maintenance of gains”

My overall feeling on scale was therefore that it was 'promising' but still unclear. To be honest, I'm not impressed with just giving a scale rating of 1 based on personal feeling/experience. Your tractability points possibly seem more objective and justified.

Comment by jackmalde on Big List of Cause Candidates · 2021-01-10T21:27:49.313Z · EA · GW

OK thanks for this reply! I think some of this is fair and as I say, I'm not clinging to this idea as being hugely promising. Some of your comments seem quite personal and possibly contentious, but then again I don't know what the context of the scoring is so maybe that's sort of the idea at this stage.

A few specific thoughts.

"Your posts conflicts with my personal experience about how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids as their curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression)."

OK this seems fairly personal and anecdotal (as I said maybe this is fine at this stage but I hope this sort of analysis wouldn't play a huge role in scoring at a later stage).

"I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea."

Not sure what point you're making here (I also know this EA by the way).

"I believe that there aren't enough excellent philosophy teachers for it to be implemented at scale."

"I don't give much credence to the papers you cite replicating at scale."

Perhaps fair! We could always train more teachers though.

"Philosophy seems like a terrible field. It has low epistemic standards. It can't come to conclusions. It has Hegel. There is simply a lot of crap to wade through."

Hmm. Well I at least feel fairly confident that a lot of people will disagree with you here. And any good curriculum designer should leave out the crap. My experience with philosophy has led me to go vegan, engage with EA and give effectively (think Peter Singer type arguments). I've found it quite important in shaping my views and I'm quite excited by the field of global priorities research which is essentially econ and philosophy.

"I imagine that teaching utilitarianism at scale in schools is not very feasible."

If you teach philosophy, you will probably spend at least a little bit of time teaching utilitarianism within that. Not really sure what you're saying here.

"I'd expect EA to loose a political value about teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly."

It's teaching philosophy, not teaching values. In the post I don't suggest we include EA explicitly in the curriculum. In any case, EA is the natural conclusion of a utilitarian philosophy and I would expect any reasonable philosophy curriculum to include utilitarianism.

"If I think about how philosophy in schools would be implemented, and you can see this in Spain, I imagine this coming about as a result of a campaign promise, and lasting for a term or two (4, 8 years) until the next political party comes with their own priorities. In Spain we had a problem with politicians changing education laws too often."

Ok interesting. I didn't really consider that its inclusion might just be overturned by another party. From my personal experience you don't get subjects being dropped very often and so I was hopeful for staying power, but maybe this is a fair criticism.

"When I think of trying to come up with a 100 or 1000 years research program to study philosophy in schools, the idea doesn't strike me as much superior to the 10 year version of: do a literature review of existing literature of philosophy in schools and try to get it implemented. This is in contrast with other areas for which e.g., a 100-1000 years+ observatory for global priorities research or unknown existential risks does strike as more meaningful."

OK fine this (and your later comments) was probably me just not knowing what you meant by 'time horizon'.

Comment by jackmalde on Big List of Cause Candidates · 2021-01-10T19:51:04.822Z · EA · GW

Can you give a bit more of an explanation about the scoring in the google sheet? E.g. time horizon, readiness, promisingness etc.

I was slightly disappointed to see such low scores for my idea of philosophy in schools (but I guess I should have realised by now that it's not cause X!). I'm not sure I agree with 'time horizon' being 'very short' though, given that some of the main channels through which I hope the intervention would be good are values spreading (which you rate as 'medium') and moral circle expansion (which you rate as 'long'). The whole point of my post was to argue for this intervention from a longtermist angle, and it was partly in response to 80,000 Hours listing 'broadly promoting positive values' as a potential highest priority. So saying the time horizon is 'very short' is a sign that you didn't engage with the post at all, or (quite possibly!) that I've misunderstood something quite important. If you do have some specific feedback on the idea I'd appreciate it!

Comment by jackmalde on An introduction to global priorities research for economists · 2021-01-10T10:15:35.616Z · EA · GW

Thanks for this, I think it's great!

I would like to (tentatively) suggest that it might be good to cover some basic ethics in the very first week, even before "Prospecting for Gold".

When I did my MSc in Economics at University College London I completed an optional course called "Ethics in Welfare Economics" which started off by making the distinction between positive and normative economics and stressing that it is necessary to make value judgements about what has intrinsic value in order to make normative statements about what should be done. This sounds really simple but I still think it's important to discuss, if only briefly, because I think a surprising number of economists don’t actually appreciate this.

For example, some people who study economics will think it important to maximise economic growth without ever asking the question, why? Why is it important to maximise economic growth? Is it because economic growth is intrinsically valuable? Is it because economic growth is simply instrumentally valuable insofar as it helps promote something else that has intrinsic value e.g. happiness? If the latter, the causal link between economic growth and happiness should be examined before we actually endorse maximising economic growth. EAs are quite philosophically-minded so this might be quite obvious to them, but this may not be the case for many economists that have had little exposure to philosophy.

The course also stressed Hume’s is-ought problem and how, to quote John Stuart Mill, “Questions of ultimate ends are not amenable to direct proof”. This is to say that we have to make our value judgements outside of the realm of positive economics, which is not something a lot of economists fancy doing, even though they must.

I admit I haven’t looked at your syllabus in (much) detail and it’s possible that the above wouldn’t fit in that well. Overall though I feel quite passionate about there being a greater focus on ethics/philosophy in economics and applaud your inclusions of normative uncertainty and population ethics, but wonder if there should be more preliminaries in this vein. If there is any interest in this then feel free to get in touch as I am potentially interested in making some resources on all of this (possibly including a youtube video).

Comment by jackmalde on jackmalde's Shortform · 2021-01-10T07:23:41.190Z · EA · GW

Thanks for that

Comment by jackmalde on Why EA meta, and the top 3 charity ideas in the space · 2021-01-10T06:56:15.195Z · EA · GW

OK, that's great, thanks.

Just to clarify, I agree that the post as a whole is definitely relevant, but I also think this part is too:

We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we feel like this list itself can be useful already. 

See the comments for more of an explanation as to what this might entail (I asked).

This seems like the sort of analysis that an 'Exploratory Altruism' charity would do, so it may be worth contacting Ozzie/Nuno to discuss avoiding potential duplication of effort, or to enquire about collaborating with them. It's possible the latter approach is preferable as they certainly have some highly interesting methodological ideas about how to assess the cause areas (for example see here and here).

Comment by jackmalde on jackmalde's Shortform · 2021-01-09T17:15:34.079Z · EA · GW

This is plausible. Unfortunately the opposite possibility - that people become less concerned about eating animals if their welfare is better - is also quite plausible. I would be interested in seeing some evidence on this matter.

Comment by jackmalde on Open and Welcome Thread: January 2021 · 2021-01-09T09:21:20.249Z · EA · GW

I've just realised that Facebook polls can't be made anonymous, which is a bit of a drawback, although one could just link to an external poll.

I think polls might be a decent idea for the EA Forum, but I suspect only if they're away from the main page. I don't think it's worth congesting the main page any more than it already is.

Comment by jackmalde on jackmalde's Shortform · 2021-01-08T21:51:07.144Z · EA · GW

As I continue to consider the implications of a longtermist philosophy, it is with a heavy heart that the animal-focused side of me feels less enthused about welfare improvements.

This post by Tobias Baumann provides some insight into what might be important when thinking about animal advocacy from a longtermist point of view and is leading me to judge animal advocacy work against three longtermist criteria:

  1. Persistence: A focus on lasting change as opposed to just short-term suffering reduction
  2. Stability: Low risk of bringing controversy to the effective animal advocacy movement or causing divisions within it
  3. Moral Circle Expansion: The potential to expand humanity's moral circle to include anything sentient

I don’t feel that welfare improvements score particularly well against any of these criteria. 

Persistence - Corporate welfare improvements are promises that are made public, and reversing them should come with reputational damage; however, there are reasons to believe that some companies will fail to follow through with their commitments (see here). It isn't clear how well welfare improvements might persist beyond the short run into the medium/long run.

Stability - A fairly healthy contingent of the animal advocacy and even EAA movements feels uncomfortable about a focus on welfare improvements, as this can be seen to implicitly support animal agriculture and so may be counterproductive to the goal of abolishing all animal exploitation.

Moral Circle Expansion - Unclear. It is possible that welfare improvements may make people feel less concerned about consuming animal products, resulting in a persistent lack of concern for their moral status. It is also possible that there is an opposite effect (see here).

I look forward to further work on longtermism and animal advocacy. I suspect such work may redirect efforts away from welfare improvements and towards areas such as legal/legislative work, wild animal suffering, capacity building, cultured meat, and maybe even general advocacy. Whilst I feel slightly uncomfortable about a potential shift away from welfare improvements, I suspect it may be justified.

Comment by jackmalde on evelynciara's Shortform · 2021-01-08T19:06:08.087Z · EA · GW

Also just realised that the new legal priorities research agenda touches on this with some academic citations on pages 14 and 15.

Comment by jackmalde on evelynciara's Shortform · 2021-01-08T18:52:18.398Z · EA · GW

Toby Ord has spoken about non-consequentialist arguments for existential risk reduction, which I think also work for longtermism more generally. For example, Ctrl+F for "What are the non-consequentialist arguments for caring about existential risk reduction?" in this link. I suspect relevant content is also in his book The Precipice.

Some selected quotes from the first link:

  • "my main approach, the guiding light for me, is really thinking about the opportunity cost, so it's thinking about everything that we could achieve, and this great and glorious future that is open to us and that we could do"
  • "there are also these other foundations, which I think also point to similar things. One of them is a deontological one, where Edmund Burke, one of the founders of political conservatism, had this idea of the partnership of the generations. What he was talking about there was that we've had ultimately a hundred billion people who've lived before us, and they've built this world for us. And each generation has made improvements, innovations of various forms, technological and institutional, and they've handed down this world to their children. It's through that that we have achieved greatness ... is our generation going to be the one that breaks this chain and that drops the baton and destroys everything that all of these others have built? It's an interesting kind of backwards-looking idea there, of debts that we owe and a kind of relationship we're in. One of the reasons that so much was passed down to us was an expectation of continuation of this. I think that's, to me, quite another moving way of thinking about this, which doesn't appeal to thoughts about the opportunity cost that would be lost in the future."
  • "And another one that I think is quite interesting is a virtue approach ... When you look at humanity's current situation, it does not look like how a wise entity would be making decisions about its future. It looks incredibly juvenile and immature and like it needs to grow up. And so I think that's another kind of moral foundation that one could come to these same conclusions through."

Comment by jackmalde on Why EA meta, and the top 3 charity ideas in the space · 2021-01-08T18:06:18.705Z · EA · GW

Whoops, I have fixed it now.

Comment by jackmalde on Why EA meta, and the top 3 charity ideas in the space · 2021-01-08T16:12:33.175Z · EA · GW

I like these charity ideas, especially the 'Exploratory altruism' one.

I don't know if you've seen this but Nuno Sempere and Ozzie Gooen mention here that they may look to evaluate existing cause area ideas to determine if they should be researched further. Seems closely linked to the 'Exploratory altruism' idea.

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-06T19:10:21.451Z · EA · GW

I think it's important to ask why you think it's horrible to bomb the planet into non-existence. Whatever reason you have, I suspect it probably just simplifies down to you disagreeing with the core rationale of person-affecting views.

For example, perhaps you're concerned that bombing the planet will prevent a future that you expect to be good. In this case you're just disagreeing with the very core of person-affecting views: that adding happy people can't be good.

Or perhaps you're concerned by the suffering caused by the bombing. Note that Meacham's person-affecting view counts that suffering as 'harmful' too; it just holds that the bombing would avoid a greater quantity of harm in the future. Also note that many people, including totalists, hold intuitions that it is OK to cause some harm to prevent greater harm. So really what you're probably disagreeing with in this case is the claim that you would actually be avoiding a greater harm by bombing. This is probably because you reject the idea that adding happy future people can never outweigh the harm of adding unhappy future people. In other words, once again, you're simply disagreeing with the very core of person-affecting views: that adding happy people can't be good.

Or perhaps you don't like the bombing for deontological reasons i.e. you just can't countenance that such an act could be OK. In this case you don't want a moral view that is purely consequentialist without any deontological constraints. So you're disagreeing with another core of person-affecting views: pure consequentialism.

I could probably go on, but my point is this: I do believe you find the implication horrible, but my guess is that this is because you fundamentally don't accept the underlying rationale.

Comment by jackmalde on How much more important is work in USA over UK? · 2021-01-06T17:54:25.213Z · EA · GW

Thanks, this is very useful. 

Would you say then that general EA movement building is likely to be more important in US? To make this more concrete: at the current margin do you think one additional person doing EA movement building in US is likely to do more good than one additional person doing EA movement building in UK?

This will of course depend both on how influential the US is relative to the UK, and on how well-known EA currently is in the UK versus the US. My impression is that the proportion of Americans who are EAs is far smaller than the proportion of Brits who are, so the US is likely to win on both metrics.

Comment by jackmalde on Can I have impact if I’m average? · 2021-01-05T19:02:38.251Z · EA · GW

"to have an impact you need to be among the 0.1%-1% best in your field."

This LessWrong post makes an interesting point about 'exploiting dimensionality' to have an impact. 

For example (using the top comment on the post), you may not be the best at AI Safety and you may not be the best YouTuber, but if you combine the two and become an AI Safety YouTuber you may well be one of the best at that and have very high impact! (EDIT: I'd recommend people read the post as there's a bit more to it than this and it's very interesting).

That's a bit of an aside from your point though. I completely agree we need to counter the despair that people feel at not having unusually high impact - it's not helpful at all.

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T16:40:56.313Z · EA · GW

"II. Harm is done to a subject in a world if and only if she exists in that world and her welfare there is lower than her welfare in an alternate world."

"III. In worlds where a subject doesn’t exist, we treat her welfare as if it is equal to 0 (but again, she cannot be harmed in that world)."

Given this:

  • If a person exists in only one of two outcomes and they have negative wellbeing in the outcome where they exist, then they have been harmed.
  • If a person exists in only one of two outcomes and they have positive wellbeing in the outcome where they exist, then there is no harm to anyone.

So creating net negative lives is bad under Meacham's view. 

It's possible I'm getting something wrong, but this is how I'm reading it. I find thinking of 'counting for zero' confusing so I'm framing it differently.
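
To make this concrete, here is a toy harm calculation on my reading of the view, using illustrative welfare numbers of my own (not from the paper):

```latex
% Person P exists in world A but not in world B; nonexistence is
% treated as welfare 0, and harm only accrues in worlds where P exists.
% Case 1: P's welfare in A is -5, so choosing A harms P:
\text{Harm}_A(P) = \max\{0,\; w_B(P) - w_A(P)\} = \max\{0,\; 0 - (-5)\} = 5

% Case 2: P's welfare in A is +5, so no one is harmed in either world:
\text{Harm}_A(P) = \max\{0,\; 0 - 5\} = 0
```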

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T13:55:18.138Z · EA · GW

Deleted my previous comment because I got the explanation wrong. 

In "not bomb" there will be various people who go on to exist in the future. In "bomb" these people won't exist and so will be given wellbeing level 0. So all you need is for one future person in the "not bomb" world to have negative welfare and there is harm. If you bomb everyone then there will be no one that can be harmed in the future.

This is why world 2 is better than world 1 in the 'central illustration of HMV' section of the post.

It's quite possible I've got this wrong again and should only talk about population ethics when I've got enough time to think about it carefully!

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T13:10:27.854Z · EA · GW

"We have only two choices and all those people who exist in one outcome (i.e. the future people) have their welfare ignored on this view - they couldn't have been better off."

Good challenge.

I'm not sure if I'm right here as I don't have time to think about this in much depth, but I think it depends on your interpretation of "possible worlds". If we just consider the possible worlds to be "bomb" and "not bomb" I think you're right.

If you allow for there to be a whole range of possible "not bomb" worlds, then not bombing will result in a great deal of harm (as you would be able to compare counterparts across all these possible worlds), whereas bombing will ensure you minimise harm to zero.

It's not clear to me that, just because we are making a choice between bombing and not bombing, we can then consider only two possible worlds, but I'm not sure about this and need to think about it more.

Comment by jackmalde on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T10:43:14.852Z · EA · GW

Thanks for sharing Arden. I strongly upvoted because I think considering alternative views in population ethics is important and thought this write-up was interesting and clear (this applies to both your explanation of the paper and then your subsequent reaction). I'm also not sure if I ever would have got around to reading the Meacham paper myself, but I'm glad I now understand it. Overall I would be happy if you were to share similar write-ups in the future!

To give some more specific thoughts:

  • I found your discussion about how rejecting IIA may not be that weird in certain situations quite interesting, and not something I had really considered before. Having said that, I still (counter to you) prefer accepting the mere addition paradox over rejecting IIA, but I now want to think about that more and it's possible I could change my mind on this
  • I think I agree with your 'ad hoc' point about Meacham's saturating counterpart relations. It all seems a bit contrived to me
  • Having said all that I don't think I find the 'radical implications' counterargument compelling. I don't really trust my intuitions on these things that much and it's worth noting that some people don't find the idea that ending the world may be a good thing to be counterintuitive (I actually used to feel quite strongly that it would be good!). Plus maybe there is a better way to do this than by bombing everything. Instead of rejecting things because of 'radical implications' I prefer to just factor in moral uncertainty to my decision-making, which can then lead to not wanting to bomb the planet even if one really likes a person-affecting view (EDIT: I agree with Halstead's comment on this and think he has put the argument far better than I have)

So thanks for giving me some things to think about and I hope to see more of these in the future. For now I remain a (slightly uneasy) totalist.

Comment by jackmalde on What posts do you want someone to write? · 2021-01-04T07:31:11.661Z · EA · GW

The implications of Brexit for the potential to do good when located in the UK.

Plausibly lessened if the UK has less influence on the world stage. I appreciate this may be seen as a somewhat political post, but I think it may be possible to write it without actually passing judgement on whether Brexit was a good or bad thing on the whole.

Comment by jackmalde on How much more important is work in USA over UK? · 2021-01-04T07:19:17.742Z · EA · GW

"Would you be able to at least say become a staffer or something even if you couldn't become a senior civil servant?"

Good point. I need to look into this further and figure out what I may still be able to do from a policy/government focus in the US even without having signed up for the selective service.

"I know very little about the UK and its influence but I would imagine it to have diminished considerably since leaving the EU"

Another good point that I surprisingly have not really considered much. It seems plausible that Brexit could have significant implications for the potential to do good in the UK. Perhaps this is something that should be discussed more.

Thanks for your comment, you've given me some things to think about!

Comment by jackmalde on Open and Welcome Thread: January 2021 · 2021-01-04T07:10:29.628Z · EA · GW

Thanks, I actually wasn't aware that Facebook group existed.

Comment by jackmalde on How much more important is work in USA over UK? · 2021-01-03T11:06:01.158Z · EA · GW

Yep fair enough. I probably should have given my personal situation up front. 

Thanks for your thoughts though, still helpful.

Comment by jackmalde on How much more important is work in USA over UK? · 2021-01-03T10:54:27.857Z · EA · GW

I welcome general answers that would be of use to anyone, but thought I'd give some info around my personal motivation for asking as well:

I am a dual citizen US/UK but currently live in UK. I am young, single (at least for now!) and am quite open to moving to the US. I even have some family over there, so overall it wouldn't be too stressful/difficult to move.

If I were eligible for good government roles in the US, I would just make that a priority. However, rather unfortunately, I never signed up for the US Selective Service (essentially their military draft) which you have to do before age 26 (I realised this when I was 26) and not having signed up to this makes you ineligible for many US federal jobs. Living in the UK I basically never even heard about this, and I know I'm not the only one this has happened to.

It's still the case, however, that I can easily move to the US, so now I want to think about whether I should still go to the US for another type of role (say, a movement-building role), or go into policy in the UK. These aren't the only two options I am considering, but for the purposes of this question I am narrowing it to these. I'm still figuring out some things about personal fit, so you may as well just assume that my personal fit is constant across all roles, although I know of course that personal fit is a key consideration.

Comment by jackmalde on How much more important is work in USA over UK? · 2021-01-03T10:49:13.976Z · EA · GW

Thanks for this Alex, that certainly makes sense. 

I suppose it wouldn't hurt to give a bit more info around my motivation for asking. I am a dual citizen US/UK but currently live in UK. I am young, single (at least for now!) and am quite open to moving to the US. I even have some family over there, so overall it wouldn't be too stressful/difficult to move.

If I were eligible for good government roles in the US, I would just make that a priority. However, rather unfortunately, I never signed up for the US Selective Service (essentially their military draft) which you have to do before age 26 (I realised this when I was 26) and not having signed up to this makes you ineligible for many US federal jobs. Living in the UK I basically never even heard about this, and I know I'm not the only one this has happened to.

So now I want to think about whether I should still go to the US for another type of role, or go into policy in the UK (these aren't the only two options I am considering, but for the purposes of this question I am narrowing it to these). I think what you're saying is that, if I would go for policy roles in the US, then I may as well go for them in the UK as well (provided I have decent personal fit for policy). Is that fair?

Comment by jackmalde on Open and Welcome Thread: January 2021 · 2021-01-02T19:02:59.976Z · EA · GW

What do people think about making it possible to conduct polls on the forum? This could be an easy way to gauge what EAs (or at least those who engage on this forum) think about certain issues. 

Comment by jackmalde on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T20:54:08.969Z · EA · GW

"If you instead adopt a problem/knowledge focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc), but don't open yourself up to what (in my view) are its drawbacks"

Maybe (just maybe) we're getting somewhere here. I have no interest in adopting a 'problem/knowledge focused ethic'. That would seem to presuppose the intrinsic value of knowledge. I only think knowledge is instrumentally valuable insofar as it promotes welfare.

Instead most EAs want to adopt an ethic that prioritises 'maximising welfare over the long run'. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation - whether it does or not is essentially an empirical question. If it doesn't require it, then a strong longtermist shouldn't consider a lack of knowledge creation to be a significant drawback.

Comment by jackmalde on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T19:03:55.474Z · EA · GW

Thanks for this! All interesting and I will have to think about this more carefully when my brain is fresher. I admit I'm not very familiar with the literature on Knightian uncertainty and it would probably help if I read some more about that first.

"It is misleading. Work like this leads people to think that Will+Hillary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes, if I am incorrect then their paper is either really poor or I really don't get it.)"

OK if I understand you correctly, what you have said is that Will and Hilary present Knightian uncertainty as axiologically different to EV reasoning, when you don't think it is. I agree with you that ideally section 4.5 should be considering some axiologically different decision-making theories to EV.

Regarding the actual EV calculations with numbers, I would say, as I did in a different comment, that I think it is pretty clear that they only carry out EV calculations for illustrative purposes. To quote:

Of course, in either case one could debate these numbers. But, to repeat, all we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence. Given the multitude of plausible ways by which one could have such influence, diverse points of view are likely to agree on this claim

This is the point they are trying to get across by doing the actual EV calculations.