Posts

Voting open for Project for Awesome 2021! 2021-02-12T02:22:20.339Z
Project for Awesome 2021: Video signup and resources 2021-01-31T01:57:59.188Z
Project for Awesome 2021: Early coordination 2021-01-27T19:11:00.600Z
Even Allocation Strategy under High Model Ambiguity 2020-12-31T09:10:09.048Z
[Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand 2020-11-05T09:11:38.138Z
Hedging against deep and moral uncertainty 2020-09-12T23:44:02.379Z
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z
Replaceability with differing priorities 2020-03-08T06:59:09.710Z
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z

Comments

Comment by MichaelStJules on Silk production: global scale and animal welfare issues · 2021-04-21T23:15:12.039Z · EA · GW

Ya, fully banning silk might be (much) more feasible than banning chicken meat/eggs, although Lewis said he thinks meat chicken welfare reforms reduce 5-75% of their suffering (~40% in the US, ~25% in the UK), so full bans are only a few times better in expectation, in suffering terms. (I don't have a strong view on this.)
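
To make the "only a few times better" arithmetic concrete, here's a minimal sketch in Python (assuming, as a simplification, that a full ban eliminates 100% of the suffering the reforms address):

```python
# If welfare reforms eliminate a fraction r of chickens' suffering and a full ban
# eliminates all of it, the ban is 1/r times better in suffering terms (per animal).
for label, r in [("US (~40%)", 0.40), ("UK (~25%)", 0.25)]:
    print(f"{label}: full ban ~{1 / r:.1f}x better than reforms")
# US: ~2.5x; UK: ~4.0x -- i.e. "only a few times better".
```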

Maybe work on fur bans is a useful comparison?

Comment by MichaelStJules on EA Forum feature suggestion thread · 2021-04-21T20:33:49.856Z · EA · GW

A topic could be controversial in society but the votes could still go mostly one way on the EA Forum itself, though. For example, I wouldn't be surprised if Democrat-favouring election posts were not scored as very controversial on the EA Forum, given the political leanings of EA. Do we also want to consider posts on controversial topics more broadly?

Comment by MichaelStJules on EA Forum feature suggestion thread · 2021-04-21T17:24:04.903Z · EA · GW

Maybe turn off strong voting in comments, or even stop comment karma from counting toward users' total karma, in such posts? How do we decide which posts to consider controversial, though? Just have the mods do it (they kept object-level election posts in the personal blog)?

Comment by MichaelStJules on If Bill Gates believes all lives are equal, why is he impeding vaccine distribution? · 2021-04-21T02:00:45.369Z · EA · GW

Rather than impeding vaccine distribution, could this not preserve a profit motive to incentivize further vaccine distribution? A company motivated by profit but distributing vaccines at a loss might not be too excited about doing that very quickly, and might try to spend some of its efforts on more profitable things. At a good enough profit, they'll put in 100%.

Did they guarantee lower prices for developing countries, though? Developed countries can pay extra without too much harm, but I'd worry about developing ones.

Comment by MichaelStJules on Silk production: global scale and animal welfare issues · 2021-04-21T01:46:11.865Z · EA · GW

Thanks for this post!

That being said, given the scale of silk farming, advocacy on this issue could plausibly be highly cost-effective when compared on a species-neutral basis to interventions to reduce vertebrate farmed animal suffering.

It would depend on the specific intervention (or issues of tractability and neglectedness), but the scale of silk farming seems most likely lower than that of chicken farming (for meat and eggs) after adjusting for moral weight and probability of sentience, at least by my judgement. In your table, you have 41-99 billion silkworms alive on average at any time, but the global chicken population is ~20 billion at any time, so the silkworm population is only 2-5 times larger. I'd guess the average chicken has it more than 5x worse than the average silkworm in expectation after accounting for moral weight and probability of sentience, and either factor alone could account for that.

According to these tables, chickens (red junglefowl) have 221,000,000 neurons in their whole nervous system and 61,000,000 neurons in their sensory-associative structure (the pallium/DVR), whereas the insects in these tables have at most 1,180,000 neurons in their whole nervous systems, and span 2,500 (common fruitfly) to 200,000 (common cockroach) in their sensory-associative structures (the corpora pedunculata), so a chicken should have >187x more neurons overall and >300x more in their sensory-associative structure specifically. If we use linear weighting for moral weight, this would be >187x more moral weight per chicken than per silkworm, and with square-root weighting, >13x more, before accounting for probability of sentience.
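
A minimal sketch of the ratio arithmetic above, using the quoted figures (the square-root weighting is just one illustrative choice of scaling):

```python
chicken_whole, chicken_pallium = 221_000_000, 61_000_000   # red junglefowl figures
insect_whole_max, insect_sensory_max = 1_180_000, 200_000  # largest figures among the listed insects

print(chicken_whole / insect_whole_max)           # ~187x more neurons overall
print(chicken_pallium / insect_sensory_max)       # ~305x more in sensory-associative structures
print((chicken_whole / insect_whole_max) ** 0.5)  # ~13.7x under square-root weighting
```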

Interestingly, larval zebrafish only have 100,000 neurons in their whole nervous system according to the first table, 10x less than common cockroaches and some bees, so I'd guess it would be less than this for silkworms, and chickens would have >2,000x more neurons overall.

Comment by MichaelStJules on Changes in conditions are a priori bad for average animal welfare · 2021-04-20T23:04:54.575Z · EA · GW

Thanks for the comment!

you're conflating fitness and welfare. Counterexample: a high-resources, high-predator environment where being super paranoid and stressed out all the time gives you high fitness and poor welfare.

I think this is fair, but I still expect the correlation to usually be positive and my argument is only probabilistic and a priori, anyway.

most traits are polygenic and, therefore, normal-ish. Changes in the environment (e.g. higher temperature) increase fitness on one end of the distribution (e.g. the more heat tolerant individuals are doing great) while decreasing it in the other (e.g. the less heat tolerant are doing badly). What are the total welfare implications of that?

In this example, you're shifting the mean temperature, and I think a given species will be best off in the short run with the mean temperature they're adapted to. A change in either direction will put the average away from what the individuals are best adapted to on average, and under a symmetric assumption (like approximate normality), I'd expect it to hurt more individuals than it helps. So, for average welfare, there's more reason to think it's bad than good, and the expected value is negative.

you're assuming a single species and an abiotic environment, but in reality almost any system has many species. And these species often respond individualistically to changes in the environment (e.g. one species is limited by temperature while another is limited by food and is indifferent to temperature within the relevant range). One species decreasing in abundance is an opportunity for others who now find themselves in abundance.

I didn't assume this, and the general structure of my argument seems as strong whether you include other interacting species or not; they're just part of the environment and conditions under consideration. However, I did invoke symmetry assumptions a lot, which I'd be more reluctant to apply in any specific case about which I'd have more information, and would instead have complex cluelessness.

a regularly changing environment might select for adaptability (e.g. through phenotypic plasticity), even if the changes are not cyclical. Changes might be dealt with without decreased welfare.

Good point. Something I've been thinking about lately, though, is that shorter and larger generations (which I'd expect to have lower average welfare due to higher mortality and lower investment per offspring) and r-selected species would be favoured by this, which also seems bad for average welfare.

On the other hand, small enough changes might lead to antifragility and be good for average welfare.

Comment by MichaelStJules on Concerns with ACE's Recent Behavior · 2021-04-18T00:01:00.994Z · EA · GW

(I'm currently an intern for ACE, but speaking only for myself.)

With the context from your edit that was omitted from the original post, I think it does make sense and is not absurd at all on its face, but the phrasing "simply by being white" was hyperbole (which does lend itself to misinterpretation, so better to avoid), and was explained by the claims that follow. I think the OP omitting this context was probably bad and misleading, although I don't think it was intended to mislead.

Comment by MichaelStJules on Concerns with ACE's Recent Behavior · 2021-04-17T19:48:28.785Z · EA · GW

(I'm currently an intern for ACE, but speaking only for myself.)

First, I'd like to point out some related discussion here and here.

I think EA/EAA should have evidence-based conversations about how important social justice, inclusion, equity, diversity/representation, etc. are for EA/EAA, including whether they deserve much attention at all and whether some things might cause more harm than good (I do think there are at least some small and fairly uncontroversial useful steps organizations can make and have already made [1]), but the main EAA Facebook group does not seem like an appropriate place to have them, since it's one of the first places people get exposed to EAA. I think the EA Forum is an appropriate place to have these conversations. Smaller FB groups that aren't the first point of exposure for many to EA/EAA are probably okay, too.

Imagine being worried about an issue that personally affects you and/or the people close to you, and going to one of your first EA meetups, where your worries are debated and dismissed by many. It wouldn't be surprising if many people in similar situations would not want to come back after that, or to find out that our community's demographics are so skewed. This is not to say there aren't other important - maybe more important - contributors to our skewed demographics, e.g. EA seems more appealing to atheists with quantitative backgrounds, and the demographics of people with such backgrounds are already skewed. One might also respond that we want to select against people who would be put off by discussions of prioritization, since EA is about prioritization, but I think we should give people some slack for issues that affect them personally, and keep in mind their own perceptions of how bad it is for them.

(EDIT: I've made substantial edits to this paragraph after reading through the study more.) Furthermore, the event was not just about racism in society (or the US) as a whole, but also racism in the animal advocacy movement specifically. From this Faunalytics study of animal advocates in the US and Canada, this graph suggests female and non-binary animal advocates and animal advocates from minority groups are much more likely to report experiencing discrimination in their roles as animal advocates than male animal advocates and animal advocates not belonging to minority groups, and this graph shows 28.6% of the 14 advocates of colour (a small sample) as having experienced discrimination or harassment on the basis of their race/colour/ethnicity specifically [2]. Common sense, this graph, this figure for paid advocates' intentions to leave the movement and this figure for unpaid advocates' intentions suggest this is bad for retention and movement growth, although I don't know what the actual rates of turnover and people completely leaving the movement are. Organizations previously recommended by ACE and given grants by Open Phil have had issues with harassment in general, with multiple changes in leadership; see some discussion in the comments here about more recent issues.

So, the comments on that FB post weren't just dismissing issues that affect people interested in animal advocacy as not worth prioritizing; they could be read (whether this was the intention or not) as dismissing issues that people of colour experience in animal advocacy itself, issues that push them away from it, and as dismissing a request to help address such issues.

  1. e.g. better and better-enforced anti-discrimination and anti-harassment policies, and posting job openings more widely, or even specifically to underrepresented communities, to avoid taking applicants only from already badly unrepresentative networks.
  2. There isn't a comparison for white people on the basis of being white, although I'd guess it's lower. There could also be survivorship bias, although the study did include some former advocates.

Comment by MichaelStJules on My personal cruxes for focusing on existential risks / longtermism / anything other than just video games · 2021-04-14T07:55:07.499Z · EA · GW

It seems confusing for a view that's suffering-focused not to commit you (or at least the part of your credence that's suffering-focused, which may compromise with other parts) to preventing suffering as a priority. I guess people include weak NU/negative-leaning utilitarianism/prioritarianism in (weakly) suffering-focused views.

What would count as weakly suffering-focused to you? Giving 2x more weight to suffering than you would want to in your personal tradeoffs? 2x more weight to suffering than pleasure at the same "objective intensity"? Even less than 2x?

FWIW, I think a factor of 2 is probably within the normal variance of judgements about classical utilitarian pleasure-suffering tradeoffs, and there probably isn't any objective intensity, or at least it isn't discoverable, so such a weakly suffering-focused view wouldn't really be distinguishable from classical utilitarianism (or a symmetric total view with the same goods and bads).

Comment by MichaelStJules on My personal cruxes for focusing on existential risks / longtermism / anything other than just video games · 2021-04-13T06:56:32.797Z · EA · GW

Interesting!

By x-risks, do you mean primarily extinction risks? Suffering-focused and downside-focused views, which you cover after strong longtermism, still support work to reduce certain x-risks, specifically s-risks. Maybe it would say more about practical implications to cover x-risks before suffering-focused or downside-focused views? On the other hand, if you say you should focus on x-risks, but then find that there are deep tradeoffs between x-risks important to downside-focused views compared to upside-focused views, and you have deep/moral uncertainty or cluelessness about this, maybe it would end up better to not focus on x-risks at all.

In practice, though, I think current work on s-risks would probably look better than non x-risk work even for upside-focused views, whereas some extinction risk work looks bad for downside-focused views (by increasing s-risks). Some AI safety work could look very good to both downside- and upside-focused views, so you might find you have more credence in working on that specifically.

It also looks like you're actually >50% downside-focused, conditional on strong longtermism, just before suffering-focused views. This is because you gave "not suffering-focused" 75% conditional on previous steps and "not downside-focused" 65% conditional on that, so 48.75% neither suffering-focused nor downside-focused, and therefore 51.25% suffering-focused or downside-focused, but (I think) suffering-focused implies downside-focused, so this is 51.25% downside-focused. (All conditional on previous steps.)
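
In code, that arithmetic is just:

```python
p_not_sf = 0.75  # P(not suffering-focused | previous steps)
p_not_df = 0.65  # P(not downside-focused | not suffering-focused)

p_neither = p_not_sf * p_not_df  # 0.4875
# Suffering-focused implies downside-focused, so everything else is downside-focused:
print(1 - p_neither)             # 0.5125, i.e. >50% downside-focused
```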

Comment by MichaelStJules on Research suggests BLM protests increase murder overall · 2021-04-12T22:16:03.448Z · EA · GW

Red is for murder and blue is for property crime, right?

Even if it's still elevated at year 4, that's as far as the analysis warrants a conclusion, and 4 years is still the short term. You could specify "in the 4 years following the first protest" (although does this figure include police killings?). Your title reads to me as saying the rate will settle higher than without the protests, or at least that the drop in police killings will never make up for the increase in other murders, neither of which follows, and looking at that figure, neither seems more likely than not. How I read the title, it doesn't even seem more likely to be true than false.

Comment by MichaelStJules on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-12T04:33:04.573Z · EA · GW

Great post, thanks for writing this!

I would be most excited about projects 3c and 4a, since I think we could draw the strongest conclusions from them by directly asking about artificial sentience (not-so-intelligent artificial sentience, more like nonhuman animals, in particular) and more neglected animals (wild animals, invertebrates), and possibly infer causation.

For 3c specifically, I'd want to see how people's attitudes towards artificial sentience and neglected animals change in response to major animal welfare events, e.g. animal advocacy/welfare media attention in general, or specifically ballot initiatives, new legislation, corporate commitments, undercover investigations, etc. I think we'd need to collect a lot of data to do this, though.

This also might be relevant for moral circle expansion towards farmed animals from humans, although I'm not sure we can assume causality rather than just a common cause (e.g. liberal/progressive values): https://www.washingtonpost.com/politics/2019/07/26/who-supports-animal-rights-heres-what-we-found/

Comment by MichaelStJules on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-12T03:44:53.298Z · EA · GW

I agree with Jack that neutrality about creating happy lives is (probably) a minority view within EA, although I'm not sure. 80% of EAs are consequentialist according to the most recent EA survey, and most of those probably reject neutrality: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics

The conclusion in favour of extinction doesn't necessarily follow, though, depending on the exact framing of the asymmetry and neutrality (although I think it would follow according to the views CLR defends, and I don't think even everyone at CLR agrees with those views). See the soft asymmetry and conclusion here: https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/

Note that this view does satisfy transitivity, but not the independence of irrelevant alternatives, i.e. whether A is better than B can depend on what other options are available. I think standard intuitions about the repugnant conclusion, which the soft asymmetry avoids (if I recall correctly), do not satisfy the independence of irrelevant alternatives. There are other cases where independence is violated by common intuitions: https://forum.effectivealtruism.org/posts/HyeTgKBv7DjZYjcQT/the-problem-with-person-affecting-views?commentId=qPDNPCsWuCF86hsqi

For what it's worth, this view was only put forth recently, so it's likely few people know about it, but I suspect it's closest to a temporally impartial version of most people's moral intuitions.

There's also the possibility of s-risks by omission, like failing to help aliens (causally or acausally), which extinction would exacerbate, although I'm personally skeptical that we would find and help aliens. Some discussion here: https://centerforreducingsuffering.org/s-risk-impact-distribution-is-double-tailed/

Personally, I basically agree with the views in that article by CLR, the asymmetry in particular is one of my strongest intuitions (the hard version, additional happy lives aren't good), and I think that an empty future would be optimal because of the asymmetry. I do not find this counterintuitive.

Comment by MichaelStJules on Small animals have enormous brains for their size · 2021-04-10T21:16:04.565Z · EA · GW

Here's another potentially interesting example, based on your article here and this Vox article.

Dan Fergus, a researcher that works with Menninger, estimates that the average person has between 1.5 and 2.5 million mites, but no one really knows.

You wrote:

No formal count has been made of the number of neurons in a springtail, but Tomasik compares its body size to a fruit fly and concludes that if neurons scale linearly with body size between the two, a springtail has about 5800 neurons.

(The number is taken from Brian Tomasik's article here.)

5,800 might be an overestimate for mites, if they're much smaller, but I would assume mites have at least as many neurons as C. elegans, 302. Combining the estimate for the number of mites and the estimate for the number of neurons per mite, it looks like at least 0.5 billion neurons from mites on your body, but maybe up to around 10x more. The human brain has ~86 billion neurons. So at least around 0.6% of the neurons in or in close proximity to your body are not your own, but invertebrates'. I wonder what this would look like if you include the mites on your bed, in your room, or your whole home. Could the numbers of human neurons and mite neurons in a house be within the same order of magnitude?
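
A rough sketch of those bounds, using only the estimates quoted above (the per-mite neuron count is the speculative part):

```python
mites_low, mites_high = 1.5e6, 2.5e6  # estimated mites per person
neurons_per_mite = 302                # C. elegans floor; 5,800 is likely an overestimate for mites
human_neurons = 86e9                  # neurons in the human brain

mite_neurons_low = mites_low * neurons_per_mite    # ~4.5e8, i.e. around 0.5 billion
mite_neurons_high = mites_high * neurons_per_mite  # ~7.6e8 even at the C. elegans floor
share = mite_neurons_low / (human_neurons + mite_neurons_low)
print(f"{share:.2%}")  # ~0.52%, in the ballpark of the 0.6% above
```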

Comment by MichaelStJules on Announcing "Naming What We Can"! · 2021-04-10T06:03:37.706Z · EA · GW

SWB already means subjective well-being. Can't do that.

Comment by MichaelStJules on Voting reform seems overrated · 2021-04-10T00:58:07.709Z · EA · GW
  • Coalitions have the problem of it being hard for voters to hold specific parties or leaders to account for policies.

Is this worse than under FPTP, though, when there are often effectively two choices? There will be a leader of the coalition, so you could vote for anyone else, but when there are effectively two parties and you hate one of them, your options are increasing the risk of letting the party you hate win, or failing to hold your preferred party to account.

Comment by MichaelStJules on Research suggests BLM protests increase murder overall · 2021-04-09T21:57:56.404Z · EA · GW

I think you should modify the title to include "in the short term", since the increase in non-police killings was temporary. It seems pretty plausible that killings are reduced overall in the longer term.

Comment by MichaelStJules on Research suggests BLM protests increase murder overall · 2021-04-09T21:42:40.091Z · EA · GW

Did they check the sizes of the effects over time since the protests? If BLM achieves their aims and gets the reforms they want, killings by police might remain reduced (but perhaps less so), while policing would become less politicized and scrutinized, and so become more effective again.

EDIT: Towards the end of the Vox article:

The good news is that even if Campbell’s finding about the increase in murders following BLM protests holds up to further scrutiny, the effect doesn’t appear to last for long. By year four, Campbell no longer observes a statistically significant increase in murders, indicating that whatever is going on with murders is hopefully not long term.

Comment by MichaelStJules on Forget replaceability? (for ~community projects) · 2021-04-09T21:22:32.399Z · EA · GW

I'm guessing 2 is in response to the example I removed from my comment, roughly that starting a new, equally cost-effective org working on the same thing as another org would be pointless and create waste. I agree that there could be efficiency improvements, but now we're asking how much, and whether that justifies the co-founders' opportunity costs and other costs. The impact of the charity now comes from a possibly only marginal increase in cost-effectiveness. That's a completely different and much harder analysis. I'm also more skeptical of the gains in cases where EA charities are already involved, since they are already aiming to maximize cost-effectiveness.

Comment by MichaelStJules on JamesOz's Shortform · 2021-04-09T06:10:23.371Z · EA · GW

I'm not very familiar with the grassroots, so maybe I'm way off.

I think some of the big effective animal advocacy groups started as grassroots, and then, because they were judged to be cost-effective, they were recommended by ACE or funded by Open Phil until they became big and weren't really grassroots anymore.

  1. Maybe it's primarily because big funders don't value a lot of grassroots work (rightly or wrongly), and if they did, those orgs would professionalize and scale up.
  2. Or, some grassroots work is necessarily too low-scale (even if cost-effective) and it's not worth the effort to try to estimate its value. So, projects with greater scale will disproportionately be more well-funded.
  3. Or, maybe grassroots work, even on the same things in different regions, is more variable in cost-effectiveness because of less structure and different organizers in each region, or funders expect it to be. A fur campaign succeeding in one city might not tell you much about whether a fur campaign in another city will succeed if they share none of the same organizers.

The Humane League started as a grassroots group, has a large network of campus activists so still does grassroots work, and they support smaller groups with the Open Wing Alliance. I think they have done training for other groups, too. Maybe they're pretty unique this way, though?

Comment by MichaelStJules on Forget replaceability? (for ~community projects) · 2021-04-09T05:41:03.472Z · EA · GW

Hmm, I'm kind of skeptical. Suppose there's a group working on eliminating plastic straws. There's some value in doing that, but suppose that just the existence of the group takes attention away from more effective environmental interventions to the point that it does more harm than good regardless of what (positive) price you can buy its impact for. Would a market ensure that group gets no funding and does no work? Would you need to allow negative prices? Maybe within a market of eliminating plastic waste, they would go out of business since there are much more cost-effective approaches, but maybe eliminating plastic waste in general is a distraction from climate change, so that whole market shouldn't exist.

So you might get VCs who become expert in judging when early-stage projects are a good bet. Then people thinking of starting projects can somewhat outsource the question to the VCs by asking "could we get funding for this?"

It sounds like VCs would need to make these funding diversion externality judgements themselves, or it would be better if they could do them well.

Comment by MichaelStJules on Status update: Getting money out of politics and into charity · 2021-04-08T01:59:31.317Z · EA · GW

Do you have both registered Democrats and Republicans on your team? Or maybe set up a bipartisan and politically representative board. My guess is that could help build trust.

Comment by MichaelStJules on Forget replaceability? (for ~community projects) · 2021-04-08T01:52:03.778Z · EA · GW

Would impact markets be useful without people doing this kind of modeling? Would they be at risk of assuming away these externalities otherwise?

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T15:23:56.340Z · EA · GW

See under the section Decreased Consumption of Animal Products here: https://animalcharityevaluators.org/charity-review/sociedade-vegetariana-brasileira-svb/#c3

They report number of veg meals and program costs, but don't estimate what this comes out to for animals or divide animals spared by costs to get a cost-effectiveness ratio.

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T07:55:49.729Z · EA · GW

It's worth mentioning that Sociedade Vegetariana Brasileira is recommended by ACE, in large part for their Meatless Monday program.

(I'm an intern for ACE, but I'm not speaking for them.)

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T06:42:43.711Z · EA · GW

Looking at your model:

  1. It might be more accurate to break up meals by primary and secondary students and then sum them, since primary students probably eat a lot less than secondary students on average.
  2. You also divided the number of animals per person per day by 3, I assume for breakfast, lunch and dinner. Do people in the UK usually eat meat for breakfast? And do they eat more meat at dinner than lunch?

I don't think these would have huge effects on your final numbers, though.
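
Regarding point 1, here's a hypothetical sketch of what the disaggregation could look like; every number below is invented for illustration:

```python
# Hypothetical disaggregation by school level instead of a single average meal size.
primary_students, secondary_students = 100_000, 80_000  # hypothetical counts
school_meals_per_year = 190                             # roughly UK school days
animals_per_meal_primary = 0.02                         # hypothetical: smaller portions
animals_per_meal_secondary = 0.03                       # hypothetical: larger portions

animals_averted = school_meals_per_year * (
    primary_students * animals_per_meal_primary
    + secondary_students * animals_per_meal_secondary
)
print(animals_averted)  # 836,000.0 under these made-up inputs
```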

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T06:33:34.585Z · EA · GW

I'm surprised it took so few hours of work, about 40 hours according to your model. This is summing everyone's time spent, right? Impressive!

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T06:25:22.532Z · EA · GW

Volunteers then are asking local councillors and relevant Cabinet members to instate two vegetarian days per week across all primary schools. Our rationale for this is that if we asked for one day per week, we believe councils would try to negotiate down to less.

What would this look like? A veg day every two weeks, or just serving fewer meat meals once a week? This seems kind of weird to me to negotiate down to, but I guess not implausible.

Maybe it's worth experimenting with asking for 1 day vs 2 days?

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T06:11:01.233Z · EA · GW

This looks pretty promising! Thanks for sharing!

  • I think we brought this change happening forward by 2-6 years (the counterfactual of when it would have happened otherwise).

What was this based on? Is this assuming no other groups would run a similar campaign, or if they would have, since you ran this campaign, they'd do something else similarly impactful instead, e.g. meatless days at different institutions, or other institutional or corporate asks?

It seems like the schools most receptive and eager to run such programs would also be most likely to do it on their own without the push.

Also, it seems like you're modelling none of these schools as quitting these programs over that period. Some schools might have poor rollouts and then quit early (and making sure things go smoothly could be worth the extra work!), but I'd guess if a school makes it a few months without too many problems (e.g. lots of complaints, costs, logistical issues), it would be unlikely to quit before a full year passes, since they wouldn't go out of their way to revisit it again soon after the first evaluation at the end of a "trial period". They might revisit school lunch programs regularly on a schedule, though, e.g. yearly, and the program might get cut when that happens. This is just speculation by me; I don't know how it works in schools, let alone schools in the UK.

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T03:13:32.166Z · EA · GW

Do you think there's much risk they'll serve many more eggs in response? That could be one way the intervention ends up being worse for animals than you'd have expected, and possibly even bad overall.

Comment by MichaelStJules on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-06T03:10:06.054Z · EA · GW
  • are the kids eating the plant-based meals in the long run, or they bring meat sandwiches from home for that day, because they don't like the plant-based alternative; or they eat extra meat at home in the evening to compensate for the lack of meat at school that day;

(...)

I hope this will give you a sense of how you can go about it, because there is only one study on this type of intervention and if I remember correctly it was either not effective at all or barely effective. Hopefully, someone will find a link, so you can check how they went about the calculations. 

Here's some relevant research and writing I've come across, but they don't seem to estimate effects on meals at home:

  1. Forced Choice Restriction in Promoting Sustainable Food Consumption: Intended and Unintended Effects of the Mandatory Vegetarian Day in Helsinki Schools
    1. More skipped meals (when allowed, depending on the school level), more plate waste and eating less on vegetarian days in the short term (pretty significant effects, like 18%-40% for each); in the medium term, only the eating less on veg days and meal skipping persisted, but students also ate more vegetarian meals on other days. It seems reasonably likely these students would eat more meat at home on average to compensate, but I don't think this would cut the cost-effectiveness down by more than half.
  2. Nutritional quality and acceptability of a weekly vegetarian lunch in primary-school canteens in Ghent, Belgium: 'Thursday Veggie Day' | Public Health Nutrition | Cambridge Core
    1. Differences in plate waste were small enough to ignore.
  3. Meat Reduction by Force: The Case of “Meatless Monday” in the Norwegian Armed Forces
  4. Vox: A French city announced it would serve meatless school lunches. The backlash was swift.

I would assume they don't fully compensate on average and would do so less in the long run, but I don't know how much they do (or whether some eat even fewer animal products at home), and this is something worth looking further into. There is research on rebound effects for voluntary (including nudging) meat reduction interventions (the effects mostly seem small, from what I've seen), but we probably shouldn't generalize from it, given how differently people react to being forced to do something.

The parents would also have a say on whether or not the students would eat more meat after school to compensate. They might encourage it or discourage it.

Comment by MichaelStJules on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-06T02:37:16.462Z · EA · GW

I haven't read the newly published paper, but assuming the results are the same as in the precursor (about the extended very repugnant conclusion), this thread on NU and my other comment here may be of interest.

Comment by MichaelStJules on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-06T02:28:50.404Z · EA · GW

I discuss the precursor paper in this comment, and with respect to negative axiologies like negative utilitarianism with antimonyanthony in this thread.

Comment by MichaelStJules on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-06T02:15:44.287Z · EA · GW

Regarding his estimate of the difference in probability we can achieve promoting one state over its complement, it's worth mentioning that this does not consider the possibility of doing more harm than good, e.g. AI safety work advancing AGI more than it aligns it. With the very low (but in his view, extremely conservative) probabilities that he uses in his argument, the possibility of backfire effects outweighing them becomes more plausible.

Furthermore, it does not argue that we can effectively predict that any particular state is better than its complement, e.g. is extinction good or bad? How should we deal with moral uncertainty, especially around population ethics?

For these reasons, it may be difficult to justifiably identify robustly positive expected value longtermist interventions ahead of time, which the case for longtermism depends on. I mean this even with subjective probabilities, since such probabilities supporting longtermist interventions tend to be particularly poorly informed (largely due to the absence of good evidence) and so seem more prone to biases and whims, e.g. wishful thinking and the non-rational particulars of people's brains and priors. This is just deep uncertainty and moral cluelessness.

For what it's worth, I don't think it makes much sense for this paper to address such issues in detail given its current length already, although they seem worth mentioning.

(Also, I read the paper a while ago, so maybe it did discuss these issues and I missed it.)

Comment by MichaelStJules on Possible misconceptions about (strong) longtermism · 2021-04-06T01:30:17.801Z · EA · GW

I think the guidelines and previous syllabi/reading lists are/were biased against downside-focused views, practically pessimistic views, and views other than total symmetric and classical utilitarianism (which are used most to defend work against extinction) in general, as discussed in the corresponding sections of the post. This is both on the normative ethics side and discussion of how the future could be bad or extinction could be good. I discussed CLR's guidelines with Jonas Vollmer here. CLR's guidelines are here, and the guidelines endorsed by 80,000 Hours, CEA, CFAR, MIRI, Open Phil and particular influential EAs are here. (I don't know if these are current.)

On the normative ethics side, CLR is expected to discuss moral uncertainty and non-asymmetric views in particular, to undermine asymmetric views, while the other side is expected to discuss moral uncertainty and s-risks but not asymmetric views in particular, so this biases us away from asymmetric views, according to which the future may be bad and extinction may be good.

On discussion of how the future could be bad or extinction could be good, from CLR's guidelines:

Minimize the risk of readers coming away contemplating causing extinction, i.e., consider discussing practical ways to reduce s-risks instead of saying how the future could be bad

(...)

In general, we recommend writing about practical ways to reduce s-risk without mentioning how the future could be bad overall. We believe this will likely have similar positive results with fewer downsides because there are already many articles on theoretical questions.

(emphasis mine)

So, CLR associates are discouraged from arguing that the future could be bad and extinction could be good, biasing us against these hypotheses.

I'm not sure that the guidelines for CLR are actually bad overall, though, since I think the arguments for them are plausible, and I agree that people with pessimistic or downside-focused views should not seek to cause extinction, except possibly through civil discussion and outreach causing people to deprioritize work on preventing extinction. But the guidelines rule out ways of doing the latter, too.

I have my own (small) personal example related to normative ethics, too. The coverage of the asymmetry on this page, featured on 80,000 Hours' Key Ideas page, is pretty bad:

One issue with this is that it’s unclear why this asymmetry would exist.

The article does not cite any literature making positive cases for the asymmetry (although they discuss the repugnant conclusion as being a reason for person-affecting views). I cite some in this thread.

The bigger problem though is that this asymmetry conflicts with another common sense idea.

Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life, but if creating a happy life is neither good or bad, then we have to conclude that both options are neither good nor bad. This implies both options are equally good, which seems bizarre.

There are asymmetric views to which this argument does not apply, some published well before this page, e.g. this and this. Also, the conclusion may not be so bizarre if the lives are equally content/satisfied, in line with negative accounts of welfare (tranquilism/Buddhist axiology, antifrustrationism, negative utilitarianism, etc.).

Over a year ago, I criticized this for being unfair in the comments section of that page, linking to comments in my own EA Forum shortform and other literature with arguments for the asymmetry, and someone strong downvoted the comments in my shortform with a downvote strength of 7 and without any explanation. There was also already another comment criticizing the discussion of the asymmetry.

Comment by MichaelStJules on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T09:04:16.148Z · EA · GW

I think these would basically be just constant factors multiplying the whole impacts, assuming we remain near the peaks for far longer than we spend making significant moves towards the peaks.

The difference between intentionally optimizing for hedonistic welfare and a default with human-like minds could itself be on the scale of an existential catastrophe for a classical utilitarian, and more important than extinction, although it could also be far less tractable and not really an attractor state at all if it's not stable/persistent. This could also generalize to other theories of welfare, just with different targets.

Comment by MichaelStJules on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-04T08:36:55.607Z · EA · GW

There's also a talk. https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

When I reference work by GPI, I usually link to the page with both the talk and the pdf.

Comment by MichaelStJules on Forget replaceability? (for ~community projects) · 2021-04-01T05:31:43.474Z · EA · GW

I agree with giving more weight to upside when you can monitor results effectively and shut down if things don't go well, but you can actually model all of this explicitly. Maybe the model will be too imprecise to be very useful in many cases, but sensitivity analysis can help.

You can estimate the effects in the case where things go well and you scale up, and in the case where you shut down, including the effects of diverting donations from effective charities in each case, and weight the conditional expectations by the probabilities of scaling up and shutting down. If I recall correctly, this is basically what Charity Entrepreneurship has done, with shutdown within the first 1 or 2 years in the models I looked at. Shutting down minimizes costs and diverting of funding.
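
A minimal sketch of that expected-value structure, with all numbers hypothetical:

```python
p_scale, p_shutdown = 0.3, 0.7  # hypothetical probabilities of scaling up vs. shutting down
impact_if_scaled = 1_000.0      # net impact units, after subtracting funding diverted from other charities
impact_if_shutdown = -50.0      # early shutdown: small costs, little funding diverted

expected_impact = p_scale * impact_if_scaled + p_shutdown * impact_if_shutdown
print(expected_impact)  # 265.0 > 0, so worth starting under these made-up inputs
```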

You wouldn't start a charity with a negative expected impact after including all of these effects, including the effects of diverting funding from other charities.

Comment by MichaelStJules on Forget replaceability? (for ~community projects) · 2021-03-31T15:35:35.060Z · EA · GW

I think starting a new charity is an interesting special case. Sometimes, it might be worth it to start a charity that would be less cost-effective on average than an existing charity is cost-effective on the margin, if you think you can get funding from people who wouldn't have otherwise donated to cost-effective charities. However, the more the funding ends up coming from EA, the worse, and at some point it might be bad to start the charity at all. Charity Entrepreneurship (where I was an intern) has taken expectations about counterfactual donations into account in their cost-effectiveness models, or at least the ones I looked at.

In some cases, you might be taking government funding, and that funding might have been used well otherwise; I'm thinking of public health funding in developing countries, but I'm not that familiar with the area, so this might be wrong.

Comment by MichaelStJules on EA for Jews - Proposal and Request for Comment · 2021-03-29T02:30:07.024Z · EA · GW

Hmm, that's too bad. On Matnat Chaim, another way of looking at it is that up to half of the donors did not request Jewish recipients, although maybe those donors were less likely to be Orthodox Jews specifically. And even if few donors did not request someone of the same religion or ethnicity, there could still be something to learn from Matnat Chaim's approach.

This article paints an even more pessimistic picture at the time it was written: almost all of the donors (309 of 311) were Orthodox Jews, and all of them requested Jewish recipients. However, this was earlier in the organization's history, and maybe things have changed since then.

I suppose there are also some particularities about Orthodox Judaism and the permissibility of using organs from people who are dying but whose hearts have not yet stopped, which is apparently how most donations happen (the heart is kept artificially beating), so live organ donation might be the only practical permissible option for Orthodox Jews to donate and receive kidneys. This might partially explain why they are so much more likely to donate kidneys than average.

Comment by MichaelStJules on EA for Jews - Proposal and Request for Comment · 2021-03-28T20:21:43.337Z · EA · GW

According to this video, Orthodox Jews make up >15% of live altruistic kidney donors in the US, despite making up only 0.2% of the US population. The video directs to Renewal, an organization for kidney donations, and they have a page of endorsements by Rabbis.

In Israel, apparently the faith-based community organization Matnat Chaim has had a lot of success. Might be worth looking into what worked for them. See:

  1. https://www.israel21c.org/faith-based-nonprofit-triples-altruistic-kidney-donations/
  2. https://bmcnephrol.biomedcentral.com/articles/10.1186/s12882-018-0923-4

Comment by MichaelStJules on EA for Jews - Proposal and Request for Comment · 2021-03-28T16:10:25.644Z · EA · GW

I suspect the kind of outreach we'd want to do for secular Jews is basically the kind EA already does, or whatever would work best for atheists and secular people generally (perhaps keeping in mind average differences in political views, if any, but then it might be better to divide along political lines instead). Messages targeted towards religious Jews that don't appeal to the average atheist would not appeal to the average secular Jew, either, and may even be off-putting. Or maybe those who identify most with Jewish culture, regardless of religious views, would still find religious messaging appealing, so they'll self-select?

So, I'm not sure if it would be good to have a group with some explicitly religious public messaging trying to do outreach to secular Jews. It might be better to just have a group with religious messaging focusing on religious groups, and/or a group without religious messaging focusing on (cultural or ethnic) Jews more generally (or just secular Jews, if there will be one for religious Jews).

I don't say this with much familiarity with these communities, though.

Comment by MichaelStJules on [Job Ad] Help us make this Forum better · 2021-03-26T14:48:34.840Z · EA · GW

Might be better to clarify in the title that this is a job posting. I thought this was a request for feedback on the EA Forum.

Comment by MichaelStJules on Formalising the "Washing Out Hypothesis" · 2021-03-25T19:39:25.447Z · EA · GW

Another weakness of the model is that it doesn’t seem particularly appropriate for modelling some types of longtermist interventions. In particular, it’s not ideal for capturing the dynamics of interventions that aim to push the world into “attractor states” (states of the world such that once the world enters that state, it tends to stay in that state for an extremely long time). Since these are possibly the best candidates for interventions that manage to avoid the “washing out” trap, it would be useful to explore other models to understand these interventions in more depth.

It's worth noting that Tarsney's The epistemic challenge to longtermism, which you mention, deals explicitly with attractor states, and so I think better captures this stronger case for longtermism.

Comment by MichaelStJules on Against neutrality about creating happy lives · 2021-03-19T06:45:02.643Z · EA · GW

Ralph Bader, here, has a rather interesting and novel defence of it: https://homeweb.unifr.ch/BaderR/Pub/Asymmetry (R. Bader).pdf. Another strategy is to say you have no reason not to create the miserable child, but you have reason to end its life once it starts existing; this doesn't help with scenarios where you can't end the life.

Ya, this is interesting. Bader's approach is basically premised on the fact that you'd want to end the life of a miserable child, and you'd want to do it as soon as possible, and ensuring this as soon as possible (in theory, not in practice) basically looks like not bringing them into existence in the first place. You could do this with the amount of badness in general, too, e.g. intensity of experiences, as I described for suffering specifically in point 2 here, through to the end of the comment.

The second approach you mention seems like it would lead to dynamic inconsistency or a kind of money pump, which seems similar to Bader's point (from this comment):

if people decide to have a child they know will be forever miserable because they don't count the harm ahead of time, once the child is born (or the decision to have the child is made), the parent(s) may decide to euthanize (abort, etc.) them for the child's sake. And then, they could do this [have a child expected to be miserable and then euthanize/abort them] again and again and again, knowing they'll change their minds at each point, because at each point, although they might recognize the harm, they don't count it until after the decision is made.

The reason they might do this is that they recognize some benefit to having the child at all, and do not anticipate the need to euthanize/abort them until after the child "counts". Euthanizing/aborting the child could be costly and outweigh the initial benefits of having the child in the first place, so it seems best not to have the child in the first place. You might respond that not having the child is therefore in the parents' interests, given expectations about how they will act in the future, and this has nothing to do with the child's interests, so it can be handled with a symmetric person-affecting view. However, this is only true because they're predicting they will take the child's interests into account. So, they already are taking the child's interests into account when deciding whether or not to have them at all, just indirectly.

And I can see some person-affecting views approaching mere/benign addition and the repugnant conclusion similarly. You bring the extra people with marginally good lives into existence to get A+, since it's no worse than A (or better, by benign addition instead of mere addition), but then you're compelled to redistribute welfare after the fact, and this puts you in an outcome you'd find significantly worse than had you not brought the extra people into existence in the first place. You should predict that you will want to redistribute welfare after the fact when deciding whether or not to bring the extra people into existence at all.

Comment by MichaelStJules on Against neutrality about creating happy lives · 2021-03-19T05:37:51.624Z · EA · GW

I wrote some more about this here in reply to Jack.

Comment by MichaelStJules on Against neutrality about creating happy lives · 2021-03-19T05:21:37.157Z · EA · GW

I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven't actually read through the paper yet to understand it completely

I'd just check the definition of the Extended very repugnant conclusion (XVRC) on p. 19. Roughly, tiny changes in welfare (e.g. pin pricks, dust specks) to an appropriate base population can make up for the addition of any number of arbitrarily bad lives and the foregoing of any number of arbitrarily good lives. The base population depends on the magnitude of the change in welfare, and the bad and good lives.

The claim of the paper is that basically all theories so far have led to the XVRC.

It's possible to come up with theories that don't. Take Meacham's approach, and instead of using the sum of harms, use the maximum individual harm (and the counterpart relations should be defined to minimize the max harm in the world).

Or do something like this for pairwise comparisons only, and then extend using some kind of voting method, like beatpath, as discussed in Thomas's paper on the asymmetry.

This is similar to the view the animal rights ethicist Tom Regan described here:

Given that these conditions are fulfilled, the choice concerning who should be saved must be decided by what I term the harm principle. Space prevents me from explaining that principle fully here (see The Case, chapters 3 and 8, for my considered views). Suffice it to say that no one has a right to have his lesser harm count for more than the greater harm of another. Thus, if death would be a lesser harm for the dog than it would be for any of the human survivors—(and this is an assumption Singer does not dispute)—then the dog’s right not to be harmed would not be violated if he were cast overboard. In these perilous circumstances, assuming that no one’s right to be treated with respect has been part of their creation, the dog’s individual right not to be harmed must be weighed equitably against the same right of each of the individual human survivors.

To weigh these rights in this fashion is not to violate anyone’s right to be treated with respect; just the opposite is true, which is why numbers make no difference in such a case. Given, that is, that what we must do is weigh the harm faced by any one individual against the harm faced by each other individual, on an individual, not a group or collective basis, it then makes no difference how many individuals will each suffer a lesser, or who will each suffer a greater, harm. It would not be wrong to cast a million dogs overboard to save the four human survivors, assuming the lifeboat case were otherwise the same. But neither would it be wrong to cast a million humans overboard to save a canine survivor, if the harm death would be for the humans was, in each case, less than the harm death would be for the dog.

These approaches all sacrifice the independence of irrelevant alternatives or transitivity.

Another way to "avoid" it is to recognize gaps in welfare, so that the smallest change in welfare allowed (in one direction from a given level) is intuitively large. For example, maybe there's a lexical threshold for sufficiently intense suffering, and a gap in welfare just before it. Suffering may be bearable to different degrees, but some kinds may just be completely unbearable, and the threshold could be where it becomes completely unbearable; see some discussion of thresholds here. Then pushing people past the threshold is extremely bad, no matter where they start, whether that's right next to the threshold or from non-existence.

Or, maybe there's no gap, but just barely pushing people past that threshold is extremely bad anyway, and roughly as bad as bringing people into existence already past that threshold. I think a gap in welfare is functionally the same, but explains this better.

Comment by MichaelStJules on Name for the larger EA+adjacent ecosystem? · 2021-03-19T04:31:43.044Z · EA · GW

I think EA and Rationality are fine.

How would you define longtermism so that it isn't pretty much by definition EA? Like longtermism that isn't necessarily primarily motivated by consequences for people in the future? I think GPI may have explored some such views, but I think it's close enough to EA that we don't need a new term.

If we're including progress studies, why not international development, global health, AI safety, biosecurity, nuclear security, social movements, animal ethics, vegan studies, conflict and peace studies, transhumanism, futurism, philosophy of mind, etc.? Is progress studies more cause-neutral?

Comment by MichaelStJules on Against neutrality about creating happy lives · 2021-03-19T02:03:13.978Z · EA · GW

Plenty of theories avoid the RC and VRC, but this paper extends the VRC on p. 19. Basically, with arbitrarily small changes in welfare to a base population (which depends on the other factors), you can make up for the addition of an arbitrary number of arbitrarily bad lives and the foregoing of an arbitrary number of arbitrarily good lives.

For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)

Also, related to your edit, epsilon changes could flip a huge number of good or neutral lives in a base population to marginally bad lives.

Comment by MichaelStJules on Why do so few EAs and Rationalists have children? · 2021-03-19T01:35:44.561Z · EA · GW

I'm personally not sure, but this is what I hear from others in this thread and elsewhere. I'd be thinking the EA Community fund, university groups, running EA fellowships, GWWC, TLYCS, EA orgs to take volunteers/interns. Maybe we are close to saturation with the people who would be sympathetic to EA, and we just need to make more people at this point, but I don't think this is the case, since there's still room for more local groups.

I've been the primary organizer for the EA club at my university for a couple years, and I think a few of the members would not have been into EA at all or nearly as much without me (no one else would have run it if I didn't when I did, after the previous presidents left the city), but maybe they would have found their way into EA eventually anyway, and there's of course a risk of value drift. This is less work than raising a child (maybe 5-10 hours/week EDIT: or is that similar to raising a child or more? Once they're in school, it might take less work?), has no financial cost, and I made close friends doing it. I think starting a local group where there isn't one (or running an otherwise fairly inactive one) can get you at least one new fairly dedicated EA per year, but I'm not sure how many dedicated EA person-years that actually buys you.

How likely is the child of an EA to be an EA in the long run? And does it lead to value drift for the parents?