Posts

Replaceability with differing priorities 2020-03-08T06:59:09.710Z · score: 17 (9 votes)
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z · score: 94 (41 votes)
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z · score: 16 (5 votes)
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z · score: 16 (10 votes)
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z · score: 24 (13 votes)
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z · score: 6 (2 votes)
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z · score: 15 (6 votes)
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 19 (7 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 5 (3 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 13 (14 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 18 (14 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 10 (9 votes)

Comments

Comment by michaelstjules on What posts you are planning on writing? · 2020-03-27T22:38:19.741Z · score: 2 (1 votes) · EA · GW

Ok, makes sense!

In case you haven't seen it, this might be helpful to see what other critiques are out there already.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-03-27T05:54:44.067Z · score: 3 (2 votes) · EA · GW

I've been thinking more lately about how I should be thinking about causal effects for cost-effectiveness estimates, in order to clarify my own skepticism of more speculative causes, especially longtermist ones, and better understand how skeptical I ought to be. Maybe I'm far too skeptical. Maybe I just haven't come across a full model for causal effects that's convincing since I haven't been specifically looking. I've been referred to this in the past, and plan to get through it, since it might provide some missing pieces for the value of research.

Suppose I have two random variables, X and Y, and I want to know the causal effect of manipulating X on Y, if any.


1. If I'm confident there's no causal relationship between the two, say due to spatial separation, I assume there is no causal effect, and conditional on the manipulation of X to take value x (possibly random), Y is identical to its unmanipulated version, i.e. Y | do(X = x) = Y. (The notation do(X = x) is Pearl's do-calculus notation.)


2. If X could affect Y, but I know nothing else,

a. I might assume, based on symmetry (and chaos?) for Y, that Y | do(X = x) and Y are identical in distribution, but not necessarily literally equal as random variables. They might be slightly "shuffled" or permuted versions of each other (see symmetric decreasing rearrangements for specific examples of such a permutation). The difference in expected values is still 0. This is how I think about the effects of my everyday decisions, like going to the store, breathing at particular times, etc., on future populations. I might assume the same for variables that depend on Y.

b. Or, I might think that manipulating X just injects noise into Y, possibly while preserving some of its statistics, e.g. the mean or median. A simple case is just adding random symmetric noise with mean and median 0 to Y. However, whether or not a statistic is preserved with the extra noise might be sensitive to the scale on which Y is measured. For example, if Y is real-valued and f is strictly increasing, then for the median, Med(f(Y) | do(X = x)) = f(Med(Y | do(X = x))) = f(Med(Y)) = Med(f(Y)), but the same is not necessarily true for the expected value of f(Y), or for other variables that depend on Y.
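A quick simulation of case b (a sketch under my own assumptions; the variable names and numbers are mine): symmetric, median-0 noise preserves the median of Y, and a strictly increasing rescaling f preserves the median too, but the mean of f(Y) is not preserved.

```python
import math
import random
import statistics

random.seed(0)

# Y is a deterministic value here for simplicity; do(X = x) adds symmetric noise.
y = [1.0] * 100_000
noise = [random.gauss(0, 1) for _ in y]        # mean 0 and median 0
y_do = [yi + ni for yi, ni in zip(y, noise)]   # Y | do(X = x) under the noise model

f = math.exp                                   # a strictly increasing rescaling

print(statistics.median(y_do))                  # ~1.0: median of Y preserved
print(statistics.median([f(v) for v in y_do]))  # ~e (2.72): f commutes with the median
print(statistics.fmean([f(v) for v in y_do]))   # ~4.48 > e: mean of f(Y) NOT preserved
```

So whether "no effect in expectation" survives the noise depends on the scale Y is measured on, as in the text.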

c. Or, I might think that manipulating X makes Y closer to a "default" distribution over the possible values of Y, often but not always uninformed or uniform. This can shift the mean, median, etc., of Y. For example, Y could be the face of the coin I see on my desk, and X could be whether I flip the coin or not, with not flipping as the default. So, if I do flip the coin and hence manipulate X, this randomizes the value of Y, making my probability distribution for its value uniformly random instead of a known, deterministic value. You might think that some systems are the result of optimization and therefore fragile, so random interventions might return them to prior "defaults", e.g. naive systemic change or changes to ecosystems. This could be (like) regression to the mean.
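A toy version of the coin example (a minimal sketch; the function and names are my own): the intervention replaces a known, deterministic value of Y with the uniform "default" distribution.

```python
import random

random.seed(1)

def coin_face(do_flip: bool) -> str:
    """Y: the face of the coin on my desk; by default it sits showing heads."""
    return random.choice(["heads", "tails"]) if do_flip else "heads"

no_flip = [coin_face(False) for _ in range(10_000)]  # X at its default
flip = [coin_face(True) for _ in range(10_000)]      # do(X = flip)

print(no_flip.count("heads") / len(no_flip))  # 1.0: Y known and deterministic
print(flip.count("heads") / len(flip))        # ~0.5: Y randomized to the uniform default
```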

I'm not sure how to balance these three possibilities in general. If I do think the effects are symmetric, I might go with a or b or some combination of them. In particular asymmetric cases, I might also mix in c.


3. Suppose I have a plausible argument for how X could affect Y in a particular way, but no observations that can be used as suitable proxies, even very indirect ones, for counterfactuals with which to estimate the size of the effect. I lean towards dealing with this case as in 2, rather than just making assumptions about effect sizes without observations.

For example, someone might propose a causal path through which X affects Y, with a missing estimate of effect size at at least one step along the path, but an argument that this path should increase the value of Y. It is not enough to consider only one such path, since there may be many paths from X to Y, e.g. different considerations for how X could affect Y, and these would need to be combined. Some could have opposite effects. By 2, those other paths, when combined with the proposed causal path, reduce the effect of X on Y through the proposed path. The longer the proposed path, the more unknown alternate paths.

I think this is where I am now with speculative longtermist causes. Part of this may be my ignorance of the proposed causal paths and estimates of effect sizes, since I haven't looked too deeply at the justifications for these causes, but the dampening from unknown paths also applies when the effect sizes along a path are known, which is the next case.


4. Suppose I have a causal path through some other variable Z, X → Z → Y, so that X causes Z and Z causes Y, and I model both the effect of X on Z and the effect of Z on Y, based on observations. Should I just combine the two for the effect of X on Y? In general, not in the straightforward way. As in 3, there could be another causal path, X → W → Y (and it could be longer, instead of having just a single intermediate variable).

As in case 3, you can think of X → W → Y as dampening the effect of X → Z → Y, and with long proposed causal paths, we might expect the net effect to be small, consistent with the intuition that the predictable impacts on the far future decrease over time due to ignorance/noise and chaos, even though the actual impacts may compound due to chaos.
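A minimal linear structural-equation sketch of this dampening (the coefficients and variable names are hypothetical, not from the post): the modeled path X → Z → Y suggests a large effect, but an unmodeled path X → W → Y partly cancels it.

```python
# Linear SEM: Z = a*X, W = b*X, Y = c*Z + d*W, so the total effect of do(X)
# is a*c + b*d, not just the modeled a*c.
a, c = 2.0, 1.5    # X -> Z and Z -> Y: the modeled path, effect a*c = 3.0
b, d = 1.0, -2.5   # X -> W and W -> Y: an unmodeled path, effect b*d = -2.5

def y_do_x(x: float) -> float:
    z = a * x
    w = b * x
    return c * z + d * w

modeled_effect = a * c                     # 3.0 from the proposed path alone
total_effect = y_do_x(1.0) - y_do_x(0.0)   # 0.5: heavily dampened by the opposing path
print(modeled_effect, total_effect)
```

With more intermediate variables, each extra step opens more room for such unknown opposing paths, which is the intuition for small net long-run effects above.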


Maybe I'll write this up as a full post after I've thought more about it. I imagine there's been writing related to this, including in the EA and rationality communities.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-03-27T04:19:16.526Z · score: 4 (3 votes) · EA · GW

I think EA hasn't sufficiently explored the use of different types of empirical studies from which we can rigorously estimate causal effects, other than randomized controlled trials (or other experiments). This leaves us either relying heavily on subjective estimates of the magnitudes of causal effects based on weak evidence, anecdotes, expert opinion or basically guesses, or being skeptical of interventions whose cost-effectiveness estimates don't come from RCTs. I'd say I'm pretty skeptical, but not so skeptical that I think we need RCTs to conclude anything about the magnitudes of causal effects. There are methods to do causal inference from observational data.

I think this has led us to:

1. Underexploring the global health and development space. See John Halstead's and Hauke Hillebrandt's "Growth and the case against randomista development". I think GiveWell is starting to look beyond RCTs. There's probably already a lot of research out there they can look to.

2. Relying too much on guesses and poor studies in the effective animal advocacy space (especially in the past), for example overestimating the value of leafletting. I think things have improved a lot since then, and I thought the evidence presented in the work of Rethink Priorities, Charity Entrepreneurship and Founders Pledge on corporate campaigns was good enough to meet the bar for me to donate to support corporate campaigns specifically. Humane League Labs and some academics have done and are doing research to estimate causal effects from observational data that can inform EAA.
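As a concrete illustration of causal inference without an RCT (a self-contained sketch with simulated data, not referring to any specific EA study): if a confounder Z is observed, Pearl's backdoor adjustment, P(Y | do(X)) = Σ_z P(Y | X, z) P(z), recovers the causal effect from purely observational data.

```python
import random

random.seed(2)

# Observational data with confounding: Z drives both X and Y; X has NO effect on Y.
data = []
for _ in range(200_000):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)   # Z makes X more likely
    y = random.random() < (0.7 if z else 0.3)   # Z drives Y
    data.append((z, x, y))

def prob(pred):
    """Empirical probability of a predicate over the dataset."""
    return sum(pred(r) for r in data) / len(data)

def p_y_given(xv, zv):
    rows = [r for r in data if r[1] == xv and r[0] == zv]
    return sum(r[2] for r in rows) / len(rows)

# Naive observational contrast P(Y|X=1) - P(Y|X=0) is biased by the confounder:
naive = (prob(lambda r: r[1] and r[2]) / prob(lambda r: r[1])
         - prob(lambda r: (not r[1]) and r[2]) / prob(lambda r: not r[1]))

# Backdoor adjustment over Z recovers the true (zero) causal effect:
adjusted = sum((p_y_given(True, zv) - p_y_given(False, zv))
               * prob(lambda r, zv=zv: r[0] == zv)
               for zv in (True, False))

print(round(naive, 2))     # ~0.24: spurious "effect" from confounding
print(round(adjusted, 2))  # ~0.0: correctly recovered
```

This only works when the confounders are observed, which is exactly where careful study design (and skepticism) comes in.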

Comment by michaelstjules on MichaelStJules's Shortform · 2020-03-27T01:59:56.128Z · score: 2 (1 votes) · EA · GW

I also think that antifrustrationism in some sense overrides interests less than symmetric views. Consider the following two options for interests within one individual:

A. Interest 1 exists and is fully satisfied

B. Interest 1 exists and is not fully satisfied, and interest 2 exists and is (fully) satisfied.

A symmetric view would sometimes choose B, so that the creation of interests can take priority over interests that would exist regardless. In particular, the proposed benefit comes from satisfying an interest that would not have existed in the alternative, so it seems like we're overriding the interests the individual would have in A with a new interest, interest 2. For example, we make someone want something and satisfy that want, at the expense of their other interests.

On the other hand, consider:

A. Interest 1 exists and is partially unsatisfied

B. Interest 1 exists and is fully satisfied, and interest 2 exists and is partially unsatisfied.

In this case, antifrustrationism would sometimes choose A, so that the removal or avoidance of an otherwise unsatisfied interest can take priority over (further) satisfying an interest that would exist anyway. But in this case, if we choose A because of concerns for interest 2, at least interest 2 would exist in the alternative A, so the benefit comes from the avoidance of an interest that would have otherwise existed. In A, compared to B, I wouldn't say we're overriding interests, we're dealing with an interest, interest 2, that would have existed otherwise.

Some related writings, although not making the same point I am here:

Comment by michaelstjules on What posts you are planning on writing? · 2020-03-27T00:59:43.705Z · score: 3 (2 votes) · EA · GW

If you're using the formal mathematical definitions of the terms from this section of the 80,000 Hours article, then their product (before taking logs) has an interpretation in natural units, as good done / extra person or $, so if you reweight, this interpretation for the product will be lost. Are you interpreting the ITN terms differently?
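To spell out the unit-cancellation point (hypothetical numbers, using the 80,000 Hours definitions): the three factors chain so that intermediate units cancel, leaving good done per extra dollar; reweighting any factor breaks this.

```python
# 80,000 Hours ITN units:
#   Scale:         good done per % of the problem solved
#   Tractability:  % of the problem solved per % increase in resources
#   Neglectedness: % increase in resources per extra dollar
scale = 1000.0
tractability = 0.01
neglectedness = 0.0001

good_per_dollar = scale * tractability * neglectedness  # units cancel to good / $
print(good_per_dollar)  # ~0.001 good done per extra dollar

# Reweighting, e.g. squaring Neglectedness, leaves a quantity whose units
# no longer cancel to good / $, so the natural interpretation is lost.
reweighted = scale * tractability * neglectedness**2
```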

Comment by michaelstjules on Max_Daniel's Shortform · 2020-03-25T23:31:36.510Z · score: 4 (2 votes) · EA · GW
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.

Also, predicting that something will be pretty bad or will be a pandemic is not the same as saying it is now a pandemic. When did it become a pandemic according to the WHO's definition?

Expanding a quote I found on the wiki page, using the transcript here from 2009:

Dr Fukuda: An easy way to think about pandemic – and actually a way I have some times described in the past – is to say: a pandemic is a global outbreak. Then you might ask yourself: “What is a global outbreak”? Global outbreak means that we see both spread of the agent – and in this case we see this new A(H1N1) virus to most parts of the world – and then we see disease activities in addition to the spread of the virus. Right now, it would be fair to say that we have an evolving situation in which a new influenza virus is clearly spreading, but it has not reached all parts of the world and it has not established community activity in all parts of the world. It is quite possible that it will continue to spread and it will establish itself in many other countries and multiple regions, at which time it will be fair to call it a pandemic at that point. But right now, we are really in the early part of the evolution of the spread of this virus and we will see where it goes.

But see also "WHO says it no longer uses 'pandemic' category, but virus still emergency" from February 24, 2020.

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-25T06:06:18.185Z · score: 4 (2 votes) · EA · GW

I think some goals, like ending factory farming and addressing climate change, almost certainly depend on the animal protection movement and the climate/environmentalist movement, respectively, outside EA. It would be interesting to know if they depend on other movements/ideologies, too. To pick one: ending factory farming.

Comment by michaelstjules on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-21T17:26:00.959Z · score: 2 (1 votes) · EA · GW

This might be relevant:

"The world destruction argument" by Simon Knutsson, with appendix here.

Comment by michaelstjules on Virtual EA Global: News and updates from CEA · 2020-03-21T17:13:41.508Z · score: 4 (2 votes) · EA · GW

Here's the link: https://www.youtube.com/watch?v=EXbUgvlB0Zo

Also from here: https://forum.effectivealtruism.org/posts/rMLFZn7JzP4mXkCaq/ea-global-live-broadcast

Comment by michaelstjules on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-20T14:34:51.974Z · score: 2 (1 votes) · EA · GW

I had this in mind:

For example, very often more utility will be created if you abandon some people--even some entire groups of people--as lost causes and focus on creating more happy people instead.

A hedonium shockwave can involve a lot of killing, as you suggest.

Comment by michaelstjules on The Drowning Child and the Expanding Circle · 2020-03-19T21:45:40.388Z · score: 2 (1 votes) · EA · GW

In my case, an existential crisis drove me to altruism. I felt like my life had no purpose, my goals until then didn't matter and that it would be shameful to pursue whatever I felt like, ignoring the suffering of others. EA brought purpose to my life, and I'm happier for it.

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T19:20:24.543Z · score: 2 (1 votes) · EA · GW

Should that be ? Just taking logarithms.

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T16:04:21.250Z · score: 9 (4 votes) · EA · GW

Maybe change the title to "AMA: "The Oxford Handbook of Social Movements"" so it fits better on the front page of the EA Forum?

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T07:53:58.853Z · score: 2 (1 votes) · EA · GW

What kinds of goals do they cover?

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T07:53:06.254Z · score: 10 (3 votes) · EA · GW

I guess we can use the cause areas as goals:

1. Ending extreme poverty globally, by improving trade and foreign aid

2. Ending factory farming, by gaining popular support for legal and corporate reforms.

3. Avoiding extinction, from AI, pandemics or nuclear weapons

4. Addressing climate change, through carbon taxes or cap-and-trade, and clean tech

5. Improving the welfare of wild animals

etc.

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T06:55:25.691Z · score: 3 (2 votes) · EA · GW

Do you have references/numbers for these views you can include here?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T06:52:12.090Z · score: 2 (1 votes) · EA · GW

This math problem is relevant, although maybe the assumptions aren't realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.

EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that's basically the only other way out.

So, either:

1. We go extinct,

2. Our population increases without bound, or

3. We decrease extinction risk towards 0 in the long-run.

Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn't so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.
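A sketch of the underlying argument, as I understand the linked problem (the bound ε(N) is my notation, not from the problem statement):

```latex
% Assume: whenever the population is at most N, extinction risk in that
% generation is at least \varepsilon(N) > 0, independent of the time. Then
P(\text{survive $n$ generations with population} \le N)
  \le (1 - \varepsilon(N))^n \xrightarrow[n \to \infty]{} 0,
% so, almost surely, we either go extinct or the population exceeds every
% fixed bound N (i.e. grows without bound), unless the per-generation
% risk itself decays to 0 over time (option 3).
```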

Comment by michaelstjules on Opinion: Estimating Invertebrate Sentience · 2020-03-18T05:53:24.676Z · score: 2 (1 votes) · EA · GW

Some more evidence I find pretty compelling and might be surprising:

1. Fishes have friends.

2. Fishes get depressed.

3. Wild birds get PTSD.

Generally I find social behaviour and mental disorder pretty compelling.

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T05:06:30.986Z · score: 5 (3 votes) · EA · GW

What are the main benefits and drawbacks of being tied to other social movements, political ideologies or even political parties? How should we think about tradeoffs here?

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T05:04:07.502Z · score: 2 (1 votes) · EA · GW

What are the main factors in the success or failure of a movement?

How much popular but inactive support? How many activists?

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T05:01:25.220Z · score: 2 (1 votes) · EA · GW

How should we think about possible tradeoffs between a clear and narrow focus vs a broader message? Should we tie specific narrow demands (e.g. animal welfare reforms) to a broader message (veganism/animal rights/antispeciesism)?

Comment by michaelstjules on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T04:59:03.738Z · score: 7 (4 votes) · EA · GW

What level of demandingness is appropriate for a social movement? I'm wondering how demanding EA should be.

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-17T20:51:02.040Z · score: 4 (2 votes) · EA · GW
I guess your 28 and can thus still get into relatively different quantitative Finance.

26, but 2 years isn't a big difference. :)

But, how did you decide that it is best for you to dedicate your time to AAR? You could be working at GiveWell/Open Phil as a GR, or in OpenAI/MIRI in AI safety research (especially with your CS and Math background), you could also be working in ETG at the FAANG. Also 80khours no where seems to suggest that AAR of all the things are "high-impact-careers" nor does the EA survey say anything about it. In fact the survey talks about GR and AI safety.

So I'm choosing AAR over other causes due to my cause prioritization, which depends on both my ethical views (I'm suffering-focused) and empirical views (I have reservations about longtermist interventions, since there's little feedback, and I don't feel confident in any of their predictions and hence cost-effectiveness estimates). 80,000 Hours is very much pushing longtermism now. I'm more open to being convinced about suffering risks, specifically.

I'm leaning against a job consisting almost entirely of programming, since I came to not enjoy it that much, so I don't think I'd be motivated to work hard enough to make it to $200K/year in income. I like reading and doing research, though, so AI research and quantitative finance might still be good options, even if they involve programming.

And did you account for replaceability and other factors? If so, how did you arrive at these numbers?
(...)
So you hope to apply causal inference in AAR?

I didn't do any explicit calculations. The considerations about replaceability in my post and the discussion here have had me thinking that I should take ETG to donate to animal charities more seriously.

I think econometrics is not very replaceable in animal advocacy research now, and it could impact the grants made by OPP and animal welfare funds, as well as ACE's recommendations.

I'll try a rough comparison now. I think there's more than $20 million going around each year in effective animal advocacy, largely from OPP. I could donate ~1% of that ($200K) each year through ETG if I'm lucky. On the other hand, if I do research for which I'd be hard to replace and that leads to different prioritization of interventions, I could counterfactually shift a good chunk of that money to (possibly far) more cost-effective opportunities. I'd guess that corporate campaigns alone are taking >20% of EAA's resources; good intervention research (on corporate campaigns or other interventions) could increase or decrease that considerably. Currently, only a few people at Humane League Labs and a few (other) economists (basically studying the effects of reforms in California) have done or are doing this kind of econometrics and causal inference research. Maybe the equivalent of around 4 people are working on this full-time now. So my guess is that another person working on this could counterfactually shift >1% of EAA funding in expectation to opportunities twice as cost-effective. This seems to beat ETG donating $200K/year.

Lastly I want to thank you from the heart for taking your time and effot to respond to me. Appreciate it brother.

Happy to help! This was useful for me, too. :)

(Oh, besides economics, I'm also considering grad school in philosophy, perhaps for research on population ethics, suffering-focused views and consciousness.)

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T19:09:39.991Z · score: 17 (5 votes) · EA · GW

Which views, both ethical and empirical, do you think (should) lead most to prioritizing animals over other EA causes?

Comment by michaelstjules on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T19:07:42.261Z · score: 2 (1 votes) · EA · GW

Which views, both ethical and empirical, do you think (should) lead most to prioritizing global health and poverty over other causes?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T19:03:32.397Z · score: 2 (1 votes) · EA · GW

Do you lean more towards a preferential account of value, a hedonistic one, or something else?

How do you think tradeoffs between pleasure and suffering are best grounded according to a hedonistic view? It seems like there's no objective one-size-fits-all trade-off rate, since it seems like you could have different people have different preferences about the same quantities of pleasure and suffering in themselves.

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:56:53.113Z · score: 2 (1 votes) · EA · GW

What new evidence would cause the biggest shifts in your priorities?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:49:19.701Z · score: 3 (2 votes) · EA · GW

What are your views on the prioritization of extinction risks vs other longtermist interventions/causes?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:48:34.491Z · score: 4 (2 votes) · EA · GW

How robust do you think the case is for any specific longtermist intervention? E.g. do new considerations constantly affect your belief in their cost-effectiveness, and by how much?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:46:52.709Z · score: 6 (4 votes) · EA · GW

Which ethical views do you have non-negligible credence in and, if true, would substantially change what you think ought to be prioritized, and how? How much credence do you have in these views?

Comment by michaelstjules on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:45:50.580Z · score: 3 (2 votes) · EA · GW

Which interventions/causes do you think are best to support/work on according to views in which extra people with good or great lives not being born is not at all bad (or far outweighed by other considerations)? E.g. different person-affecting views, or the procreation asymmetry.

Comment by michaelstjules on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T18:33:43.688Z · score: 10 (4 votes) · EA · GW

What kind of research do you think could change GiveWell's recommendations most?

Comment by michaelstjules on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T18:32:47.779Z · score: 2 (1 votes) · EA · GW

How much longer do you expect GiveWell's current top charities to remain top charities? How many more new top charities do you expect to see (each year or over the next few years, say)?

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T18:17:39.810Z · score: 19 (7 votes) · EA · GW

What do you think of the state of evidence and research in the EAA movement now, and how it's changed over time?

Should EAA be using more sophisticated techniques in causal inference from observational data? Is there data out there we can use already for this? I have in mind Humane League Labs' upcoming study on cage-free campaigns and analyses of ballot initiatives in California. Can we do the same with attitudes or animal product consumption in response to other interventions, e.g. protests?

Do you think our allocation between narrower interventions and animal movement growth is right, both in terms of resources and research? Should we be going more in one direction over the other? I'm thinking there might be an analogy with Growth and the case against randomista development, by Hillebrandt and Halstead, with movement growth like economic growth, and narrow interventions like randomista (RCT-based) development.

I'm also worried about small sample sizes, as discussed in Gregory Lewis' post Reality is often underpowered.

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T17:43:32.438Z · score: 13 (5 votes) · EA · GW

What kind of research do you think the EAA movement is missing most? Is anyone in the EAA movement (including at ACE) working on it now? What's the most important EAA research that you're not aware of really anyone working on?

What kind of research do you think could change ACE's recommendations most?

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T17:40:17.962Z · score: 9 (4 votes) · EA · GW

Which interventions do you think are the best now? Which less well-studied or new interventions do you think could be competitive with them?

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T17:37:50.813Z · score: 11 (5 votes) · EA · GW

How competitive are the different roles at ACE (and other EAA orgs, if you have an idea), including research, the different internships, etc.? How replaceable do you think people are?

Comment by michaelstjules on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T17:34:22.709Z · score: 11 (5 votes) · EA · GW

What do you think are the main bottlenecks and limiting factors in the EAA movement?

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-17T09:20:37.241Z · score: 6 (3 votes) · EA · GW
How did you end up choosing to go to DarwinAI? Why not something else like GR in GPR or FAAANG?

I'd say it was kind of decided for me, since those other options were ruled out at the time. I applied to internships at some EA orgs, but didn't have any luck. Then, I did a Master's in computational math. Then, I started working part-time at a machine learning lab at the university while I looked for full-time work. I applied to AI internships at the big tech companies, but didn't have any luck. I got my job at DarwinAI because I was working for two of its cofounders at the lab. I had no industry experience before that.

I'm currently trying to transition to effective animal advocacy research, reading more research, offering to review research before publication, applying to internships and positions at the orgs, and studying more economics/stats, one of the bottlenecks discussed here, with quantitative finance as a second choice, and back to deep learning in the industry as my third. I feel that EA orgs have been a bit weak on causal inference (from observational data), which falls under econometrics/stats.

Comment by michaelstjules on Replaceability with differing priorities · 2020-03-17T07:38:07.238Z · score: 6 (3 votes) · EA · GW

Thanks for the suggestion! I've updated the summary to include bullet points, and I'll try to remember to do this in the future, too.

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T15:33:27.671Z · score: 5 (3 votes) · EA · GW

Would OPP and the EA Funds grant more funding overall if new EA orgs were started, or do they distribute a fixed amount of funding? New EA orgs would create more positions for EAs to fill.

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T15:28:41.658Z · score: 3 (2 votes) · EA · GW

From this older article:

This may apply, for example, to taking a job with Givewell, who likely follow a process more akin to ‘threshold hiring’. In this case, it seems likely that taking this job may increase the number of overall jobs by close to 1.

Not very good evidence, though, without word directly from GiveWell.

More on threshold hiring here, but no EA-specific examples.

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T15:03:14.900Z · score: 3 (2 votes) · EA · GW
And the 80khours article you cited on replaceability seems to be so off with its suggestions. 80khours are suggesting that "Often if you turn down a skilled job, the role simply won't be filled at all because there's no suitable substitute available". Whilst the only evidence I can find says completely otherwise: Carricks take on AI S&P, Peter representing RC, Open Phil's hiring round, Jon Behar's comments, EAF's hiring round.

New charities will sometimes be started to make more EA org positions, and they wouldn't get far if they didn't have people who were the right fit for them. Rethink Priorities and Charity Entrepreneurship are relatively new (although very funding-constrained, and this might be the bottleneck for their hiring and the bottleneck for starting new charities like them). Charity Entrepreneurship is starting many more EA orgs with their incubation program (incubated charities here). Maybe worth reaching out to them to see what their applicant pool is like?

I think there are also specific talent bottlenecks, see [1], [2], [3]. Actually, this last one comes from Animal Advocacy Careers, a charity incubated by Charity Entrepreneurship to meet the effective animal advocacy talent bottlenecks.

Btw, I think you have the wrong link for Carricks.

Comment by michaelstjules on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T00:59:53.709Z · score: 5 (3 votes) · EA · GW

I suppose I'm not directly answering your question, but I think it might be pretty hard to answer well, if you want to try to account for replaceability properly, because many people can end up in different positions because of you taking or not taking a job at an EA org, and it wouldn't be easy to track them. I doubt anyone has tried to. See this and my recent post.

Comment by michaelstjules on Efforts to develop the caring emotion and intellectual virtues in people rank where on EA's priority list? · 2020-03-14T22:21:55.093Z · score: 2 (1 votes) · EA · GW

I think schools already try to do 2, but perhaps not well. I would like to see mandatory courses on critical thinking and logic in primary and secondary school, rather than just sneaking it into English and humanities courses. Maybe formal/symbolic logic could be part of the math curriculum, and it could be covered each year. Switching to Socratic method-style teaching might help, but I think it's harder to do.

Getting 1, including for nonhuman animals, might require a public shift in attitudes towards animals first. Humane education might be what you're looking for.

Schools try to do 1 for humans to some extent, but also perhaps not well. They have religion classes, and history and fiction can foster empathy. I went through a Catholic school system, although I was never really religious. Do they teach ethics in public schools?

I'm not optimistic that we could have much influence over these, though.

Comment by michaelstjules on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-14T01:36:25.600Z · score: 25 (9 votes) · EA · GW
Would you be interested in having a section on the website that is basically "Ways to be an EA while not being a utilitarian?" I say this as someone who is very committed to EA but very against utilitarianism. Fair enough if the answer is no, but if the answer is yes, I'd be happy to help out with drafting the section.

I think that would be better on another website, one specifically dedicated to EA and not utilitarianism. Possibly Utilitarianism.net could link to it. Maybe an article for https://www.effectivealtruism.org/?

On the nitpick, I agree that the wording is misleading. Bringing people into existence is not usually understood to "improve their welfare", since someone who doesn't exist has no welfare (not even welfare 0). It's probably better to say "benefit", although it's also a question for philosophy whether you can benefit someone by bringing them into existence.

Also, even "improve" isn't quite right to me if we're being person-affecting, since it suggests their welfare will be higher than before, but we only mean higher than otherwise.

Comment by michaelstjules on Quantifying lives saved by individual actions against COVID-19 · 2020-03-14T01:25:27.955Z · score: 4 (3 votes) · EA · GW

I don't think it's unlikely at all; I don't think that $100/day would be used for something nearly as cost-effective as bednets if it weren't being spent on healthcare. Hospitals and governments will spend what it takes to handle the coronavirus in patients, up to a pretty high limit per patient.

I think a more important concern might be limited medical resources and triaging, but that should go into the cost-effectiveness analysis model, and it's not something I should speculate about without expertise.

Comment by michaelstjules on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-10T15:53:37.817Z · score: 4 (3 votes) · EA · GW

I think using expected values is just one possible decision procedure, one that doesn't actually follow from utilitarianism and isn't the same thing as using utilitarianism as a decision procedure. To use utilitarianism as a decision procedure, you'd need to know the actual consequences of your actions, not just a distribution or the expected consequences.

Comment by michaelstjules on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-10T15:06:00.804Z · score: 4 (3 votes) · EA · GW

Classical utilitarianism, as developed by Bentham, was anti-speciesist, although some precursors and some theories that followed may not have been. Bentham already made the argument to include nonhuman animals in the first major work on utilitarianism:

Other animals, which, on account of their interests having been neglected by the insensibility of the ancient jurists, stand degraded into the class of things. ... The day has been, I grieve to say in many places it is not yet past, in which the greater part of the species, under the denomination of slaves, have been treated ... upon the same footing as ... animals are still. The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps, the faculty of discourse? ... the question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being? ... The time will come when humanity will extend its mantle over everything which breathes...

Mill distinguished between higher and lower pleasures to avoid the charge that utilitarianism is "philosophy for swine", but still wrote, from that Wiki page section you cite,

Granted that any practice causes more pain to animals than it gives pleasure to man; is that practice moral or immoral? And if, exactly in proportion as human beings raise their heads out of the slough of selfishness, they do not with one voice answer 'immoral', let the morality of the principle of utility be for ever condemned.

The section also doesn't actually mention any theories for "Humans alone".

I'd also say that utilitarianism is often grounded with a theory of utility, in such a way that anything capable of having utility in that way counts. So, there's no legwork to do; it just follows immediately that animals count as long as they're capable of having that kind of utility. By default, utilitarianism is "non-speciesist", although the theory of utility and utilitarianism might apply differently roughly according to species, e.g. if only higher pleasures or rational preferences matter, and if nonhuman animals can't have these, this isn't "speciesist".

Comment by michaelstjules on What are the key ongoing debates in EA? · 2020-03-08T18:59:19.695Z · score: 24 (13 votes) · EA · GW

Normative ethics, especially population ethics, as well as the case for longtermism (which is somewhere between normative and applied ethics, I guess). Even the Global Priorities Institute has research defending asymmetries and against longtermism. Also, hedonism vs preference satisfaction or other values, and the complexity of value.

Consciousness and philosophy of mind, for example on functionalism/computationalism and higher-order theories. This could have important implications for nonhuman animals and artificial sentience. I'm not sure how much debate there is these days, though.