Comment by michaelplant on EA Survey 2018 Series: Cause Selections · 2019-01-19T18:29:53.126Z · score: 3 (2 votes) · EA · GW

Roger. Points taken.

Comment by michaelplant on EA Survey 2018 Series: Cause Selections · 2019-01-19T16:35:39.850Z · score: 5 (3 votes) · EA · GW

Another thing I'd be interested in seeing would be the percentage changes in support for causes year-on-year, as that would indicate what the internal dynamics of the movement are. I'm (at least) partly motivated to see this because mental health, which I've written quite a lot on, may be the smallest top-priority cause, and this is also the first time it's snuck onto the list.

Comment by michaelplant on EA Survey 2018 Series: Cause Selections · 2019-01-19T16:24:32.415Z · score: 5 (3 votes) · EA · GW

Thanks for this. Were there any causes you considered adding beyond those stated? Those seem like the main causes EAs support, but it would be nice to include 'minor' ones to see what the community feeling is about those, e.g. wild animal suffering, education, social justice, immigration reform, etc.

Comment by michaelplant on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-15T13:18:24.540Z · score: 2 (1 votes) · EA · GW
Yes, if the chance of death each year is constant it turns out that remaining life expectancy is around 1/chance of death

Can you explain why this is the case? Sorry if this is obvious, but I'm not getting it and can't immediately see how to do the maths.
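Having thought about it a bit more, is the idea the following recursion? With a constant annual chance of death p, you either die this year or survive it and face the same prospect again, so the expected number of remaining years E satisfies

\[
E = (1-p)(1+E) \;\Longrightarrow\; E = \frac{1-p}{p} \approx \frac{1}{p} \quad \text{for small } p,
\]

which for p = 1/1000 gives E = 999, i.e. around 1/chance of death. But do correct me if that's not the derivation you have in mind.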

On population ethics, for totalists it then seems the dominating concern will be how valuable it is to have a population with longer lives, which puts the emphasis in a different place from the value of keeping particular individuals alive longer.

Comment by michaelplant on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-12T12:54:51.186Z · score: 4 (2 votes) · EA · GW

Thanks for writing this.

Can you explain in a bit more detail, and without complicated formalisation, why life expectancy after LEV is 1000? I note life expectancy is 1000 and the chance of death in 1 year is 1/1000. Is that a coincidence, or is life expectancy post-LEV just 1/annual chance of death?

I know you've said you're going to cover this later, but I want to flag how sensitive this is to population ethics. On totalism (the value of the outcome is the sum total of well-being of everyone who will ever live), it's good to create lives, so it's not necessarily a problem that there's a higher 'turnover' of lives, i.e. people die and other people replace them. Totalists will want to know how longevity affects the long run for everyone, not just those who get to live longer. By contrast, if you're a person-affecting deprivationist (there is no value in creating new lives, but for the lives that count, the badness of death is the amount of well-being the person would have had had they lived), life extension looks super important!

Comment by michaelplant on What Is Effective Altruism? · 2019-01-09T16:13:12.664Z · score: 11 (8 votes) · EA · GW

Relevant to this, in the following article MacAskill gives this account of what EA is:

What Is Effective Altruism?
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible, and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good. There are some defining characteristics of the effective altruist research project. The project is:
Maximizing. The point of the project is to try to do as much good as possible.
Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
Impartial. Everyone's welfare is to count equally.

Also, you've accidentally posted the same thing three times, if you hadn't noticed already.

Comment by michaelplant on Cause profile: mental health · 2019-01-09T14:18:25.486Z · score: 2 (1 votes) · EA · GW

Hello Matthew and thanks for your points. I don't think it counts as bias in favour of X if you chose to do X because you thought X was best!

On the first, I haven't looked, but I wouldn't consider that to be the right evidence. It seems pretty plausible people could be below hedonic/satisfaction neutrality and not want to kill themselves; I'd expect our evolutionary instinct is to keep living even in such circumstances - those who committed suicide easily would have their genes removed from the pool.

On the second, I haven't, but I'd welcome someone doing that research.

On the third, I am familiar with that stuff and am in regular communication with the economists who write the big reports, e.g. the World Happiness Report. However, given there are people working on the policy problem, to which I don't have much to add, but there isn't really anyone thinking about the EA-type questions of what the best things are for individuals to do with their time and money, I tend to think I do more by contributing to this latter issue.

Comment by michaelplant on The Global Priorities of the Copenhagen Consensus · 2019-01-07T23:42:55.186Z · score: 11 (9 votes) · EA · GW

I couldn't find a single mention of mental health. If someone finds something from them on this, please let me know!

Comment by michaelplant on Cause profile: mental health · 2019-01-07T23:17:50.190Z · score: 2 (1 votes) · EA · GW
On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness

Are you using the two 'neglectedness' words differently? Why would you calculate X if you already knew X in general?

This being said, when we don't know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason that AI is a more promising cause to work on

I think that's right. One method is to use scale and/or neglectedness as (weak) independent heuristics for cost-effectiveness if you haven't calculated, or can't calculate, cost-effectiveness. It's unclear how to use tractability as a heuristic without implicitly factoring in information about neglectedness or scale. Another (the other?) method, then, is to directly assess cost-effectiveness. Once you've done that, you've incorporated the ITN stuff and it would be double-counting to appeal to them again ("I know X is more cost-effective than Y, but Y is more neglected" etc.).

Comment by michaelplant on Cause profile: mental health · 2019-01-07T16:49:30.722Z · score: 3 (2 votes) · EA · GW

Thanks for all these great points (Derek sent these to me privately and I suggested it would be valuable for him to share them here for other interested parties). My brief replies, in order, to those comments that weren't just informative:

1. fair cop. I think I was lazily using those as I first compiled these numbers back in 2015 (at the start of my PhD).

2. agree it's unclear what these breakthrough drugs imply for EA

5. it makes sense to compare to GW because that's who our audience is. People who already think GW is irrelevant and focus on e.g. far future are unlikely to be interested in the analysis here.

6. yes, there are probably flaws in the SM analysis. I look forward to mine being made obsolete in due course. I note that my points on negative spillovers should cause us to downgrade the effectiveness of anti-poverty charities.

8. agree, but this applies to mental health interventions too: their effects could also be larger if we take spillovers into account, e.g. reduced strain on the family members who care for them.

9. As I'm sympathetic to person-affecting views, I'm not too concerned about the long term anyway. Even if I were a long-termist, the problem with including indirect effects is that it tends to make the analysis incredibly 'hand-wavey' ("ah, saving lives speeds up growth, which is bad for climate change", etc.). I think it makes sense to calculate what can easily be calculated first. If you can't look anywhere else, at least look under the lamppost.

10. Probably correct. A better analysis would factor in how the LS of AMF recipients would change over their lives (presumably upwards, as societal conditions improve).

11. I agree LS is not the ideal thing. If we had affect scores, I would say we should use those, but we don't! ("slaves to the data" etc.)

12. I also agree moving to affect would make mental health score better than poverty. I left that out because I thought the analysis was complicated enough already.

Comment by michaelplant on Cause profile: mental health · 2019-01-07T14:03:06.375Z · score: 16 (5 votes) · EA · GW

Hello Sanjay. I didn't do this because I think comparing causes by numerically assigning scores to I, N and T is of illusory helpfulness, and I wish we would all stop doing it(!). What we care about is knowing the expected value of the dollar you would donate (or, more complicatedly, the hour you would spend). I've produced some numbers by doing cost-effectiveness estimates of a charity you could donate to. Given that's what we ultimately want, it's unclear what the positive value is of representing things via the INT approach. I have a thesis chapter/EA forum post forthcoming on this topic, but I'll make a couple of points here.

First, note that on the 80k framework, INT literally is a cost-effectiveness calculation and not, as Will uses it in Doing Good Better, three independent heuristics which somehow combine to give a rough idea of cost-effectiveness. Indeed, it's more confusing to do expected value the way 80k suggests than how I did it, as their method requires redundant and arbitrary steps. 80k specify neglectedness as "% increase in resources/extra person or dollar". It is later defined as "How many people, or dollars, are currently being dedicated to solving the problem?" But deciding what counts as dollars being dedicated to "solving the problem" is arbitrary, hence there cannot be a precise answer to this question.

Further, if I wanted to put mental health in 80k's framework, note that in addition to establishing an arbitrary neglectedness score, I'd have to ascertain solvability - found by asking "If we doubled direct effort on this problem, what fraction of the remaining problem would we expect to solve?" How would I do that? I'd have to work out the total size of the problem, then assess how much of it would be solved by some given intervention. To do that, I'd need to work out the cost-effectiveness of a mental health intervention. But I've already done that, so I can only calculate the tractability/solvability number once I already have the information that is ultimately of interest to me.

I don't see how it's an improvement over the formula cost-effectiveness = effect/cost to say cost-effectiveness = (effect / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / cost). As demonstrated, it's (at least sometimes) harder to calculate cost-effectiveness this latter way. If we really think scale is important to keep in mind, we could have a two-factor model: scale (value of solving the whole problem) and solvability* (% of problem solved/cost).
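To make the cancellation concrete, here is a toy check - a sketch with entirely made-up numbers, not anyone's real estimates - that the three-factor product telescopes back to effect/cost:

```python
# Toy check (all numbers invented for illustration) that the three-factor
# decomposition telescopes back to plain cost-effectiveness = effect / cost.

effect = 50.0          # good done, e.g. units of well-being gained
pct_solved = 0.001     # fraction of the total problem this effect represents
pct_resources = 0.01   # fractional increase in resources the spending represents
cost = 10_000.0        # dollars spent

scale = effect / pct_solved               # good done per whole problem solved
solvability = pct_solved / pct_resources  # problem solved per increase in resources
neglectedness = pct_resources / cost      # increase in resources per dollar

# The intermediate terms cancel, so the product is just effect / cost.
assert abs(scale * solvability * neglectedness - effect / cost) < 1e-9
print(scale * solvability * neglectedness)  # ~0.005, same as 50 / 10,000
```

Whatever numbers you feed in, the intermediate '% of problem solved' and '% increase in resources' terms drop out, which is the sense in which the extra steps are redundant.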

Second, I don't see what the point is of taking one ranking of scale/neglectedness/tractability for each of two causes and comparing those. What does it tell us that X is more neglected/tractable/large than Y, if that is all we know about X and Y? By itself, it literally tells us nothing about the expected value of marginal resources to X vs Y. We only understand that once we've thought about how scale, neglectedness and tractability combine to give us cost-effectiveness. To bring this out, imagine you and I are having a conversation.

Sanjay: "mental health is more neglected than poverty".

Michael: "and? That doesn't tell me which one has higher expected value".

S: "hmm. Poverty is bigger".

M: "again? So what? That doesn't tell me which one has higher expected value either".

S: "Okay, well, poverty is more tractable than mental health".

M: "and? So what? In fact, what do you mean by 'tractable'? if you mean 'has higher expected value', then you're just saying poverty is better than mental health health and I don't know how you factored in neglectedness and size when assessing tractability. If by tractability you mean 'if we doubled direct effort on this problem by, what fraction of the remaining problem would we expect to solve?' then I only know which cause you think has higher expected value when you give me precise scores of scale, neglectedness and tractability and tell me how you're combining those scores to give expected value"

S: Michael, why are you always so difficult? [curtain falls]

By analogy, if we want to know the speed of some object (speed = distance/time), knowing just the distance it has traveled, or just the time it took, gives us absolutely no insight into its speed. Do objects which travel further tend to travel faster? Always travel faster?

Third, I don't think it even makes sense to talk about comparing causes as opposed to comparing interventions. What we're really doing when we do cause prioritisation is saying "there are problems of types A, B and C. I'm going to find the best intervention I can that tackles each of A, B and C. Then I'm going to compare the best item I've found in each 'bucket'." Given we can't give money to poverty (the abstract noun), but we can give to interventions that reduce poverty, we should just think in terms of interventions instead of causes.

Comment by michaelplant on Cause profile: mental health · 2019-01-02T20:00:25.987Z · score: 7 (5 votes) · EA · GW

Hello Joey,

I may have misunderstood your first comment, but if I had estimated the effects for GiveDirectly it would have been (on my best guess) less effective than the study showed. From the 2016 paper I inferred GD increased life satisfaction (LS) by 0.3/10 per person. In the Origins of Happiness, Clark et al. find a doubling of income increases LS by 0.12/10. IIRC (and I may not), the $750 transfer from GD is less than a doubling of household income, so the predicted gain would have been below 0.12/10. Hence the estimated effects would have been approx. 3 times smaller for GD (0.3/0.12 = 2.5, and more given the transfer is less than a doubling).

Regarding StrongMinds' treatment, Reay et al. (2012) have a 2-year study of how much of the benefits are retained for interpersonal group therapy (which is what StrongMinds delivers). I agree it is more appropriate to use this than the Wiles et al. (2016) model - which I interpret as a constant effect for 4 years and then nothing thereafter - as Wiles et al. is based on UK CBT, I think delivered individually. To account for this, in my spreadsheet, I do two estimates: one where I assume the treatment effect is constant and lasts only 4 years, another where 75% of the benefits are retained annually. This latter estimation method is taken from Halstead and Snowden's Founders Pledge report on mental health, where they also assess StrongMinds. It turns out the estimates give practically identical results so, in this case, the cost-effectiveness is not sensitive to how the duration of the effect is modelled.
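If I'm right that the two coincide, it may be no accident: assuming undiscounted, constant-sized annual benefits (a simplification that may not exactly match the spreadsheet), a 75% annual retention rate sums to the same four 'effect-years' as a constant effect lasting 4 years:

\[
\sum_{t=0}^{\infty} 0.75^{t} = \frac{1}{1-0.75} = 4.
\]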

I agree with you that the best current mental health charity is probably far less cost-effective, relative to whatever the best possible intervention is, than the best current development or physical health charities, on the grounds that more effort has been put into the latter. (As you and I have discussed) I am optimistic about finding/developing even better ways to provide mental health treatments. I didn't stress this point on the grounds that the reader was probably more interested in current interventions than hypothetical interventions, but that could have been an error on my part.

Comment by michaelplant on Cause profile: mental health · 2019-01-02T12:53:29.712Z · score: 11 (5 votes) · EA · GW

First, it's unclear how many EAs are totalists or long-termists. I suppose this post is addressed at those who support global poverty and development, which is (from surveys) the majority of EAs. To support global poverty and development you could - this is not an exhaustive list - (a) be a person-affector, (b) be a totalist who is sceptical about the effectiveness of far-future stuff, or (c) be a long-termist who thinks near-term interventions have strong long-term impacts, such that they are cost-competitive with X-risk.

Second, on why I'm sympathetic to person-affecting views, the short answer is because I find the following two claims highly plausible.

First, the person-affecting restriction: an outcome can only be better or worse if it is better or worse for someone. (In Reasons and Persons, Parfit attributes such a view to Narveson, explaining: "On [Narveson's] view, it is not good that people exist because their lives contain happiness. Rather, happiness is good because it is good for people.")

Second, non-comparativism about existence: non-existence is neither better than, worse than, nor equally good as, existence for someone. Why believe this? For the personal betterness relation to hold (i.e. for an outcome to be better for someone), the person needs to exist in both of those outcomes. If the person only exists in one outcome, there is no comparison to be made. By analogy, to say "X is taller than Y", X and Y need to have a height. If X or Y lacks the property of height, they cannot stand in the relationship of "being taller than". It's confused to say "the Eiffel Tower is taller than nothing": "nothing" lacks a height (rather than has a height of zero), thus the Eiffel Tower's height is incomparable to the height of "nothing". If we're concerned with the personal betterness relation, we are comparing two states of the person (i.e. the person needs to exist and have some good-, bad-, or neutral-making properties). A non-existent entity cannot stand in the personal betterness relation with an existing person. There is no sensible comparison to be made; one cannot compare something with nothing.

Taken together, these two statements entail that creating new lives is incomparable in value to not creating them.

That's about the quickest answer I can give.

Comment by michaelplant on Cause profile: mental health · 2019-01-02T11:54:59.582Z · score: 8 (3 votes) · EA · GW

Yes, I had a few paragraphs on the potential indirect effects of treating mental health but decided to cut them at the last moment as (a) I wasn't sure how many people would be interested in them and (b) the whole analysis is just extremely handwavey.

It's possible that someone could think focusing on mental health/happiness now could have very long-run effects and would be justified primarily by the impact it would have on future people. This also applies to bednets, economic development etc., and it seems very hard to sensibly compare these things. My hunch is that if someone were taking this angle they would do more good by trying to get governments to measure policies by their SWB impact, rather than by treating more people for depression through developing-world micro-interventions.

Cause profile: mental health

2018-12-31T12:09:02.026Z · score: 73 (37 votes)
Comment by michaelplant on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-22T14:42:10.254Z · score: 21 (18 votes) · EA · GW

I want to note a tension in this article. It was about being welcoming by, roughly, not assuming all the people you speak to are from a certain group. However, while 'conservative' is a general term, the conservatives under discussion were clearly conservatives in the USA; in the UK, from where I write, there isn't much in the way of creationists, pro-lifers, or Trump supporters. As such, I would like to suggest that one way effective altruists can be welcoming is by not presuming everyone interested in effective altruism is a US citizen.

Comment by michaelplant on The asymmetry and the far future · 2018-12-22T14:31:52.880Z · score: 2 (1 votes) · EA · GW

Found this post again after many months. Don't those who endorse the asymmetry tend to think neutrality is 'greedy' in the sense that if you add a mix of happy and unhappy lives, such that future total welfare is positive, then the outcome has zero value? Your approach is the 'non-greedy' one where happy lives never contribute towards outcome value and unhappy lives always count against. On the greedy approach, I think it follows we have no reason to worry about the future unless it's negative. I think Bader supports something like the greedy version. I'm somewhat unsure on this.

Comment by michaelplant on Rethink Priorities Plans for 2019 · 2018-12-22T11:41:22.971Z · score: 12 (5 votes) · EA · GW

Very pleased to see this write-up and hear about the many valuable things Rethink Priorities is working on. I'll just comment on one part. Seeing as you said you wanted to look into mental health and metrics for well-being, I should mention previous and current work done in this area.

Last month I put up a (lengthy) post, Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare, which discusses such issues in some detail. I'm now in the process of, with some others, starting an organisation to look into this, and would be very pleased to work with Rethink on this (particularly to avoid an unnecessary duplication of effort!). Sindy Li discussed issues with DALYs back in March 2017. I raised those same concerns, claiming EA was overlooking mental health and happiness (indeed referencing the same study, Dolan and Metcalfe 2012), in June 2016.

Comment by michaelplant on Existential risk as common cause · 2018-12-06T09:51:24.697Z · score: 7 (7 votes) · EA · GW

I was surprised to see person-affecting views weren't on your list of exceptions; then I saw they were in the uncertainties section. FWIW, taking Gregory Lewis' model at face value - I raised some concerns in a comment replying to that post - he concludes it's $100,000 per life saved. If AMF is $3,500 per life saved then X-risk is a relatively poor buy (although perhaps tempting as a sort of 'hedge'). That would only speak to your use of money: a person-affector could still conclude they'd do more good with a career focused on X-risk than elsewhere.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-12-01T11:47:33.737Z · score: 2 (1 votes) · EA · GW

First, I want to say that I do not endorse TRIA. This post wanted to look at applying the SWB approach given what people's moral views seem to be, rather than to evaluate how good those views are. GiveWell staff and many EAs (implicitly) endorse TRIA, hence I discussed it.

FWIW, I don't think the concern that TRIA ignores equality really hits the mark. If you think what matters is interests, then you weight by the strength of interests, and - adding some further theory - young children don't seem to have as strong an interest in survival as older humans. I think there are deep problems with TRIA, but I don't think concern about equality is one of them.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T17:39:43.514Z · score: 2 (1 votes) · EA · GW

Indeed, many people are surprised the relationship with inequality is complicated. I don't work on this, but my understanding is that it matters whether you see inequality in your society as a sign of unfairness and the system being broken (Europe) or as a sign of opportunity to succeed (developing world). I've heard researchers say they don't find such an effect of inequality in the US because Americans really believe in the American Dream and thus don't mind it. As I say, I'm no expert on this, but I'd be keen for someone to look into it in more detail.

On your questions: 1) the effect will be due to social comparison. It's unclear if secret cash transfers would be possible - recipients buy visible things like new roofs for their houses - and whether secrecy would then reduce the gain to the recipients if they can't 'show off'.

2) There is evidence on unemployment. In areas where unemployment is really high (20+%), individuals who are unemployed don't show such a reduction in life satisfaction - there's not such a social penalty if everyone else is unemployed.

I'm pretty sceptical on basic income. I would rather use that money - which would be huge - to provide mental health treatment to everyone who needed it. People are atrocious at converting money into happiness.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T17:31:40.183Z · score: 2 (1 votes) · EA · GW

FWIW, some disabilities you might see coming - if you have a worsening health state, say - and I think the analysis didn't have data on what was causing the disability, so it's a bit hard to say.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T17:30:18.264Z · score: 2 (1 votes) · EA · GW

Well, I confess I don't fully understand the paper and a further social scientist I've since spoken to had a different take on what the paper said altogether. I'll try to bring this up with a few more people.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T17:26:24.077Z · score: 2 (1 votes) · EA · GW

Hello Larks. Glad you found it useful. On equality, it's going to turn on how you think equality should be understood. If you think we should give equal weight to the 'time-relative-interest-adjusted' value of people's lives, you might think it correct to believe that saving a 25-year-old is better for that person than saving a 2-year-old is for that person.

FWIW, intuitions seem quite split on deprivationism vs TRIA about death. What people find weird about deprivationism is that there is some sharp point at which someone starts to matter. Say someone begins to exist 90 days after conception. Then saving someone after 89 days would be morally unimportant, whereas saving them after 91 days would be hugely important. TRIA, by contrast, has a more gradual approach.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T17:20:51.267Z · score: 3 (2 votes) · EA · GW

Hello Jasper, thanks for these.

On 1), I agree the correlation is only partial, which is why I said that we should use the LS data cautiously, keeping in mind when the two measures come apart. I think it would be worth writing up where they diverge.

In the case of mental health vs poverty, I think moving to affect measures from life satisfaction would leave the priority ranking unchanged: Dolan and Metcalfe (2012) indicate mental health has a bigger impact on affect than on LS, and Kahneman and Deaton (2008) that income has a bigger impact on LS than on affect. Hence, even if we ignore the negative externalities of the income transfers on LS, given StrongMinds seems 4x more cost-effective, this would only increase the comparative cost-effectiveness of mental health. I accept this is somewhat complicated, and should be written up in greater depth.

It's also unfortunate that my claim here is hypothetical rather than based on actual affect measures of poverty alleviation vs mental health treatment. I'm currently talking to a couple of economists in the hope we can actually find this out!

2) You're quite right. I thought about getting into that but reckoned it was too complicated for an already long piece. I agree it would be worth thinking about how someone's LS would arc over the course of their whole life. Another worthy research question!

Comment by michaelplant on EA Community Building Grants Update · 2018-11-28T10:40:49.402Z · score: 14 (16 votes) · EA · GW

Hello Katie, thanks for writing this up. I'm very concerned, however, about the evaluation criteria:

The primary metric used to assess grants at the end of the first year is the number of group members who apply for internships or graduate programs in priority areas and reach at least the interview stage [...]
We used the 80,000 Hours list of priority paths as the basis for our list of accredited roles, but expanded it to be somewhat broader

80k's priority paths are basically X-risk, working at an EA org (does 'cause prioritisation' happen anywhere else?) and earning to give. As such, on your stated primary metric, it seems anyone who switched their career from doing nothing to working on any of animals, global poverty, mental health or climate change would not count as a valuable shift. In other words, CEA only counts a career change as worthwhile if it focuses on the far future.

Three points on this. First, if this is the case - and please tell me if I'm mistaken - I consider it deeply regrettable. It moves the EA community from being a morally inclusive one (I've written on this previously), which brings together individuals who do - according to their own lights - the most good they can and seek to help each other, to a morally exclusive one, where there is a right answer, we need to fight over what it is, and CEA will financially reward you if you accept their answer. It seems the sort of thing that would lead to a split of the EA movement (i.e. "if EA is just effective futurism, those of us who aren't effective futurists should go and do our own thing").

Second, I note that, according to the latest EA survey, 54% of those who identify as EAs consider those four areas - animal welfare/rights, global poverty, mental health and climate change - to be the 'top cause'. Hence the evaluation metric is markedly not representative of the EA community (in the sense of matching the community's priorities). You might think the EA community has just got it wrong, but there is still an oddness here, because you've stated:

We hope to avoid these negative effects by providing after-the-fact assignment of credit for outcomes outside the scope of the primary success criteria, and by emphasising the criterion of ‘being a good representative of EA’

I submit that your evaluation criteria - using 80k's list - fail your own representativeness test, because they judge impact by a standard that would not be widely accepted among self-identifying effective altruists.

Third, given the evaluation criteria diverge quite sharply from what people would expect, I would have appreciated it if this had been flagged more clearly.

Comment by michaelplant on Narrative but Not Philosophical Argument Motivates Giving to Charity · 2018-11-27T20:13:46.243Z · score: 4 (3 votes) · EA · GW

Interesting. As someone who was massively influenced by Famine, Affluence and Morality my hunch is that philosophical arguments are very effective for a small group of people. I don't think this should obviously cause EA to switch to, and only to, narrative arguments: maybe those convinced by philosophical reasons end up being the more effective altruists.

Comment by michaelplant on Rationality vs. Rationalization: Reflecting on motivated beliefs · 2018-11-27T10:10:43.472Z · score: 7 (6 votes) · EA · GW

Hello Alex. Thanks for writing this up. I agree we should try, hard as that might be, to be honest with ourselves about our underlying motivations (which are often non-obvious anyway). I often worry about this in my own case.

That being said, I want to push back slightly on the case you've picked. To paraphrase, your example was "I think long-termism is actually true, but I'm going to have to sacrifice a lot to move from development economics to AI policy". Yet, if you hang around in the EA world long enough, the social pressure and incentives to conform to long-termism seem extremely strong: the EA leadership endorse it, there seem to be much greater status and job prospects for long-termists, and if you work on near-term causes people keep challenging you for having "weird beliefs" and treating you as an oddity (sadly, I speak from experience). As such, it's not at all obvious to me that your rationalisation example works here: there is a short-term cost to switching your career path but, over the longer term, switching to long-termism plausibly benefits one's own welfare (assuming one hangs around with EAs a lot). Hence, this isn't a clear case of "I think X is true but it's really going to cost me, personally, to believe X".

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-10-31T09:18:13.602Z · score: 0 (1 votes) · EA · GW

This could be true, but needs much more analysis. I've spoken about increasing opiates for pain relief elsewhere (e.g. my EAG 2017 talk, forum posts here on drug policy reform). It's not easy to compare systemic change - which is what opiate reform would be - to micro-interventions in general. Also I'm not aware of there being any data on the badness of chronic pain on a 1-10 LS scale to be able to make comparisons anyway.

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-10-31T09:14:21.228Z · score: 1 (2 votes) · EA · GW

Hello Guzey. Yes, this paper has caused quite a stir. It's very hard to understand (at least as a non-economist) what the paper is saying, as it's filled with jargon and formulae, and the argument seems to turn on statistical considerations that are outside my scope of expertise. I had to ask a couple of economists to explain it to me.

As I now understand it, the authors' main objection is to the use of 3-point scales. What you can infer from such scales depends on what you think the underlying distribution of the data being allocated into those three categories is. If you make very different assumptions (e.g. utility is unbounded and the points on the scale are not 'equal-interval', i.e. the same distance apart), you can reverse the results - see the toy illustration below. Such a reminder is useful and it's important to examine these assumptions. That said, their argument is not specifically about happiness measures, but about the use of scales with only a few points, so it's somewhat confusing that that's where they've levelled their critique. It's increasingly thought that 3-point measures are unreliable, and the modern literature predominantly uses 10-point scales. The authors don't test the robustness of their claims against 10-point scales (where, I gather, their claims would be less plausible).
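Here's a minimal sketch of the reversal point as I understand it, with invented response shares and category values chosen purely to illustrate (nothing here comes from the paper itself):

```python
# Two hypothetical groups' response shares on a 3-point scale (low, medium, high).
group_a = [0.2, 0.3, 0.5]
group_b = [0.1, 0.6, 0.3]

def mean_happiness(shares, values):
    # Average happiness, given the cardinal values we assume the categories stand for.
    return sum(s * v for s, v in zip(shares, values))

equal_interval = [1, 2, 3]   # the usual assumption: categories evenly spaced
stretched_low = [0, 10, 11]  # assume the low-to-medium gap is much larger

print(mean_happiness(group_a, equal_interval),
      mean_happiness(group_b, equal_interval))  # 2.3 vs 2.2: A looks happier
print(mean_happiness(group_a, stretched_low),
      mean_happiness(group_b, stretched_low))   # 8.5 vs 9.3: B looks happier
```

The ranking flips purely because of what we assume the three categories are worth, which (if I've understood it) is the sense in which coarse scales under-determine the conclusions.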

I don't think I've updated my views on the basis of this article. I still don't fully get what the paper is doing, so I'm relying on those I consider my epistemic superiors in this domain - the economists I've spoken to - and they don't think it poses a problem (it relies on odd assumptions, doesn't apply elsewhere, etc.).

Comment by michaelplant on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-10-30T15:41:26.269Z · score: 1 (1 votes) · EA · GW

Hello Jan.

I don't think I follow your post. One idea is that happiness functions like sound, where it takes twice as big an increase in sound to cause the same increase in perceived loudness. This seems confused if we extend it to happiness. What are we supposed to say: it takes twice as big an increase in happiness to cause the same increase in perceived happiness? That would be crazy, because we only have one item, happiness, rather than two. If we plotted this on a graph, both axes would be the same.
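For reference, the sound case is usually stated as Fechner's law: perceived loudness S grows with the log of physical intensity I, so equal ratios of intensity give equal increments of loudness,

\[
S = k \ln\left(\frac{I}{I_0}\right),
\]

where I_0 is the threshold intensity. The law needs two distinct quantities to play the roles of I and S, which is exactly what seems to be missing in the happiness case.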

An alternative is that it takes twice as big an increase in brain activity (of a certain kind) to cause the same increase in perceived happiness. Okay. But people are reporting their perceived happiness (or life satisfaction), not their brain states.

Comment by michaelplant on Forum moving to open beta this week · 2018-10-18T08:32:19.505Z · score: 4 (3 votes) · EA · GW

Hello. Good to see the new forum is up. Is it possible to post things on the beta? I can't see how to do that. If it isn't yet available, when will it be?

Comment by michaelplant on Near-Term Effective Altruism Discord · 2018-09-12T08:33:47.051Z · score: 5 (7 votes) · EA · GW

I don't find your objections here persuasive.

Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view?

If you want to talk about how best to X, but you run into people who aren't interested in X, it seems fine to talk to other pro-Xers. It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up? Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?

You're also creating the problem you're trying to solve in a different way. Whereas most "near-term EAs" enjoy the broad EA community perfectly well, you're reinforcing an assumption that they can't get along, that they should expect EA to "alienate" them, as they hear about your server

To be frank, I think this problem already exists. I've literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say "oh, you're the Michael Plant with the weird views" which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.

Comment by michaelplant on Additional plans for the new EA Forum · 2018-09-09T09:23:12.921Z · score: 6 (6 votes) · EA · GW

On prizes: 1) when would you plan to start them from (i.e. which posts would be eligible)? 2) have you thought much about extrinsic motivation crowding out intrinsic motivation? My worry is that by offering financial rewards, you change how people think about posting, e.g. "well, I'm probably not going to win anything, so I won't bother posting" or "there was some really good content this month, I'm going to hold onto mine".

Comment by michaelplant on EA Facebook Group Greatest Hits: Top 50 Posts by Total Reactions · 2018-08-22T22:37:13.197Z · score: 2 (2 votes) · EA · GW

This may be the best moment of my life :) (no. 2 was the time I was leading the EA forum karma list...)

Out of interest, could you set out how many reactions these got? I'd be curious to see what the distribution of reactions is.

Comment by michaelplant on CEA on community building, representativeness, and the EA Summit · 2018-08-20T21:59:51.549Z · score: 6 (10 votes) · EA · GW

Longtermism is the view that most of the value of our actions lies in what happens in the future.

You mean 'in the far future', correct? Unless you believe in backwards causality, and excluding the value that occurs at the same moment you act, all the value of our actions lies in the future. I presume by 'far future' you mean actions affecting future people, as contrasted with presently existing people.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement

Cards on the table: I am not a long-termist; I am sympathetic to person-affecting views in population ethics. Given the power CEA has in shaping the community, I think any view CEA advocated would eventually become the consensus view: anyone who didn't find it appealing would eventually leave EA.

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial.

I don't think this can be true. If you're a longtermist, you can't also hold person-affecting views in population ethics (at least, narrow, symmetric person-affecting views), so taking the longtermist position requires ruling such views out of consideration. You might think you should rule out such views in population ethics as obviously false, but you should concede you are doing that. To be more accurate you could perhaps call it something like "possibilist cause impartiality" - selecting causes based on impartial estimates of impact, assuming we account for the welfare of everyone who might possibly exist - but then it would seem almost trivially true that long-termism ought to follow (this might not be the right name, but I couldn't think of a better restatement off-hand).

Comment by michaelplant on The Ethics of Giving Part Three: Jeff McMahan on Whether One May Donate to an Ineffective Charity · 2018-08-11T18:47:01.651Z · score: 1 (1 votes) · EA · GW

I don't think McMahan would find what you call a 'solution' very appealing: McMahan doesn't think that morality is demanding in the way e.g. Singer does. Further, what you suggest ought to be the default position - morality is really demanding - is something only a small percentage of philosophers (although many EAs) believe is correct.

Comment by michaelplant on Problems with EA representativeness and how to solve it · 2018-08-06T16:02:58.249Z · score: 1 (1 votes) · EA · GW

I liked this solely for the pun. Solid work, James.

Comment by michaelplant on Problems with EA representativeness and how to solve it · 2018-08-06T16:01:42.830Z · score: 1 (1 votes) · EA · GW

I'm not sure I see which direction you're coming from. If you're a symmetric person-affector (i.e. you reject the procreative asymmetry, the view that we're neutral about creating happy lives but against creating unhappy lives), then you don't think there's value (or disvalue) in creating future lives, good or bad. So neither x-risks nor s-risks are a concern.

Maybe you're thinking 'don't those with person-affecting views care about those who are going to exist anyway?' The answer is yes if you're a necessitarian (no if you're a presentist), but given that what we do changes who comes into existence, necessitarianism (which holds you value the wellbeing of those who exist anyway) collapses, in practice, into presentism (which holds you value the wellbeing of those who exist right now).

Vollmer, the view that would care about the quality of the long-term future, but not whether it happens, seems to be averagism.

Comment by michaelplant on Harvard EA's 2018–19 Vision · 2018-08-05T13:54:07.310Z · score: 3 (3 votes) · EA · GW

Not a comment on the content, but on the style of writing: I found it very hard to read a document with so many endnotes - they were about half the scroll length - and gave up: it was too tricky to keep flicking down to the important content and then back up again.

Comment by michaelplant on Problems with EA representativeness and how to solve it · 2018-08-03T21:57:30.614Z · score: 4 (4 votes) · EA · GW

Should probably mention I have raised similar concerns before in this post: 'the marketing gap and a plea for moral inclusivity'.

Comment by michaelplant on Problems with EA representativeness and how to solve it · 2018-08-03T21:45:45.100Z · score: 14 (20 votes) · EA · GW

Thanks for writing this up. Reading between the lines a little: I am also increasingly frustrated by the feeling that near-term projects are being squeezed out of EA. I've been asking myself when (I think it's a 'when' rather than an 'if') EA will become so far-future heavy there's no point in me participating. I give it 2 years.

There are perhaps a couple of bigger conversations to be had here. Are the different causes friends or enemies? Often it feels like the latter, and this is deeply disappointing. We do compete over scarce resources (e.g. money), but we should be able to cooperate at a broader societal level (post forthcoming). Further, if/when would it make sense for those of us who feel irked by what seems to be an exclusionary, far-futurist tilt to split off and start doing our own thing?

Comment by michaelplant on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T13:31:55.493Z · score: 3 (2 votes) · EA · GW

The anonymous thing is mostly me realising that I hardly ever criticise people, wanting to practice, but knowing I'm going to make a ton of mistakes as I'm kinda new to this

I found this baffling. Rough analogy: "I hardly ever punch people, so I thought I'd practise on you". You should criticise people if and when they merit criticism, not because you want to practise. I would have expected you to cause a great deal of upset to Brendon (this would have upset me greatly), which, for the questionable benefit of 'practising criticism', does not seem justified. I urge you to refrain from this sort of thing in future. If you want to improve in a safer way, I suggest you write up your criticisms and then show them to someone else for feedback before deciding whether or not to post them.

Comment by michaelplant on Open Thread #40 · 2018-07-09T09:39:58.101Z · score: 1 (1 votes) · EA · GW

It seems you need the Grytics tool to do this; I can't work out how to do it in Facebook itself. Would also be interested to see this.

Comment by michaelplant on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T09:50:20.774Z · score: 1 (1 votes) · EA · GW

"to prove this argument I would have to present general information which may be regarded as having informational hazard"

I agree statements of this kind are very annoying, whether or not they're true.

Comment by michaelplant on A lesson from an EA weekend in London: pairing people up to talk 1 on 1 for 30 mins seems to be very useful · 2018-06-12T15:03:06.701Z · score: 0 (0 votes) · EA · GW

Yeah, I thought the ends of the scales might have been more extreme than we'd normally use. It's probably quite hard to get people to sensibly answer unfamiliar, tricky questions.

Comment by michaelplant on A lesson from an EA weekend in London: pairing people up to talk 1 on 1 for 30 mins seems to be very useful · 2018-06-12T13:43:12.645Z · score: 4 (4 votes) · EA · GW

Thanks for writing this up. Three questions:

The numbers on how useful things are seem quite low to me. What did you write as the ends of the scale? I'm thinking in terms of net promoter scores where anything below a 9 or a 10 is considered neutral or bad.

Can you explain Hamming circles? I couldn't find out how they worked even after a quick google.

Did you ask people if there was anything they wanted to do on the weekend but didn't do? I'd be curious to see if people came up with anything.

Comment by michaelplant on EA Hotel with free accommodation and board for two years · 2018-06-11T12:04:48.383Z · score: 2 (2 votes) · EA · GW

irreversible and difficult to evaluate

This basically applies to everything as a matter of degree, so it looks impossible to put in a blanket rule. Suppose I raise £10 and send it to AMF. That's irreversible. Is it difficult to evaluate? Depends what you mean by 'difficult' and what the comparison class is.

Comment by michaelplant on Introducing Charity Entrepreneurship: an Incubation and Research Program for New Charities · 2018-06-06T19:09:00.018Z · score: 1 (1 votes) · EA · GW

I expect ~10 people to attend the camp although I do not expect 100% of them will start charities (I would guess ~60% would)

So you mean you expect 6 different charities to start, or that 6 people will be involved in starting a charity, possibly the same one(s)?

Comment by michaelplant on EA Hotel with free accommodation and board for two years · 2018-06-06T17:51:33.468Z · score: 0 (0 votes) · EA · GW

A potential spanner: how would you restrict this to EAs? Is that legal? I doubt you can refuse service to people on the basis of what would be considered an irrelevant characteristic. Analogy: could you have a hotel only for people of a certain race or sex?

Comment by michaelplant on EA Hotel with free accommodation and board for two years · 2018-06-06T17:49:40.175Z · score: 0 (4 votes) · EA · GW

Furthermore, people have repeatedly brought up the argument that the first "bad" EA project in each area can do more harm than an additional "good" EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that "we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end", and that we should invest "a small amount".) Related: Spencer Greenberg's idea that plenty of startups cause harm.

I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!

It seems plausible that most EAs who do valuable work won't be able to benefit from this. If they're students, they'll most likely be studying at a university outside Blackpool and might not be able to do so remotely

I also wonder what the target market is. EAs doing remote work? EAs needing really cheap accommodation for a certain time?