Posts

Causal Network Model IV: Climate Catastrophe 2017-12-03T16:00:30.745Z · score: 3 (3 votes)
Causal Network Model III: Findings 2017-11-22T15:43:14.476Z · score: 7 (7 votes)
Causal Network Model II: Technical Guide 2017-11-19T21:16:39.361Z · score: 7 (7 votes)
Proposed methodology for leafleting study 2017-02-06T14:41:05.189Z · score: 7 (9 votes)

Comments

Comment by alex_barry on Yale Retreat Handover Doc · 2019-11-08T16:57:18.190Z · score: 2 (2 votes) · EA · GW

Thanks for taking the time to write this up and share it, Jessica! I also just want to highlight a couple of other resources available for those planning retreats:

Often each doc is written from a fairly specific perspective, so it can be useful to look through a few different ones to get a feel for the different options available. (I think there are also some more floating around, but I am doing a bad job of tracking them down at the moment.)


Comment by alex_barry on Effective Altruism Philippines · 2018-12-05T02:43:32.163Z · score: 3 (3 votes) · EA · GW

Hey Jeffrey,

Great to hear you are interested in starting an EA group! I hope your event today goes well, and apologies for the delayed response. I work on the CEA group team to provide support to EA groups. Here are some of my thoughts for new groups starting out:

It is key that anyone leading a local group has a solid understanding of effective altruism, so that they can answer questions from community members, and avoid potentially giving anyone a misleading impression of EA. This means having a level of knowledge at least equivalent to the EA handbook, or Doing Good Better. If you feel you don’t yet have this level of knowledge, then we recommend you take some time to grow your knowledge now, and start your group later. If you're not sure, we're happy to talk with you about what makes sense to do, just get in contact at groups@centreforeffectivealtruism.org.

We have collected a common set of resources we expect to be of use to many groups in this Google Drive folder, and there are also resources hosted on the EA Hub as mentioned by Michal.

For guidance on group strategy also see this page on effectivealtruism.org which contains many links to other helpful resources, and for information about getting CEA funding for your group see here.

We also recommend new groups fill out this Google form, to help us at CEA keep track of the groups, and provide you with more personalised support.

Finally, to get more regular information about running a group we recommend signing up for the monthly EA groups newsletter, as well as the group organisers’ Facebook group, and the group organisers’ Slack.

Comment by alex_barry on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-08T13:11:40.503Z · score: 2 (2 votes) · EA · GW

I'm not quite sure what argument you are trying to make with this comment.

I interpreted your original comment as arguing for something like: "Although most of the relevant employees at central coordinator organisations are not sure about the sign of outreach, most EAs think it is likely to be positive, thus it is likely to in fact be positive".

Where I agree with the first two points but not the conclusion, as I think we should consider the staff at the 'coordinator organizations' to be the relevant expert class and mostly defer to their judgement.

It's possible you were instead arguing that "The increased concern about downside risk has also made it much harder to ‘use up’ your dedication" is not in fact a concern faced by most EAs, since they still think outreach is clearly positive, so this is not a discouraging factor.

I somewhat agree with this point, but based on your response to cafelow I do not think it is very likely to be the point you were trying to make.

Comment by alex_barry on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-08T09:10:38.611Z · score: 1 (1 votes) · EA · GW

But should we not expect coordinator organizations to be the ones best placed to have considered the issue?

My impression is that they have developed their view over a fairly long time period after a lot of thought and experience.

Comment by alex_barry on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-04T16:45:42.086Z · score: 0 (0 votes) · EA · GW

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

Ah I see. For some reason I got the other sense from reading your comment, but looking back at it I think that was just a failing of reading comprehension on my part.

I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.

Comment by alex_barry on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T15:52:01.432Z · score: 2 (2 votes) · EA · GW

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points seem to all be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people were going to fall into; rather, all of your characteristics just follow as practical considerations resulting from how important people find the longtermist view. (But I do think "A longtermist viewpoint leads to very different approach" is correct.)

I'm also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.

Comment by alex_barry on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-02T15:59:23.307Z · score: 4 (4 votes) · EA · GW

As far as I can tell none of the links that look like this instead of http://effective-altruism.com work in the pdf version.

Comment by alex_barry on Please Take the 2018 Effective Altruism Survey! · 2018-04-25T21:04:00.281Z · score: 0 (0 votes) · EA · GW

I also missed it the first time through.

Comment by alex_barry on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T19:19:53.841Z · score: 1 (1 votes) · EA · GW

as people who aren't actually interested drop out.

This depends on what you mean by 'drop out'. Only around 10% (~5) of our committee dropped out during last year, although maybe 1/3rd chose not to rejoin the committee this year (and about another 1/3rd are graduating).

2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events.

This does not ring especially true to me, see my reply to Josh.

Comment by alex_barry on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T19:17:57.706Z · score: 2 (2 votes) · EA · GW

To jump in as the ex-co-president of EA: Cambridge from last year:

I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.

There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don't try to force people from the former into the latter (although we hope for the reverse!).

I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this post. A formal committee in my mind significantly improves the ability to get stuff done vs the 'conspirators' approach.

This ability to get stuff done can then translate into things such as an increased campus presence, and generally a lot more chances to get people into the first stage of the 'funnel'. Last year we ran around 8 events a week, with several of them aimed at engaging and on-boarding new interested people (those being hosting 1 or 2 speakers a week, running outreach-focused socials, introductory discussion groups and careers workshops). This large organisational capacity also let us run ~4 community-focused events a week.

I think it is mostly these mechanisms that make the large committee helpful, as opposed to most of the committee members becoming 'core EAs' (I think the conversion ratio is perhaps 1/5 or 1/10). There is also some sense in which the above allow us to form a campus presence that helps people hear about us, and I think perhaps makes us more attractive to high-achieving people, although I am pretty uncertain about this.

I think EA: Cam is a significant outlier in terms of EA student groups, and if a group is starting out it probably makes more sense to stick to the kind of advice given in this article. However, I think in the long term Community + Big formal committee is probably better than just a community with an informal committee.

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-20T21:28:26.770Z · score: 3 (3 votes) · EA · GW

I'm surprised by your last point, since the article says:

Although it seems unlikely x-risk reduction is the best buy from the lights of the total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

This seems a far cry from the impression you seem to have gotten from the article. In fact your quote of "highly effective" is only used once, in the introduction, as a hypothetical motivation for crunching the numbers. (Since, a priori, it could have turned out the cost effectiveness was 100 times higher, which would have been very cost effective.)

On your first two points, my (admittedly not very justified) impression is that the 'default' opinions people typically hold are that almost all human lives are positive, and that animal lives are extremely unimportant compared to humans. Whilst one can question the truth of these claims, writing an article aimed at the majority seems reasonable.

It might be that actually within EA the average opinion is closer to yours, and in any case I agree the assumptions should have been clearly stated somewhere, along with the fact he is taking the symmetric as opposed to asymmetric view etc.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-20T16:08:11.087Z · score: 1 (1 votes) · EA · GW

How could it explain that diabetics lived longer than healthy people?

If all of the sickest diabetics are switched to other drugs, then the only people taking metformin are the 'healthy diabetics', and it is possible that the average healthy diabetic lives longer than the average person (who may be healthy or unhealthy).

This would give the observed effect without metformin having any effect on longevity.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-20T10:33:55.218Z · score: 1 (1 votes) · EA · GW

I'm not quite sure what this equation is meant to be calculating. If it is meant to be $ per life saved it should be something like:

Direct effects: (price of the experiment)/((probability of success)*(lives saved assuming e.g. 10% adoption))

(Note the division is very important here! You missed it in your comment, but it is not clear at all what you would be estimating without it.)
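To make the structure of that formula concrete, here is a minimal Python sketch using purely illustrative placeholder numbers (none of these figures come from the draft):

```python
# Minimal sketch of the direct cost-effectiveness formula above.
# All inputs are illustrative placeholders, not figures from the draft paper.
price_of_experiment = 50e6       # hypothetical trial cost, in $
probability_of_success = 0.5     # hypothetical chance the trial shows the effect
lives_saved_if_adopted = 250e6   # hypothetical lives saved assuming ~10% adoption

# Note the division: cost divided by the *expected* number of lives saved.
dollars_per_life_saved = price_of_experiment / (probability_of_success * lives_saved_if_adopted)
print(f"${dollars_per_life_saved:.2f} per life saved")  # -> $0.40 with these placeholders
```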

Your estimate of the indirect costs seems right to me, although in the case of:

growth of food consumption because of higher population

I would probably not include this level of secondary effect, since these people are also economically productive etc., so it is very hard to estimate.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-19T15:23:44.025Z · score: 1 (1 votes) · EA · GW

I'm not saying you need to solve the problem, I'm saying you should take the problem into account in your cost calculations, instead of assuming it will be solved.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T23:35:43.812Z · score: 1 (1 votes) · EA · GW

It probably should be analysed how the bulk price of metformin could be lowered. For example, global supply of vitamin C costs around 1 billion USD a year with 150 kt of bulk powder.

Yes, but as I discuss above, it needs to be turned into pills and distributed to people, for which a 2 cents per pill cost seems pretty low. If you are arguing for fortification of foods with metformin then presumably we would need to show extraordinary levels of safety, since we would be dosing the entire population at very variable levels.

In general I would find it helpful if you could try and keep your replies in the same comment - this basically seems to be an extension of your other comment about buying metformin in bulk, and having it split in two makes it harder to keep track of.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T23:28:15.782Z · score: 1 (1 votes) · EA · GW

Yes, but 10kg of pure Metformin powder is not much good since it needs to be packaged into pills for easy consumption (since it needs to be taken in sub-gram doses). Since you are not able to find pills for less than 2 cents (and even those only in India) I think you should not assume a lower price than that without good reason.

Presumably we run into some fundamental price to form, package and ship all the pills? I would be surprised if that could be gotten much below 1p per pill in developed countries (although around 1p per pill is clearly possible, since some painkillers are sold around that level).

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T15:28:34.144Z · score: 1 (1 votes) · EA · GW

I more meant it should be mentioned alongside the $0.24 figure, e.g. something like:

"Under our model the direct cost effectiveness is $0.24 per life saved, but there is also an indirect cost of ~$12,000 per life saved from the cost of the metformin (as we will need to supply everyone with it for $3 trillion, but it will only save 250 million lives)."

Notably the indirect figure is actually more expensive than current global poverty charities, so under your model buying people metformin would not be an attractive intervention for EAs. This does not mean it would necessarily not be cost effective to fund the trial to 'unlock' the ability for others to buy the drugs, since it might be more efficient than e.g. other developed-country government uses of money, but it does hammer home that the cost of the drugs is very non-negligible.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T14:54:40.521Z · score: 1 (1 votes) · EA · GW

Even if the cost of Metformin is only 2 cents a day, giving it to 5 billion people every day for 80 years would cost about $3 trillion (0.02*365*80*5*10^9). Whilst the cost would (at least potentially) be distributed across the population, it also seems like something that should be mentioned as a cost of the policy.
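As a quick check of that arithmetic, and of the ~$12,000-per-life figure suggested in the comment above (which uses the 250 million lives-saved estimate from the draft's model), here is a small Python sketch:

```python
# Check the headline figures above.
cost_per_day = 0.02   # $ per person per day for metformin
years = 80
people = 5e9

total_cost = cost_per_day * 365 * years * people
print(f"Total cost: ${total_cost:,.0f}")   # ~ $2.9 trillion, i.e. about $3 trillion

# Dividing by the ~250 million lives saved (figure from the model discussed above):
lives_saved = 250e6
print(f"Indirect cost per life saved: ${total_cost / lives_saved:,.0f}")   # ~ $11,700, i.e. roughly $12,000
```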

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-17T13:38:57.734Z · score: 0 (0 votes) · EA · GW

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high with the only difference between the states of affairs being the identity of those who suffers it.

Given that you were initially arguing (with kblog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive.

The issue is this also applies to the case of deciding whether to set the island on fire at all.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-17T13:06:12.497Z · score: 1 (1 votes) · EA · GW

Sure, although I'm not sure how much time I will have to look it over. My email is alexbarry40@gmail.com.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-17T10:47:41.389Z · score: 3 (3 votes) · EA · GW

Thanks for the reply. Despite my very negative tone I do think this is important work, and doing good cost-benefit analyses like these is very difficult.

Taking median date of the AI arrival like 2062 is not informative as in half cases it will not be here at 2062. The date of 2100 is taken as the date when it (or other powerful life-extending technology) almost sure will appear as a very conservative estimate.

I don't share the intuition that human-level AI will rapidly cause the creation of powerful life-extending technology. This seems to be relying on a rapid takeoff scenario, which, while plausible, I don't think can be taken as anything like certain. I think if this is the argument it should be spelled out clearly.

With regards to the effectiveness of metformin, my argument is that you should include a discount factor of a half or so to account for the probability that it does not pass the human trial.

Given all uncertainty, the simplified model provides only an order of magnitude of the effect

My issue is that I don't see any arguments that the model is even likely to be accurate to within an order of magnitude.

I'm glad to hear a more detailed model is in the works; as I said, I think this is important work, but that makes getting it right all the more pivotal.

As the paper is already too long, we tried to outline the main arguments or provide links to the articles where detailed refutation is presented, as in case of Gavrilov, 2010, where the problem of overpopulation is analysed in detail. But it is obvious now that this points should be clarified.

I think if the intention is just to link to other articles with detailed refutations you should just do that and not attempt to summarise (or make it clear this is at most a very rough outline). However for two of the examples I listed no other article is linked.

Comment by alex_barry on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-16T23:23:45.331Z · score: 8 (8 votes) · EA · GW

Reading through this I have some pretty significant concerns.

First the model behind the "$0.24 for each life saved" figure seems very suspect:

  • The assumption of radical life extension technology being developed by 2100 is totally unsupported, with the one citation being to a survey of machine learning researchers which gave a 50% chance of AI reaching human level in all activities by 2062. It is unclear how this relates to the development of radical life extension technology, however, as that is something significantly out of reach of (current) human-level ability.
  • It assumes that Metformin would definitely extend human life expectancy by 1 year. However since much of the current evidence is from animal or cohort studies, it cannot be assumed that it definitely has an effect.
  • Even given all of the above it is not clear to me that the model actually provides a good estimate of the number of people likely to be saved. It is based on an extreme simplification (assuming 5 billion people are all born in 2020 and take Metformin for their entire lives) and as far as I can tell there is no attempt to justify its accuracy.

I am also unconvinced by the quality of argument elsewhere in the paper. For instance in the section "False arguments against badness of death" they list common arguments against the badness of death and then claim to refute them. However the responses are often extremely shallow and do not engage at all with the core of the argument. Here are some examples:

1) Stopping death will result in overpopulation. Only the number of births counts for overpopulation (Gavrilov & Gavrilova, 2010), and short-lived organisms like lemmings are the type of species that suffers from overpopulation.

2) Stopping death could result in stagnation, infinite totalitarianism, or other bad social outcomes. Our world changes so quickly that there is no time for such “stability” to take root.

3) Stopping death takes opportunity from non-born people, who would be born if resources were freed up by death of aging humans. The idea of an infinite universe where everything is possible kills the objection.

The paper contains many other arguments of a similar level of quality, and so although I largely agree with many of its conclusions, I find it generally very uncompelling.

Finally, as the most minor point, there are quite a high number of grammatical issues.

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-16T00:15:18.909Z · score: 0 (0 votes) · EA · GW

Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

I didn't downvote, but:

In which case I'm not understanding your model. The 'Cost per life year' box is $1bn/EV. How is that not a one off of $1bn? What have I missed?

The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication style mismatch. (I think I have noticed myself having a similar reaction to a few of your comments before where I don't think you meant any rudeness.)

I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

I agree with this on some level, but I'm not sure I want there to be uneven costs to upvoting/downvoting content. I think there is also an unfriendliness vs. enforcing standards tradeoff where the marginal decisions will typically look petty.

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-14T07:54:28.537Z · score: 0 (0 votes) · EA · GW

Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.

To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-13T22:33:52.666Z · score: 1 (1 votes) · EA · GW

If this isn't true, or consensus view amongst PAAs is "TRIA, and we're mistaken to our degree of psychological continuity", then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.

It would also have the same (or worse) effect on other things that save lives (e.g. AMF) so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps e.g. deworming would come out very well, if it just reduces suffering for a short-ish timescale. (The fact that it mostly affects children might sway things the other way though!))

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-13T22:14:03.684Z · score: 0 (0 votes) · EA · GW

Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)

I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!

I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.

In any case as I have given you plenty of other comment threads to think about I am happy to leave this one here - my point was just a call for clarity.

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-13T21:46:19.785Z · score: 2 (2 votes) · EA · GW

On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one 'should' always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point... )

How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of 'actually helping people' in favour of it, but then it seems strange you argue for it so forcefully.

As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, in the case when you do help them, the 'total pain' does not change at all!

For example:

  • Suppose Alice is experiencing 10 units of suffering (by some common metric)
  • 10n people (call them group B) are experiencing 1 unit of suffering each
  • We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of 'total pain' remains at 10 as Alice is not helped.

This means that n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
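To make the dependence on n explicit, here is a small illustrative calculation (a sketch of the example above, not anything taken from the original post):

```python
# Chances assigned in proportion to suffering: 10 for Alice, 1 for each of the 10n people in group B.
for n in (1, 10, 100, 1000):
    total_suffering = 10 + 10 * n
    p_alice = 10 / total_suffering              # = 1 / (n + 1)
    p_each_b = 1 / total_suffering              # chance for each individual in group B
    p_pain_unchanged = 10 * n * p_each_b        # = n / (n + 1): someone in B is helped, Alice is not
    print(f"n={n}: P(help Alice)={p_alice:.4f}, P('total pain' unchanged)={p_pain_unchanged:.4f}")
# As n grows, the chance that the 'total pain' (Alice's 10) is left unchanged approaches 1.
```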

Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.

Indeed, for another example:

  • Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
  • However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However in the case that you rolled your 3^^^3 sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-13T20:43:32.040Z · score: 0 (0 votes) · EA · GW

So you're suggesting that most people aggregate different people's experiences as follows:

Well most EAs, probably not most people :P

But yes, I think most EAs apply this 'merchandise' approach weighted by conscious experience.

In regards to your discussion of moral theories and side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse').

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

Hence if you say 'what is morally relevant is the maximal pain being experienced by someone' then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.

Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).

I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-13T12:11:56.181Z · score: 0 (0 votes) · EA · GW

Ah sorry yes you are right - I had misread the cost as £1 Billion total, not £1 Billion per year!

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-13T11:31:18.795Z · score: 1 (1 votes) · EA · GW

Edit: My comment is wrong - I had misread the price as a one-off £1 billion, but it is £1 billion per year

I'm not quite able to follow what role annualising the risk plays in your model, since as far as I can tell you seem to calculate your final cost effectiveness in terms purely of the risk reduction in 1 year. This seems like it should undercount the impact 100-fold.

e.g. if I skip annualising entirely, and just work in century blocks I get:

  • still 247 Billion Life years at stake
  • 1% chance of x-risk, reduced to 0.99% by £1 billion project X.
  • This gives an expected £ per year of life of 10^9/(0.01% * 247*10^9) = ~40, which is about 1/100 of your answer.

I might well have misunderstood some important part of your model, or be making some probability-related mistake.

Comment by alex_barry on The person-affecting value of existential risk reduction · 2018-04-13T11:09:17.652Z · score: 16 (16 votes) · EA · GW

Thanks for writing this up! This does seem to be an important argument not made often enough.

To my knowledge this has been covered a couple of times before, although not as thoroughly.

Once by the Oxford Prioritization Project; however, they approached it from the other end, instead asking "what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost effective as AMF" and finding the answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close but not quite as good as global poverty.

Note they use 5% before 2100 as their risk, and also do not consider QALYs, instead only looking at 'lives saved', which likely biases them against AMF, since it mostly saves children.

We also calculated this as part of the Causal Networks Model I worked on with Denise Melchin at CEA over the summer. The conclusion is mentioned briefly here under 'existential effectiveness'.

I think our model was basically the same as yours, although we were explicitly interested in the chance of existential risk before 2050, and did not include probabilistic elements. We also tried to work in QALYs, although most of our figures were more bullish than yours. We used by default:

  • 7% chance of existential risk by 2050, which in retrospect seems extremely high, but I think was based on a survey from a conference.
  • The world population in 2050 will be 9.8 Billion, and each death will be worth -25 QALYs (so 245 billion QALYs at stake, very similar to yours)
  • For the effectiveness of research, we assumed that 10,000 researchers working for 10 years would reduce x-risk by 1% point (i.e. from 7% to 6%). We also (unreasonably) assumed each researcher year cost £50,000 (where I think the true number should be at least double that, if not much more).
  • Our model then had various other complicated effects, modelling both 'theoretical' and 'practical' x-risk based on government/industry willingness to use the advances, but these were second order and can mostly be ignored.

Ignoring these second order effects then, our model suggested it would cost £5 billion to reduce x-risk by 1% point, which corresponds to a cost of about £2 per QALY. In retrospect this should be at least 1 or 2 orders of magnitude higher (increasing researcher cost and decreasing x-risk possibility by an order of magnitude each).
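For reference, the arithmetic behind that ~£2 per QALY figure, using only the default inputs listed above and ignoring the second-order effects, works out roughly as follows:

```python
# Rough reconstruction of the headline calculation, using the default inputs listed above.
researchers = 10_000
years = 10
cost_per_researcher_year = 50_000            # £ (the 'unreasonably low' default)
research_cost = researchers * years * cost_per_researcher_year   # = £5 billion

population_2050 = 9.8e9
qalys_per_death = 25
qalys_at_stake = population_2050 * qalys_per_death               # ~245 billion QALYs

risk_reduction = 0.01                        # one percentage point (7% -> 6%)
expected_qalys_saved = risk_reduction * qalys_at_stake

print(f"£{research_cost / expected_qalys_saved:.2f} per QALY")   # ~ £2
```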

I find your x-risk chance somewhat low; I think 5% before 2100 seems more likely. Your cost-per-percent to reduce x-risk also works out as much higher than the one we used, but seems more justified (ours was just pulled from the air as 'reasonable sounding').

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-13T10:09:42.831Z · score: 1 (1 votes) · EA · GW

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

The argument is that if:

  • The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
  • There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked for a less extreme but clearly applicable case)
  • In this case, then the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
  • Thus (and you would disagree with this implication due to your adoption of the Pareto principle) since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent.

As I mention I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue:

This is essentially just applying the non-identity problem to the example above. (weirdly enough I think the best explanation I've seen of the non-identity problem is the second half of the 'the future' section of Derek Parfit wikipedia page )

The argument goes something like:

  • D1 If we adopt that 'total pain' is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
  • A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some).
  • A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timings (if the circumstances of one's conception change even very slightly then your identity will almost certainly be completely different), any actions in the world now will within a few generations result in a completely different set of people existing.
  • C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary with any different courses of action, thus by D1 the 'total pain' in all cases is uniformly very high.

This is similar to the point made by JanBrauner; however, I did not find that your response to their comment particularly engaged with the core point of the extreme unpredictability of the maximum pain caused by an act.

After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important you seem to make a strong case that one should measure the 'total pain' in a situation solely by whichever pain involved is most extreme. However when discussing moral recommendations you don't completely focus on this. Thus I'm not sure if this comment and its examples will miss the mark completely.

(There are also more subtle defenses, such as those relating to how much one cares about future people etc., which have thus far been left out of the discussion).

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-13T09:03:31.803Z · score: 0 (0 votes) · EA · GW

are you using "bad" to mean "morally bad?"

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

I bring this up since you are approaching this from a different angle than the usual, which makes people's standard lines of reasoning seem more complex.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

I'll discuss this in a separate comment since I think it is one of the strongest argument against your position.

I don't know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.

I believe it is not always right to prevent the morally worse case.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-04-12T13:13:34.150Z · score: 1 (1 votes) · EA · GW

Thanks for getting back to me, I've read your reply to kblog, but I don't find your argument especially different to those you laid out previously (which given that I always thought you were trying to make the moral case should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling.

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kblog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

A couple of brief points in favour of the classical approach:

  • It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
  • It also has other pleasing properties, such as the veil of ignorance, as discussed in other comments.

One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.

For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

Comment by alex_barry on Compilation of 32 new(ish) 80,000 Hours research pieces for the effective altruist community · 2018-04-08T23:33:42.793Z · score: 0 (0 votes) · EA · GW

Huh, weirdly they seem to all work again now; they used to take me to the same page as any non-valid URL, e.g. https://80000hours.org/not-a-real-URL/

Comment by alex_barry on Compilation of 32 new(ish) 80,000 Hours research pieces for the effective altruist community · 2018-04-08T17:03:53.117Z · score: 0 (0 votes) · EA · GW

The links to 2, 4, 6 and 15 seem broken on the 80K end, I just get 'page not found' for each.

Link 30 also does not work, but that is just because it starts with an unnecessary "effective-altruism.com/" before the YouTube link.

I checked and everything else seems to work.

Comment by alex_barry on UK Income Tax & Donations · 2018-04-08T16:54:45.653Z · score: 2 (2 votes) · EA · GW

Thanks for writing this! The interaction between donations and the reductions in personal allowance are interesting, and I would not have thought of them otherwise.

Comment by alex_barry on Review of CZEA "Intense EA Weekend" retreat · 2018-04-08T13:48:42.118Z · score: 0 (0 votes) · EA · GW

One reservation I would have about the usefulness of a database vs lots of write-ups 'in context' like these is that I think how well activities work can depend heavily on the wider structure and atmosphere of the retreat, as well as the events that have come before. I would probably be happier with a classification of 2 or 3 different types of retreat, and the activities that seem to work best in each. (However we should not let perfect be the enemy of good here, and there are probably a number of things that work well across different retreat styles.)

Your time costs seem largely similar to mine then (on the things we both did); I had not anticipated the large amount of time you spent on survey design etc. I don't think my time cost would change much if I included the talk prep, since I would be surprised if it totaled >10 hours.

Comment by alex_barry on A short review of the Effect Foundation in 2017 · 2018-04-07T14:33:47.765Z · score: 0 (0 votes) · EA · GW

Ah great, thanks for the response!

Comment by alex_barry on Review of CZEA "Intense EA Weekend" retreat · 2018-04-07T14:18:35.412Z · score: 3 (3 votes) · EA · GW

Thanks for writing this up!

For your impact review: this seems likely to have some impact on the program of future years' EA: Cambridge retreats. (In particular it seems likely we will include a version of the 'Explaining Concepts' activity, which we would not have done otherwise, as well as this being an additional point in favour of CFAR stuff, and another call to think carefully about the space/mood we create.)

I am also interested in the breakdown of how you spent the 200h planning time, since I would estimate the EA: Cam retreat (which had around 45 attendees, and typically had 2 talks on at the same time) took me <100h (probably <2 weeks FTE). Part of this is likely efficiency gains since I worked on it alone, and I expect a large factor to be that I put much, much less effort into the program (<10 hours seems very likely).

Comment by alex_barry on Job opportunity at the Future of Humanity Institute and Global Priorities Institute · 2018-04-01T17:54:01.376Z · score: 1 (1 votes) · EA · GW

Ah that looks great thanks, I had not heard about that before!

Comment by alex_barry on Job opportunity at the Future of Humanity Institute and Global Priorities Institute · 2018-04-01T15:00:57.750Z · score: 8 (8 votes) · EA · GW

I think I agree with the comments on this post that job postings on the EA forum are not ideal, since if all the different orgs did it they would significantly clutter the forum.

The existing "Effective Altruism Job Postings" Facebook group and possibly the 80k job board should fulfill this purpose.

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-03-31T16:08:00.627Z · score: 0 (0 votes) · EA · GW

Thanks for your reply - I'm extremely confused if you think there is no 'intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person', since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.

I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions.

I hope you manage to work it out with kblog!

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-03-29T23:13:16.104Z · score: 1 (3 votes) · EA · GW

(Posted as a top-level comment as I had some general things to say; it was originally a response here)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake' whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.

The comments etc. then just seem to have mostly been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a point of fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

  1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

  2. (Optionally) Why you find the moral assumption unconvincing/unlikely

  3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequential thinking etc.). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.

Comment by alex_barry on Is Effective Altruism fundamentally flawed? · 2018-03-29T22:59:36.870Z · score: 0 (0 votes) · EA · GW

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.

The comments etc. then just seem to have mostly been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a point of fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

  1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

  2. (Optionally) Why you find the moral assumption unconvincing/unlikely

  3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequential thinking etc.). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.

(Note I also made this as a top level comment so it would be less buried, so it might make more sense to respond (if you would like to) there)

Comment by alex_barry on Why not to rush to translate effective altruism into other languages · 2018-03-25T13:07:25.093Z · score: 2 (2 votes) · EA · GW

We may just be seeing upvote inflation if the EA forum now has more readers than before.

Comment by alex_barry on A short review of the Effect Foundation in 2017 · 2018-03-25T12:43:39.818Z · score: 1 (1 votes) · EA · GW

Thanks for the writeup, I was not aware of the Effect Foundation before now.

After reading the above I am still not sure exactly what kind of outreach you perform. Could you give me a quick rundown of how you think you influenced the donations, and what you plan to continue doing going forwards?

Comment by alex_barry on Viewing Effective Altruism as a System · 2018-01-06T02:54:33.278Z · score: 1 (1 votes) · EA · GW

Thanks for writing this - it fits well with my experience of how a lot of people get increasingly involved with EA, bouncing between disparate programs by different orgs. This does unfortunately make evaluating impact much harder, but I think it is important to bear in mind when designing resources for EA outreach or similar projects.

Comment by alex_barry on Cost Effectiveness of Mindfulness Based Stress Reduction · 2017-12-03T16:29:31.638Z · score: 2 (2 votes) · EA · GW

Thanks for the post. As a minor nitpick, shouldn't the maximal DALY cost of doing something for an hour a day be 1/16, since there are only 16 waking hours in a day and presumably the period whilst asleep does not contribute?

Comment by alex_barry on Causal Network Model III: Findings · 2017-11-24T17:58:45.829Z · score: 1 (1 votes) · EA · GW

Ah good point on the researcher salary, it was definitely just eyeballed and should be higher.

I think a reason I was happy to leave it low was as a fudge to take into account that the marginal impact of a researcher now is likely to be far greater than the average impact if there were 10,000 working on x-risk, but I should have clarified that as a separate factor.

In any case, even adjusting the cost of a researcher up to $500,000 a year and leaving the rest unchanged does not significantly change the conclusion, with the very rough calculation still giving ~$10 per QALY (but it obviously leaves less wiggle room for skepticism about the efficacy of research etc.).