Posts

EA Survey bar chart plotter 2015-03-24T02:37:06.394Z · score: 6 (6 votes)

Comments

Comment by pappubahry on Saying 'AI safety research is a Pascal's Mugging' isn't a strong response · 2015-12-17T01:14:17.722Z · score: 0 (0 votes) · EA · GW

If I were debating you on the topic, it would be wrong to say that you think it's a Pascal's mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from "tiny probability of gigantic benefit" in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.

(This is in contrast to MIRI, which as SIAI was utterly woeful and which in its current incarnation still didn't look like a research institute worthy of the name when I last checked in during the great Tumblr debate of 2014; maybe they're better now, I don't know.)

In that context, you'll have to keep politely telling people that you think the case is stronger than the position your most prominent academic supporter argues from, because the "Pascal's mugging" thing isn't going to disappear from the public debate.

Comment by pappubahry on Saying 'AI safety research is a Pascal's Mugging' isn't a strong response · 2015-12-16T14:37:24.773Z · score: 5 (5 votes) · EA · GW

The New Yorker writer got it straight out of this paper of Bostrom's (paragraph starting "Even if we use the most conservative of these estimates"). I've seen a couple of people report that Bostrom made a similar argument at EA Global.

Comment by pappubahry on Saying 'AI safety research is a Pascal's Mugging' isn't a strong response · 2015-12-15T14:32:37.880Z · score: 4 (4 votes) · EA · GW

I get what you're saying, but, e.g., in the recent profile of Nick Bostrom in the New Yorker:

No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of reducing an existential threat by a billionth of a billionth of one per cent would be worth a hundred billion times the value of a billion present-day lives. Put more simply: he believes that his work could dwarf the moral importance of anything else.
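(Spelled out, the arithmetic in that sketch is just multiplication -- a rough reconstruction, with the size of the future as the one free parameter:

$$
\underbrace{10^{-2}}_{\text{1\% chance}} \times \underbrace{10^{-20}}_{\text{a billionth of a billionth of 1\%}} \times \underbrace{N}_{\text{future lives}} \;\geq\; \underbrace{10^{11} \times 10^{9}}_{\text{a hundred billion} \,\times\, \text{a billion lives}}
$$

which holds for any $N \geq 10^{42}$ -- comfortably below the $10^{52}$-plus figures Bostrom's paper gives for how many minds a colonised cosmos could support.)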

While the most prominent advocate in the respectable-academic part of that side of the debate is making Pascal-like arguments, there's going to be some pushback about Pascal's mugging.

Comment by pappubahry on [link] GiveWell's 2015 recommendations are out! · 2015-11-21T04:16:20.664Z · score: 1 (1 votes) · EA · GW

I confess I'm a bit surprised no one else has linked this yet

Judging by GiveWell's Twitter and Facebook feeds, the post is mis-dated -- it only went live about 8 hours ago (at time of writing my comment), rather than 2 or 3 days ago.

Comment by pappubahry on Suggestions thread for questions for the 2015 EA Survey · 2015-05-14T11:43:41.374Z · score: 3 (3 votes) · EA · GW

I think this is referring to a common probability question, e.g., example 3 here.

Comment by pappubahry on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-24T00:40:28.014Z · score: 1 (1 votes) · EA · GW

Thanks Peter! I'll make the top-level post later today.

How did you do that so quickly?

(I might have given the impression that I did this all during a weekend. This isn't quite right -- I spent 2-3 evenings, about 8 hours in total, going from the raw CSV files to a nice and compact .js function. Then I wrote the plotter on the weekend.)

I did this bit in Excel. With the money amounts in column A, I inserted three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the min of the range given, D for the max. In column C, I started with `=IF(ISNUMBER(A2), A2, "")` and dragged that formula down the column. Then I went through line by line, reading off any text entries and turning them into currency/min/max (if a single value was reported, I entered it as the min and left the max blank): currency, tab, number, enter, currency, tab, number, tab, number, enter, currency, tab...

It's not a fun way to spend an evening (which is why I didn't do the lifetime donations as well), but it doesn't actually take that long.

Then: a new column E for `=AVERAGE(C2:D2)`, dragged down the column. Then I typed the average currency conversions for 2013 into a new sheet and did a lookup (most users would use `VLOOKUP`, I think; I used `MATCH` and `OFFSET`) to get my final USD numbers in column F.
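For anyone who'd rather script this than click through Excel, here's a rough R sketch of the same pipeline. The column names and exchange rates are made up for illustration, and the hand-transcription of free-text entries into currency/min/max stays manual either way:

```r
# Sketch only: assumes a data frame with the free-text 'amount' column
# plus the hand-entered 'currency', 'min' and 'max' columns described above.
donations <- read.csv("survey_donations.csv", stringsAsFactors = FALSE)

# The ISNUMBER step: rows that are plain numbers become the min of the range.
num <- suppressWarnings(as.numeric(donations$amount))
donations$min <- ifelse(is.na(donations$min), num, donations$min)

# Column E: midpoint of the reported range; when max is blank this is just min.
donations$mid <- rowMeans(donations[, c("min", "max")], na.rm = TRUE)

# Average 2013 exchange rates to USD (illustrative numbers only); a named
# vector stands in for the MATCH/OFFSET lookup against the rates sheet.
rates <- c(USD = 1.00, GBP = 1.56, EUR = 1.33, AUD = 0.97)

# Column F: final USD amounts.
donations$usd <- donations$mid * rates[donations$currency]
```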

Also, do you have the GitHub code for your plotter?

As a fierce partisan of the `_final_really_2` school of source control, I've yet to learn how to GitHub. You can view the Javascript source easily enough though, and save it locally. (I suggest deleting the Google Analytics "i,s,o,g,r,a,m" script if you do this, or your browser might go looking for Google in your file system for a few seconds before plotting the graphs.) The two scripts not in the HTML file itself are d3.min.js and ea_survey.data.js.

A zip file with my ready-to-run CSV file and the R script to turn it into a Javascript function is here.

Comment by pappubahry on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-23T08:19:23.079Z · score: 4 (4 votes) · EA · GW

I've made a bar chart plotter thing with the survey data: link.

Comment by pappubahry on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-17T10:41:05.488Z · score: 3 (5 votes) · EA · GW

The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from

Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]

until (at least)

Over 2013, which charities did you donate to? [Against Malaria Foundation].

Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.

Comment by pappubahry on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-17T10:16:35.457Z · score: 5 (7 votes) · EA · GW

Thanks for this, and thanks for putting the full data on github. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).

I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranking cause. This seemed surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which groups together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)

Comment by pappubahry on I am Samwise [link] · 2015-01-09T03:01:23.887Z · score: 1 (3 votes) · EA · GW

A hero means roughly what you'd expect - someone who takes personal responsibility for solving world problems. Kind of like an effective altruist.

What I understand about rationality 'heroes' is limited to what I've gleaned from Miranda's post, but to me it seems like earning to give fits much more naturally into a sidekick category than into a hero category.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-04T09:04:07.904Z · score: 1 (1 votes) · EA · GW

Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?

I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't know of any simple rules to decide what principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way.

But I agree that, all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think 'repugnant' means in this future!). Applied to present-me's day-to-day problems, such a simplified theory will likely give slightly different answers from what I think today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.

But I don't think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There's wiggle-room in places, but there are also some really solid intuitions that I don't expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-04T07:27:31.700Z · score: 0 (0 votes) · EA · GW

Maybe I've misinterpreted 'repugnant' here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for 'repugnant'. But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context -- there might be nuances that I'm missing), so I'll try to describe my position using other words.

If my moral theory, when applied to some highly unrealistic thought experiment (which doesn't have some clear analog to something more realistic), results in a conclusion that I really don't like, then:

  • I accept that my moral theory is not a complete and correct theory; and

  • this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-04T06:12:40.827Z · score: 0 (0 votes) · EA · GW

If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.

I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.

As an analogy

I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it's just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn't an exact match for the child in the pond.

The Nazi officials case... seems pretty plausible to me? Like didn't that actually happen?

Something of a more intermediate case between the drowning child and creating large populations would be the idea of murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory." (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that's also a reasonable response.)

A thought experiment involving the creation of large populations gets a plausibility score near zero.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-04T01:59:16.454Z · score: 3 (3 votes) · EA · GW

Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'?

Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-03T08:36:22.971Z · score: 2 (2 votes) · EA · GW

OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):

  • In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.

  • Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can write papers on", rather than what is actually important. The repugnant conclusion falls under this category.

Whichever way it works out, I stick resolutely to saving the drowning child.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-03T03:08:00.042Z · score: 0 (2 votes) · EA · GW

If that procedure was followed consistently, it would disprove all moral theories.

I consider this a reason to not strictly adhere to any single moral theory.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-02T15:01:24.707Z · score: 2 (2 votes) · EA · GW

Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.

  1. I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.

  2. Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)

  3. Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there's a natural way to weight all the branches if you want to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, ....)
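(To spell that parenthetical out with some notation of my own: weight each branch $i$ by its Born probability $|\alpha_i|^2$, so that

$$
\mathbb{E}[U] = \sum_i |\alpha_i|^2 \, U_i, \qquad \sum_i |\alpha_i|^2 = 1,
$$

and expected utility across branches behaves exactly like an ordinary expectation over outcomes.)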

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-02T12:35:50.608Z · score: 1 (1 votes) · EA · GW

Letting the child drown in the hope that

a) there's an infinite number of life-forms outside our observable universe, and

b) the correct moral theory does not simply require counting utilities (or whatever) in some local region

strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-02T05:36:35.266Z · score: 1 (3 votes) · EA · GW

That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise -- everyone saves the drowning child.

Comment by pappubahry on Problems and Solutions in Infinite Ethics · 2015-01-02T03:17:47.306Z · score: 0 (6 votes) · EA · GW

The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists

This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably local region, like the Earth. No-one responds to the drowning child by saying, "well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit". It is just not a consideration.

So I disagree very strongly with the framing of your post, since the bit I quoted is in the summary. The rest of your post is on the somewhat more reasonable topic of comparing utilities across an infinite number of generations. I don't really see the use of this (you don't need a fully developed theory of infinite ethics to justify a carbon tax; considering a handful of generations will do), and don't see the use of the post on this forum, but I'm open to suggestions of possible applications.

Comment by pappubahry on Open Thread 6 · 2014-12-15T06:19:46.199Z · score: 2 (2 votes) · EA · GW

The envelope icon next to "Messages" in the top-right (just below the banner) becomes an open envelope when you have a reply. (I think it turns a brighter shade of blue as well? I can't remember.) The icon returns to being a closed envelope after you click on it and presumably see what messages/replies you have.

Comment by pappubahry on Where are you giving and why? · 2014-12-12T05:45:16.140Z · score: 2 (2 votes) · EA · GW

My values align fairly closely with GiveWell's. If they continue to ask for donations then probably about 20% of my giving next year will go to them (as in the past two years). Apart from that:

GiveWell's preferred split across their recommended charities AMF/SCI/GD/EvAc (Evidence Action, which includes Deworm the World) is 67/13/13/7. Since most of the reasoning behind that split is how much money each charity could reasonably use, and I agree with GiveWell that bednets are really cost-effective, I won't be deviating much from GiveWell's recommendation.

Probably I will reduce GiveDirectly's share to 5% or so (with increases for SCI and EvAc) -- I haven't studied GiveWell's latest numbers for the cost-effectiveness calculation closely, but their headline result puts GiveDirectly's effectiveness far below that of either deworming or bednets. So I'll continue donating a relatively small amount to GD in recognition of them being methodologically really great.

I haven't yet given much thought to GiveWell's other 'standout' charities, and whether it's correct to donate to them or not.

Comment by pappubahry on The new Animal Charity Evaluators recommendations are out · 2014-12-06T02:00:10.387Z · score: 2 (2 votes) · EA · GW

Relying on hoped-for compounding long-term benefits to make donation decisions is at least not a complete consensus (I certainly don't rely on them).

My understanding of your position is:

  • Human welfare benefits compound, though we don't know how much or for how long (and I am dubious, along with one of the commenters, about a compounding model for this).

  • Animal welfare benefits might compound if they're caused by human value changes.

In the case of ACE's recommendations, we have three charities which aim to structurally change human society. So we have short-term benefits which appear much larger than those from human-targeted charities, with possibly compounding and poorly researched long-term benefits, as compared to possibly compounding and poorly researched long-term benefits from human-targeted charities.

I would describe the paragraph of JPB's that you quote as highly relevant; at the very least it's useful even if not sufficient information to make a donation decision based on expected impact.

(For the record, I've yet to donate to animal welfare charities because I am a horrible speciesist, but I think the animal welfare wing of EA deserves to be much more prominent than it currently is.)

Comment by pappubahry on Open Thread 6 · 2014-12-02T11:50:35.972Z · score: 1 (1 votes) · EA · GW

My internal definition is "take a job (or build a business) so that you donate more than you otherwise would have" [1]. It's too minimalist a definition to work in every case (it'd be unreasonable to call someone on a $1mn salary who donates $1000 "earning to give", even if they wouldn't donate anything on $500k), but if you're the sort of person who considers "how much will I donate to charity" as an input into your choice of job, then I think the definition will work most of the time.

There probably needs to be a threshold amount donated for "earning to give" to be applied in an EA context, but I don't see the need for a progressive percentage scale for higher-income earners. If you're giving 10% of $1mn, then you're doing a lot more than me and my higher percentage of a lot less.

[1] That needs a bit of pedantic re-writing for it to perfectly match what I mean. E.g., I consider myself earning to give because, if it weren't for my pesky conscience, I'd negotiate a reduced salary for a four-day work week. It'd still be basically the same job, just a different contract... anyway I don't think this sort of pedantry is important here.

Comment by pappubahry on Why long-run focused effective altruism is more common sense · 2014-11-22T06:47:37.110Z · score: 3 (3 votes) · EA · GW

It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas.

I'm happy to go with your former definition here (I'm dubious about putting the label 'altruism' onto something that's profit-seeking, but "high-impact good things" are to be encouraged regardless). My objection is that I haven't seen anyone make a case that these long-term ideas are cost-effective. e.g.,

My best guess is that these activities have a significantly larger medium term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it's not a clear case either way.

Has anyone tried to make this case, discussing the marginal impact of an extra technology worker? We'd agree that as a whole, scientific and technological progress are enormously important, and underpin the poverty-alleviation work that we're comparing these longer-term ideas to. But, e.g., if you go into tech and help create a gadget, and in an alternative world some sort of similar gadget gets released a little bit later, what is your impact?

The answer to that last question might be large in expectation-value terms (there's a small probability of you making a profoundly different sort of transformative gadget), but I'd like to see someone try to plug some numbers in before it becomes the main entry point for Effective Altruism.

Note that e.g. spending money to influence elections is a pretty common activity, it seems weird to be so skeptical.

When Ben wrote "smarter leaders", I interpreted it as some sort of qualitative change in the politicians we elect -- a dream that would involve changing political party structures so that people good at playing internal power games aren't rewarded, and instead we get a choice of more honest, clever, and dedicated candidates. If, on the other hand, electing smarter leaders means donating to your preferred party's or candidate's get-out-the-vote campaign... well, I would like to see the cost-effectiveness estimate.

(Ben might also be referring to EAs going into politics themselves, and... fair enough. I doubt it'll apply to more than a small minority of EAs, but he only spent a small minority of his post writing about it.)

there are many other technocratic policies in the same boat, where you'd expect money to be helpful.

I think this is reasonable, and expectation-value impact estimates should be fairly tractable here, since policy wonks have often done cost-benefit analyses (leaving only the question of how much marginal donated dollars can shift the probability of a policy being enacted).

Overall I still feel that these ideas, as EA ideas, are in an embryonic stage, since they lack cost-effectiveness guesstimates.

Comment by pappubahry on Why long-run focused effective altruism is more common sense · 2014-11-21T16:41:16.222Z · score: 10 (12 votes) · EA · GW

Moderate long-run EA doesn't look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.

you’ll want to make investments in technology

I don't understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?

and economic growth

Who knows how to make economies grow?

This will mean better global institutions, smarter leaders, more social science

What is a "better" global institution, and is there any EA writing on plans to make any such institutions better? (I don't mean this to come across as entirely critical -- I can imagine someone being a bureaucrat or diplomat at the next WTO round or something. I just haven't seen any concrete ideas floated in this direction. Is there a corner of EA websites that I'm completely oblivious to? A Facebook thread that I missed (quite plausible)?)

I have even less idea of how you plan to make better politicians win elections.

More social science I can at least understand: more policy-relevant knowledge --> hopefully better policy-making.

Underlying some of what you write is, I think, the idea that political lobbying or activism (?) could be highly effective. Or maybe going into the public service to craft policy. And that might well be right, and it would perhaps put this wing of EA, should it develop, comfortably within the sort of common-sense ideas you describe. (I say "perhaps" because the most prominent policy idea I see in EA discussions -- I might be biased because I agree with and read a lot of it -- is open borders, which is decidedly not mainstream.)

But overall I just don't see where this hypothetical introduction to EA is going to go, at least until the Open Philanthropy Project has a few years under its belt.

Comment by pappubahry on Open thread 5 · 2014-11-21T12:37:18.672Z · score: 1 (1 votes) · EA · GW

The bottom part of your diagram has lots of boxes in it. Further up, "poverty alleviation is most important" is one box. If there were as much detail in the latter as there is in the former, you could draw an arrow from "poverty alleviation" to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn't exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and "poverty alleviation is most important" would be a bottleneck.

Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).

I agree that there's some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare -- there's a bigger "cause-distance" between colonising Mars and working on AI than the "cause-distance" between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and "insight" overstates the difference.

Comment by pappubahry on Open thread 5 · 2014-11-19T12:43:10.779Z · score: 0 (4 votes) · EA · GW

I haven't seen a downvote here that I've agreed with, and for the moment I'd prefer an only-upvote system. I don't know where I'd draw the line on where downvoting is acceptable to me (or what guidelines I'd use); I just know I haven't drawn that line yet.

Comment by pappubahry on Open thread 5 · 2014-11-18T02:55:36.674Z · score: 0 (0 votes) · EA · GW

Previous thread

Comment by pappubahry on Kidney donation is a reasonable choice for effective altruists and more should consider it · 2014-11-18T01:24:09.650Z · score: 7 (7 votes) · EA · GW

Yes, I agree with that, and it's worth someone making that point. But I think in general it is too common a theme in EA discussion to compare some possible altruistic endeavour (here kidney donation) to perfectly optimal behaviour, and then criticise the endeavour as being sub-optimal -- Ryan even words it as "causing net harm"!

In reality we're all sub-optimal, each in our own many ways. If pointing out that kidney donation is sub-optimal (assuming all the arguments really do hold!) nudges some possible kidney donors to actually donate more of their income, then great. But I still think that there are people who would consider donating a kidney but who wouldn't donate an extra half-month's salary instead.

Comment by pappubahry on Kidney donation is a reasonable choice for effective altruists and more should consider it · 2014-11-17T06:48:07.889Z · score: 0 (0 votes) · EA · GW

How long would it take to create $2k of value? That's generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute funds that you would donate, or voluntary contributions that you would make, then it's a net negative activity for an effective altruist.

This can't be the right comparison to make if the 1-2 weeks of life is lost decades from now. The (foregone) altruistic opportunities in 2060 are likely to cost much more than $2000 per 15 DALYs averted.

I think the basic shape of your argument still holds, based on foregone income that you could donate today, but a slightly shorter retirement doesn't look like it makes much difference to one's total altruism (especially if you leave donations to charity in your will).

Comment by pappubahry on Kidney donation is a reasonable choice for effective altruists and more should consider it · 2014-11-17T06:24:14.659Z · score: 0 (0 votes) · EA · GW

That's an unfair comparison.

But it might be a relevant comparison for many people. i.e., I expect that there are people who would be willing to forego some income to donate a kidney (and they may not need to do this, depending on the availability of paid medical leave), but who wouldn't donate all of that income if they kept both kidneys.

Comment by pappubahry on Open Thread 4 · 2014-11-15T03:23:13.001Z · score: 1 (1 votes) · EA · GW

I don't understand what you're pointing us to in that link. The main part of the text tells us that ties are usually broken in swing states by drawing lots (so if you did a full accounting of probabilities and expectation values, you'd include some factors of 1/2, which I think all wash out anyway), and that the probability of a tie in a swing state is around 1 in 10^5.

The second half of the post is Randall doing his usual entertaining thing of describing a ridiculously extreme event. (No-one who argues that a marginal vote is valuable for expectation-value reasons thinks that most of the benefit comes from the possibility of ties in nine states.)

Perhaps some of those details are interesting, but it doesn't look to me like it changes anything of what's been debated in this thread.

Comment by pappubahry on Open Thread 4 · 2014-11-09T10:05:08.047Z · score: 1 (1 votes) · EA · GW

My main response is that this is worrying about very little -- it doesn't take much time to choose who to vote for once or twice every few years.

But in particular,

2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).

is an overstated concern at least for the US (relative risk around 1.2 of dying on the road on election day compared to non-election days) and Australia (relative risk around 1.03 +/- error analysis I haven't done).

Comment by pappubahry on Open Thread 4 · 2014-11-06T14:28:11.201Z · score: 1 (1 votes) · EA · GW

That's OK, even if I had perceived it as an attack, I've thought enough about this topic for it not to bother me!

Comment by pappubahry on Open Thread 4 · 2014-11-06T12:15:56.938Z · score: 0 (0 votes) · EA · GW

As I said to Peter in our long thread, "Eh whatevs". :P

I don't think I can make anything more than a very weak defence of avoiding DAFs in this situation (the defence would go: "They seem kinda weird from a signalling perspective"). I'm terrible at finance stuff, and a DAF seems like a finance-y thing, and so I avoid them.

Comment by pappubahry on Open Thread 4 · 2014-11-06T06:27:52.881Z · score: 1 (1 votes) · EA · GW

Probability that they'll need my money soon:

GAVI: ~0%

AMF: ~50%

SCI: ~100%

You might say "well there's a 50-percentage-point difference at each of those two steps" and think I'm being inconsistent in donating to AMF and not GAVI. But if I try some expectation-value-type calculation, I'll be multiplying the impact of AMF's work by 50% and getting something comparable to SCI, but getting something close to zero for GAVI.
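In toy numbers (the impact figures $I$ are placeholders, not GiveWell's estimates):

$$
\mathbb{E}[\text{GAVI}] \approx 0 \times I_{\text{GAVI}} = 0, \qquad
\mathbb{E}[\text{AMF}] \approx 0.5 \times I_{\text{AMF}}, \qquad
\mathbb{E}[\text{SCI}] \approx 1 \times I_{\text{SCI}},
$$

so if bednets are, say, twice as cost-effective as deworming, AMF and SCI come out comparable in expectation, while GAVI's expected impact is near zero no matter how large $I_{\text{GAVI}}$ is.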

Comment by pappubahry on Open Thread 4 · 2014-11-06T04:29:05.356Z · score: 1 (1 votes) · EA · GW

AMF is far more likely to need the money soon than GAVI.

Comment by pappubahry on Open Thread 4 · 2014-11-06T03:53:56.227Z · score: 1 (1 votes) · EA · GW

Presumably they've already factored in the relative strength of bednets.

I don't think this is relevant to GiveWell's decision not to recommend AMF.... Immunisations are super-cost-effective, but GiveWell don't make a recommendation in this area because GAVI or UNICEF or whoever already have committed funding for this.

I've got two choices if I want to donate all my donation money this year:

  • Donate to AMF, which is likely higher impact, but maybe my money won't be spent for a couple of years.

  • Donate somewhere else, likely lower impact.

I think an AMF donation looks a pretty decent option here. I would say that the EA-controversial part of my thinking is the insistence on donating all my donation money this year, rather than using a donor-advised fund (to which I say, "Eh, whatevs...").

Comment by pappubahry on Open Thread 4 · 2014-11-05T02:51:39.396Z · score: 0 (0 votes) · EA · GW

It's some kind of balancing act between supporting GiveWell-recommended charities as a way of supporting GiveWell, and recognising that our best guess is that bednets are substantially more cost-effective than deworming/cash transfers. (Pending the forthcoming update....)

Comment by pappubahry on Open Thread 4 · 2014-11-05T01:35:04.901Z · score: 1 (1 votes) · EA · GW

About a quarter of my donations this year will go to AMF. I'd feel a bit weird holding on to the money instead of donating it.

Comment by pappubahry on Should Giving What We Can change its Pledge? · 2014-10-23T12:13:33.985Z · score: 5 (5 votes) · EA · GW

The healthcare thing was just an example (though, despite the FAQ on this topic that Owen brought up below, I would still feel dishonest withdrawing from a pledge for this reason). It's the lock-in thing that I just don't feel comfortable with.

I ramped up my donations after discovering GiveWell, and at the time it looked like it cost ~$500 to save a life. Now they reckon it's roughly ten times that amount. The overwhelming moral case for donating today feels around ten times weaker to me than it did in 2009. If the cost per life saved(-equivalent) rises even further in the coming decade, I might decide that I'm only going to chip in a few percent of my income to MSF, say.

Basically I feel more comfortable donating and being an example of someone who donates to cost-effective charities, rather than publicly pledging.

Comment by pappubahry on Should Giving What We Can change its Pledge? · 2014-10-23T05:24:18.688Z · score: 5 (5 votes) · EA · GW

I'm not a GWWC member, because I don't want to lock myself in to a pledge. (I've been comfortably over 10% for a few years, and expect that to continue, but I could imagine, e.g., needing expensive medical care in the future and cutting out my donations to pay for that.) For that reason I wouldn't take the pledge in either its current or its proposed form.

Comment by pappubahry on Effective Altruism is a Question (not an ideology) · 2014-10-18T13:36:38.419Z · score: 2 (2 votes) · EA · GW

The point of that word being there is to reduce the strength of the claim: you're focused on being effective, you're trying hard to be effective, but to say that you are effective is different.

I don't really want to reduce the strength of my claim though[1] -- if I have to be pedantic, I'll talk about being effective in probabilistic expectation-value terms. If donating to our best guesses of the most cost-effective charities we can find today doesn't qualify as "effective", then I don't think there's much use in the word, either to describe an -ism or an -ist. It'd be more accurate to call it "hopefully effective altruism", but I don't think it's much of a sacrifice to drop the "hopefully".

[1] At an emotional level, I have a bit of a I've donated a quarter of my salary to the best charities I could find for the last five years, stop trying to take my noun phrase away reaction as well.

Comment by pappubahry on Effective Altruism is a Question (not an ideology) · 2014-10-18T13:20:09.486Z · score: 1 (1 votes) · EA · GW

Thanks for mentioning that you run EA Melbourne -- I think this difference in perspective is what's driving our -ism/-ist disagreement that I talk about in my earlier comment. I've never been to an EA meetup group (I moved away from Brisbane back in February, missing out by about half a year on the new group that's just starting there...), and I'd wondered what EA "looked like" in these contexts. If a lot of it is just meeting up every few weeks for a chat about EA-ish topics, then I agree that "effective altruist" is a dubious term if applied to everyone there.

Is it the core idea though? None of the introductions I linked to above mention anything about what one "should" do.

Perhaps a different phrasing would be a little better, but however it's worded, moral beliefs and/or moral reasoning motivate most of what I see in the EA movement today -- totally fundamental to everything, even if it's not always explicitly stated. Certainly what keeps me sending out donations every month or so is the internal conviction that it's the right thing to do.

Maybe this is another difference of perspective thing? Like if many of the EA people you see are more passive consumers of EA material, instead of structuring their lives/finances around it, then the fundamental moral motivation of introductions to EA seems absent? I don't know.

Certainly I find the idea of this (persuade others to do good with their resources) being a core motivating philosophy of my life very off-putting.

I see the core motivating philosophy of my life as trying to do good with my resources. Some no doubt see persuading others as an important part of their resources (I mostly fail at it), but to me EA most fundamentally is about maximising one's own impact, in whichever ways one can.

Comment by pappubahry on Effective Altruism is a Question (not an ideology) · 2014-10-18T03:06:22.093Z · score: 1 (1 votes) · EA · GW

Pretty passively.... Like I'll send some money GiveWell's way later this year to help find effective giving opportunities, but it doesn't feel inside of me as though I'm aspiring to something here. The GiveWell staff might aspire to find those better giving opportunities; I merely help them a bit and hope that they succeed.

I also think that describing ourselves primarily as having a never-ending aspiration is selling us short if we're actually achieving stuff.

Comment by pappubahry on Effective Altruism is a Question (not an ideology) · 2014-10-17T16:19:41.149Z · score: 5 (5 votes) · EA · GW

I disagree with a bit of the intro and part one.

You can easily say that Effective Altruism answers a question. The question is, "What should I do with my life?" and the answer is, "As much good as possible (or at least a decent step in that direction)." Only if you take that answer as a starting premise can you then say that EA asks the question, "How do I do the most good?"

Conversely, you can just as easily say that feminism doesn't ask whether men and women should be equal (that they should be is the starting premise), it asks how society is structurally unequal and how we might re-make society so that it becomes equal.

So I don't see EA as necessarily in some different category than the (other) ideologies that you list.

In part one, I just... don't really see a big issue with -ism versus -ist, at least not one anywhere near as large as you're claiming exists. “Can I [x] and still be a member of the Effective Altruism movement?” seems about as natural a question to ask as “Can I [x] and still be an Effective Altruist?” As long as there's an EA movement that's in any way demanding of its followers, it provokes the same sort of questions regardless of whether we call ourselves followers of Effective Altruism or Effective Altruists. Insofar as there's a problem, I think it's the "impudence" that you mention of calling this movement Effective Altruism in the first place.

(If someone comes up with a better term for EA followers, I'll be happy to adopt it -- I don't see it as a big issue. In the meantime I'll occasionally call myself an "EA" if it makes sense to do so in context.)

Alternative descriptors include “aspiring effective altruist”, “interested in Effective Altruism”, “member of the Effective Altruism movement”… What do you think of those options?

"Aspiring effective altruist" doesn't describe me: I don't aspire to anything more than what I'm currently doing, which is donating a decent-sized fraction of my salary to charity. I plateaued in my journey towards an idealised EA several years ago.

"Interested in Effective Altruism" is far too weak.

"Member of the Effective Altruism movement" is something I'd happy to call myself.

Comment by pappubahry on One month in - it's time for more introductions · 2014-10-11T12:34:01.977Z · score: 9 (9 votes) · EA · GW

I didn't make an introduction comment in the last post, so I suppose I should do one here. I'm David Barry -- one of the migrated posts from the old blog is authored by the user David_Barry, but I signed up with my usual Internet handle before thinking about the account that had already been made for me. I live in Perth, where I moved for work earlier this year, having previously lived in Brisbane.

I always used to think I'd become a physicist one day, but what was supposed to be a PhD went badly for too long and I escaped with a Master's. I've now been working in mining geostatistics for almost six years, and donating a chunk of my salary to GiveWell-recommended charities for five years.

I don't do much actively in EA apart from the donations I send out roughly once a month. Occasionally I'll knuckle down and work through cost-effectiveness calcu-guesstimates, but mostly I just like skimming the EA Facebook group and this forum, occasionally chipping in.

Comment by pappubahry on Open Thread 2 · 2014-10-09T01:53:49.210Z · score: 0 (0 votes) · EA · GW

I was too lazy to specify that I was talking about the world as it is.

A couple might have a third (or first, or...) child, or they might not. I can accept that the two possibilities lead to slightly different total or average utilities, but as I said, I am not utilitarian on this point. I think we just allow people to choose how many children they have, and we build the rest of ethics around that.

Comment by pappubahry on Open Thread 2 · 2014-10-08T08:49:09.449Z · score: 3 (3 votes) · EA · GW

To me, the decision (freely made) to have children is morally neutral -- I am not utilitarian on this topic.

Birth rates usually fall substantially as female education levels rise and women become more empowered generally. I would be happier about the world if countries that currently have high birth rates saw those birth rates fall thanks to better education levels etc. The sort of drastic fall in birth rates seen in, e.g., South Korea and Iran is caused by large society-wide changes, and I don't think it's likely that as an outside donor I can do anything to help bring about similar society-wide change in, e.g., Nigeria.

But improved access to contraceptives and family planning information helps at least some couples choose to have fewer children, and that is something that I would plausibly donate towards. (I don't know what sort of cost-per-unwanted-birth-averted figure I'd need to prefer a donation to, say, Marie Stopes over a donation to SCI, but it's something I would carefully consider if I did see those figures.)

I can't think of any realistic cases where I would pay for extra people to be born.