Comments

Comment by benmillwood on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-17T18:30:54.738Z · score: 4 (6 votes) · EA · GW

I don't think that "going silent" or failing to report donations is an indication that people are not meeting the pledge. Nowadays I don't pay GWWC as an organisation much (or any) attention, but I'm still donating 10% a year (and then some).

To be honest I haven't read closely enough to understand where you do and don't account for "quiet pledge-keepers" in your analysis, but I at least think stuff like this is just plain wrong:

total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)

Comment by benmillwood on Amazon Smile · 2019-06-15T22:19:10.033Z · score: 1 (1 votes) · EA · GW

I couldn't find The Clear Fund when I looked just now. Would be interested in someone confirming that it's still there.

Comment by benmillwood on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-06-07T14:30:38.120Z · score: 1 (1 votes) · EA · GW

If you want to look up the maths elsewhere, it may be helpful to know that a constant, independent chance of death (or survival) per year is modelled by a geometric distribution (a special case of the negative binomial).
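
As a minimal sketch of that model (the symbol p below is an illustrative annual death probability, not a figure from the post):

```latex
% Assumed notation: p is a constant annual death probability (illustrative only).
% Probability of dying in year k: survive the first k-1 years, then die.
P(T = k) = (1 - p)^{k-1} \, p, \qquad k = 1, 2, 3, \ldots
% Expected number of years survived under this model:
\mathbb{E}[T] = \frac{1}{p}
```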

Comment by benmillwood on Evidence Action is shutting down No Lean Season · 2019-06-07T14:23:30.241Z · score: 6 (5 votes) · EA · GW

Sounds like the fact that there was already substantial doubt over whether the program worked was a key part of the decision to shut it down. That suggests that if the same kind of scandal had affected a current top charity, they would have worked harder to continue the project.

Comment by benmillwood on There's Lots More To Do · 2019-06-06T03:20:16.824Z · score: 4 (3 votes) · EA · GW

I actually think even justifying yourself only to yourself, being accountable only to yourself, is probably still too low a standard. No-one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), he needs to meet the standards that those individuals, or the community as a whole, set for what's acceptable.

Comment by benmillwood on There's Lots More To Do · 2019-06-06T03:06:44.135Z · score: 5 (3 votes) · EA · GW

When you say "you don't need to justify your actions to EAs", I have sympathy with that, because EAs aren't special: we're no particular authority and don't have internal consensus anyway. But you also seem to be arguing "you don't need to justify your actions to yourself / at all". I'm not confident that's what you're saying, but if it is, I think you're setting too low a standard. If people aren't required to live in accordance with even their own values, what's the point in having values?

Comment by benmillwood on Could the crowdfunder to prosecute Boris Johnson be a high impact donation opportunity? · 2019-06-06T01:48:44.727Z · score: 0 (4 votes) · EA · GW

It's odd to call Boris an opponent of the government. He's a sitting MP - he's part of the state. To me this seems to be more about the courts being able to hold Parliament accountable.

Comment by benmillwood on Stories and altruism · 2019-05-20T09:17:49.597Z · score: 2 (2 votes) · EA · GW

I like the idea here a great deal, but I expect there's going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal vs. what's idiosyncratic.

Comment by benmillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T15:19:57.653Z · score: 6 (4 votes) · EA · GW

There's an unanswered question here of why Good Ventures makes grants that OpenPhil doesn't recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don't find it that surprising that they do so. People like to do more than one thing?

Comment by benmillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T05:05:05.110Z · score: 12 (4 votes) · EA · GW

Have you attempted to contact GV or OpenPhil directly about this?

Comment by benmillwood on Political culture at the edges of Effective Altruism · 2019-04-14T12:17:17.199Z · score: 12 (10 votes) · EA · GW

I think this is only true with a very narrow conception of what the "EA things that we are doing" are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.

That, I believe, is all that constitutes "EA things" in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on combining those core EA ideas we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally "the experts" on. If the global research community on poverty interventions came to the consensus "actually we think bednets are bad now", then EA orgs would need to listen to that and change course.

"Politicized" questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.

Comment by benmillwood on Does EA need an underpinning philosophy? Could sentientism be that philosophy? · 2019-03-30T18:24:19.746Z · score: 18 (6 votes) · EA · GW

Downvotes aren't primarily to help the person being downvoted. They help other readers, of whom there are, after all, many more than there are writers. Creating an expectation that every downvote should be explained significantly increases the burden on the downvoter, making downvotes less likely to be used and therefore less useful.

Comment by benmillwood on Apology · 2019-03-25T15:05:11.961Z · score: 10 (12 votes) · EA · GW

Just to remark on the "criminal law" point – I think it's appropriate to apply a different, and laxer, standard here than we do for criminal law, because:

  • the penalties are not criminal penalties, and in particular do not deprive anyone of anything they have a right to, like their property or freedom – CEA are free to exclude anyone from EAG who in their best judgement would make it a worse event to attend,
  • we don't have access to the kinds of evidence or evidence-gathering resources that criminal courts do, so realistically it's pretty likely that in most cases of misconduct or abuse we won't have criminal-standard evidence that it happened, and we'll have to either act despite that or never act at all. Some would defend never acting at all, I'm sure (or acting in only the most clear-cut cases), but I don't think it's the mainstream view.

Comment by benmillwood on Apology · 2019-03-25T14:04:17.369Z · score: 29 (13 votes) · EA · GW

And this is a clear case in which I would have first-person authority on whether I did anything wrong.

I think this is the main point of disagreement here. When you make sexual or romantic advances on someone and those advances make them uncomfortable, you're often not aware of the effect you're having (and they may not feel safe telling you), so you're not the authority on whether you did something wrong.

Which is not to say that you're guilty because they accused you! It's possible to behave perfectly reasonably and still have people around you get upset, even blame you for it. In that scenario you would not necessarily have done anything wrong. But more often it looks like this:

  • someone does something inappropriate without realizing it,
  • impartial observers agree, having heard the facts, that it was inappropriate,
  • it seems clearly enough inappropriate that the offender had a moral duty to identify it as such in advance and not do it.

Then they need to apologize and do whatever's necessary to prevent it happening again, up to and including withdrawing from the community.

Comment by benmillwood on Apology · 2019-03-25T13:53:22.214Z · score: 24 (13 votes) · EA · GW

If I heard that a lot of people were feeling uncomfortable following interactions with me, I think it's likely that I would apologize and back off before understanding why they felt that way, and perhaps without even understanding what behaviour was at issue.

I'd trust someone else's judgement as much as or more than my own, particularly when there were multiple other someones, because I'm aware of many cases where people were oblivious to the harm their own behaviour was causing (and indeed, I don't always know how other people feel about the way I interact with them, so I put a lot of effort into giving them opportunities to tell me). Obviously I'd apply some common sense to accusations that e.g. I knew to be factually wrong.

In the abstract, which of these do you think happens more often?

  • Someone makes people uncomfortable without being aware that they are doing so. Other people inform them.
  • Someone doesn't make anyone feel uncomfortable (above the base rate of awkward social interactions). People erroneously tell them that they are doing so.

Now, the second is probably somewhat more likely than I've made it sound, but the first just seems way more ordinary to me. So my outside view is that the most likely reason for people to tell you that you're making others uncomfortable is that you are actually doing that. You're entitled to play this off against what you know of the inside view, but I think it would be pretty weird to just dismiss it entirely.

Comment by benmillwood on Will companies meet their animal welfare commitments? · 2019-02-04T19:44:32.053Z · score: 5 (4 votes) · EA · GW

This is a relatively minor issue, perhaps, but the graph you show from the EggTrack report seems to have its "n=" numbers wrong. Looking at the report itself, the graph has the same values as (and immediately follows) another one which only includes the reported-against commitments, so I'm betting they just copied the numbers from that one accidentally.

(I haven't yet tried to contact CIWF about this and probably won't get around to it, but I'll update this post if I do)

Comment by benmillwood on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T09:58:59.664Z · score: 3 (3 votes) · EA · GW

What was the largest amount that any individual got matched on GT? Given that this year there were only 15 seconds of matching funds, can one person get through enough forms in time to give a lot?

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-11T19:22:19.876Z · score: 1 (1 votes) · EA · GW

I think 2-10x is the wrong average multiplier across lottery winners (though, in fairness, you didn't explicitly claim it was an average). In order to make good grants to new small high-risk things, you need to hear about them, and I suspect most lottery participants don't have the necessary networks and don't have special access to significant private information – after all, private information doesn't spread well.

Concretely I'm suggesting that the median lottery participant doesn't get any benefit at all from the ability to use private information.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-10T15:37:08.375Z · score: 5 (4 votes) · EA · GW

We can imagine three categories of grants:

A. Publicly justifiable

B. Privately justifiable

C. Unjustifiable :)

I agree reports like Adam's will move people from B to A, but I think they will also move people from C to A, by forcing them to examine their choices more carefully and hold themselves to a higher standard.

This model prompts two possible sources of disagreement: you could disagree about the relative proportions of people moving from B vs. from C, or you could disagree about how bad it is to have a mix of B and C vs. more A.

To address the second question, if you think that B is 2-10x more valuable than A, then even if donations in category C are worthless (leaving aside the chance they are net negative), an equal mix of B and C is better than just A, and towards the 10x end of that spectrum, you can justify up to 90% C and 10% B.
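
To spell out that arithmetic as a sketch (the values below are illustrative assumptions: A worth 1, B worth 10x that, C worthless):

```latex
% Assumed illustrative values: v_A = 1, v_B = 10, v_C = 0.
% A portfolio with fraction x in B and (1 - x) in C is worth
x \, v_B + (1 - x) \, v_C = 10x,
% which beats pure A whenever
10x > 1 \iff x > 0.1,
% i.e. even 10% B and 90% C edges out all-A at the 10x end of the spectrum.
```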

But let's return to that parenthetical – could more C donations be net negative, even aside from opportunity cost? I think this risk is underexamined. I suspect most projects won't directly do harm, but well-funded blunders are more visible and reputationally damaging.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-07T16:52:57.255Z · score: 3 (3 votes) · EA · GW

Or because their best granting opportunity can't be justified with publicly available knowledge, or has other weird optics / reputational concerns.

Comment by benmillwood on How High Contraceptive Use Can Help Animals? · 2019-01-07T15:10:38.602Z · score: 12 (5 votes) · EA · GW

So, I'm instinctively creeped out by any attempt to reduce the number of humans, and my initial reaction to this idea was basically "yikes". Having taken time to reflect and read the report, I've come around a little, in that improving access to contraception seems hard to oppose even if you're broadly in favour of more humans rather than fewer (though note that some religious groups oppose it).

That said, I still think there's greater potential for extreme negative reactions to this idea than you appreciate. In particular, white wealthy people targeting low-income countries with the explicit aim of reducing their population has a chance of tripping people's "eugenics sirens" and drawing comparisons with the long and racist history of compulsory sterilizations. I'm not saying I would agree with those comparisons – it seems very clear that your motivations are different, and the ethnicity of your target group is coincidental / irrelevant – but I don't think that everyone would believe in your good faith as much as I do; some compulsory or semi-coercive sterilization was done covertly and in the guise of helping the recipients, so some may feel obliged to be especially wary of anything superficially similar.

You briefly addressed reputational risk in this passage:

The intervention is middling in terms of reputational and field building effects, because there is no significant risk of turning people off animal advocacy or vegetarianism if the organization wouldn’t be promoted as a directly animal-focused charity.

Bluntly, this comes across as dishonest. Aren't you worried that people might discover your true motivations aren't the same as your apparent ones, and distrust animal advocates in future?

Comment by benmillwood on Public policy push for effective altruism · 2019-01-07T14:08:28.650Z · score: 1 (1 votes) · EA · GW

In the UK, there is the All-Party Parliamentary Group for Future Generations, although I'm not sure how much they actually do.

Comment by benmillwood on Is The Hunger Site worth it? · 2018-11-30T14:54:07.741Z · score: 1 (1 votes) · EA · GW

Also, if you do this, please come back and tell us what you discovered :)

Comment by benmillwood on Why EAs in particular are good people to start charities · 2018-06-16T13:21:37.264Z · score: 0 (2 votes) · EA · GW

On what grounds do you expect EAs to have better personal ability?

Something I've been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equivalently competent people, and thus at a given level of ambition, EAs would be systematically less competent. I don't have a huge amount of evidence for this being borne out in practice, but it's one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.

Comment by benmillwood on Three levels of cause prioritisation · 2018-06-03T07:47:44.368Z · score: 1 (1 votes) · EA · GW

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact more by choosing the best implementation within their cause area, or by switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean to the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

Comment by benmillwood on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-25T14:12:28.086Z · score: 1 (3 votes) · EA · GW

I don't think you should update too much on people being unkind on the internet :)

Comment by benmillwood on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-13T09:33:26.840Z · score: 8 (8 votes) · EA · GW

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

Comment by benmillwood on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-05-13T09:06:49.046Z · score: 1 (1 votes) · EA · GW

I think it's sort of bizarre to suggest that out of 25,000 vegetarians, one is responsible for the shed being closed, and the others did nothing at all. Why privilege the "last" decision to not purchase a chicken? It makes more sense to me that you'd allocate the "credit" equally to everyone who chose not to eat meat.

The first 24,999 needed to not buy a chicken in order for the last one to be in a position for their choice to make a difference.
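
One way to see the equal-credit view, as a hedged sketch using the excerpt's stylized numbers:

```latex
% Stylized assumption from the excerpt: supply adjusts in blocks of N = 25{,}000.
% Each abstention has probability 1/N of being the one that triggers a
% reduction of N chickens, so each vegetarian's expected impact is
\frac{1}{N} \times N = 1 \ \text{chicken (in expectation)},
% which matches allocating the "credit" equally across all N of them.
```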

Comment by benmillwood on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-13T08:36:40.235Z · score: 4 (3 votes) · EA · GW

It's not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment by benmillwood on Syllabus for Course on Effective Altruism · 2018-05-09T09:06:01.020Z · score: 0 (0 votes) · EA · GW

I think on balance there's a strong chance you're right, but there IS a lose-lose outcome, where consumer pressure drives the companies to fire all their sweatshop employees and move somewhere they can hire people from a different, less needy population (somewhere that maybe has different labour laws, or in some other way pacifies many of the consumer activists).

Comment by benmillwood on Empirical data on value drift · 2018-05-09T08:27:13.491Z · score: 2 (4 votes) · EA · GW

First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.

Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.

Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.

More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?

(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)

Comment by benmillwood on Empirical data on value drift · 2018-05-09T07:46:08.867Z · score: 0 (0 votes) · EA · GW

But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA

If you gave 60% of your income, would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to me to be a bad idea.)

Comment by benmillwood on How fragile was history? · 2018-02-09T17:48:26.106Z · score: 0 (0 votes) · EA · GW

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)

Comment by benmillwood on The almighty Hive will · 2018-02-09T16:40:45.660Z · score: 6 (6 votes) · EA · GW

I would be interested in funding this.

Comment by benmillwood on #GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving · 2017-12-16T16:38:03.988Z · score: 3 (3 votes) · EA · GW

For the benefit of future readers: Giving Tuesday happened, and the matching funds were exhausted within about 90 seconds. Of ~$370k in total donations, ~$46k (about 12%) was matched, which was lower than hoped. William wrote up a lessons-learned document as a Google doc.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-26T09:46:31.964Z · score: 2 (2 votes) · EA · GW

Can't help but feel this thoughtful and comprehensive critique of negative utilitarianism is wasted on being buried deep in the comments of a basically unrelated post :)

Promote to its own article?

Comment by benmillwood on Against neglectedness · 2017-11-25T08:07:49.880Z · score: 0 (0 votes) · EA · GW

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1): this is a legitimate illustration that the framework only applies where it applies, but in practice it seems like it isn't an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resources, and it seems like that's true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is absolutely contingent on what we in-practice discover to be the important considerations when we try to evaluate a cause from scratch.
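
For reference, the factorization being defended here is the standard one (a sketch of the usual decomposition, nothing beyond what the framework itself says):

```latex
\frac{\text{good done}}{\text{extra resource}}
= \underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{scale}}
\times \underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{solvability}}
\times \underbrace{\frac{\%\ \text{increase in resources}}{\text{extra resource}}}_{\text{neglectedness}}
```

Each intermediate numerator cancels with the next denominator, which is exactly why the product trivially simplifies to the metric – and also why the neglectedness factor blows up when existing resources are zero, as in objection (1).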

So while there's no fundamental reason why neglectedness, particularly as measured in the form of the ratio of percentage per resource, needs to be a part of your analysis, it just turns out to be the case that you can often find e.g. two different health interventions that are otherwise very comparable in how much good they do, but with very different ability to consume extra resources, and that drives a big difference in their attractiveness as causes to work on.

If ever you did want to evaluate a cause where the existing resources were zero, you could just as easily swap out the problematic cancelling numerator/denominator pair for another – saying the same thing in absolute instead of relative terms – and the rest of the model would more or less stand up. Whether that should be done in general for evaluating other causes as well is a judgement call about how these numbers vary in practice and which situations are most easily compared and contrasted.

Comment by benmillwood on Against neglectedness · 2017-11-25T07:37:50.001Z · score: 0 (0 votes) · EA · GW

To clarify, this only applies if everyone else is picking interventions at random, but you're still managing to pick the best remaining one (or at least better than chance).

It also seems to me like it applies across causes as well as within causes.

Comment by benmillwood on Against neglectedness · 2017-11-25T07:31:36.233Z · score: 0 (0 votes) · EA · GW

The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on

This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I'd still disagree is because I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they're more rational in one context than the other. A key part of effective altruism's value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.

in which case more people working on a field would indicate that it was more worth working on.

I think if you really believe people are rational in the way described, more people working on a field doesn't necessarily give you a clue as to whether more people should be working on it or not, because you expect the number of people working on it to roughly track the number of people who ought to work on it -- you think the people who are not working on it are also rational, so there must be circumstances under which that's correct, too.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-18T15:54:23.863Z · score: 5 (5 votes) · EA · GW

I believe him. Moreover it's not that hard to find people in history who have knowingly and deliberately endured hideous conditions because they thought it was necessary for some principle they held, so I don't even think he's that rare.

Comment by benmillwood on Effective Altruism London - Strategic Plan & Funding Proposal 2018 · 2017-11-18T09:20:23.577Z · score: 0 (0 votes) · EA · GW

Is "Part 3. Specific lessons on running a large local community" still on the way?

Comment by benmillwood on Can we apply start-up investing principles to non-profits? · 2017-07-23T16:02:19.677Z · score: 2 (2 votes) · EA · GW

In "For-profit investing typically does not have massive negative returns, but non-profit investing can", I understand this to only be true in the sense that for-profit investing is only concerned with financial returns, whereas non-profit investing is concerned with returns of all kinds.

For-profit investing can still have negative externalities, of course, it's just that the shareholders aren't really obliged to care about them :)

Comment by benmillwood on The Philanthropist’s Paradox · 2017-07-23T15:43:15.923Z · score: 0 (0 votes) · EA · GW

It's worth pointing out that if time just advances forever, so that your current time is just "T seconds after the starting point", then it is simultaneously true that:

  • time is infinite
  • every instant has a finite past (and an infinite future)

The second point in particular means that even though time is infinite, you still can't wait an infinite amount of time and then do something. I think that's what MichaelStJules was getting at.

Your mixed strategy has its own paradox, though – suppose you decide that one strategy is better than another if it "eventually" does more total good – that is, there's a point in time after which its "total amount of good done so far" exceeds the other strategy's for the rest of eternity. You have to do something like this because it doesn't usually make sense to ask which strategy achieved the most good "after infinite time", since infinite time never elapses.

Anyway, suppose you have that metric of "eventual winner". Then your strategy can always be improved by reducing the fraction you donate, because the compounding growth of the investment will eventually outpace the proportional reduction in the donation rate. But as soon as you reduce the fraction to zero, you no longer get any gains at all. So you have the odd situation where no fraction is optimal – for any strategy, there is always a better one.
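
To make that concrete, here is a hedged sketch with assumed notation (capital W grows at rate r and is donated at constant rate d, both symbols invented for illustration):

```latex
% Assumed setup: capital W(t) grows at rate r and is donated at rate d, with d < r.
W'(t) = (r - d) \, W(t) \implies W(t) = W_0 \, e^{(r-d)t}
% Cumulative donations by time t:
D(t) = \int_0^t d \, W(s) \, ds = \frac{d}{r-d} \, W_0 \left( e^{(r-d)t} - 1 \right)
% For any smaller fraction d' < d, the exponent r - d' is larger, so the
% cumulative total under d' eventually overtakes the total under d: every
% positive d is "eventually beaten" by a smaller one, yet d = 0 donates nothing.
```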

In a context of infinite possible outcomes and infinite possible choice pathways, this actually isn't that surprising. You might as well be surprised that there's no largest number. And perhaps that applies just as well to the original philanthropist's paradox – if you permit yourself an infinite time horizon to invest over, it's just not surprising that there's no optimal moment to "cash in".

As soon as you start actually encoding your beliefs that the time horizon is in fact not infinite, I'm willing to bet you start getting some concrete moments to start paying your fund out, and some reasonable justifications for why those moments were better than any other. To the extent that the conclusion "you should wait until near the end of civilization to donate" is still a counterintuitive one, I claim it's just because of our (correct) intuition that investing is not always better than donating right now, even in the long run. That's the argument that Ben Todd and Sanjay made.

Comment by benmillwood on Donating To High-Risk High-Reward Charities · 2017-02-25T13:46:43.737Z · score: 0 (0 votes) · EA · GW

One thing which makes me confident that object-level risk[1] is important in for-profit investing, but leads me to expect it to be less central in charitable work, is that I'm more confident that for-profit risk is priced correctly, or at least not way out of line with what it should be. It seems more plausible to me that there are low-risk high-return charitable opportunities, because people are generally worse at identifying and saturating those opportunities. (Although per GiveWell's post on Broad market efficiency, I now believe this effect is much less striking than I first guessed.)

[1] I'm not sure this is a correct application of "object-level", but I mean actual risk that a given investment will succeed or fail, rather than the "meta" risk that we'll fail to analyse its value correctly. I'm not super confident the distinction is meaningful.

Comment by benmillwood on [CEA Update] Updates from January 2017 · 2017-02-25T13:10:26.669Z · score: 1 (1 votes) · EA · GW

That tiny little Alwaleed Philanthropies footnote tacked onto the end sounds like a big deal to me. Not only is it a pretty significant amount itself, it seems like it might also start conversations about evidence-based philanthropy among cultures and communities that EA hasn't traditionally had much of a foothold in.

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:31:02.089Z · score: 0 (0 votes) · EA · GW

It's hard to recommend this over playing the lottery upfront, so that you actually know whether your research will be directly used or not.

If you think it's important that the research is done even if the money is not used, would you recommend just doing the research project even with merely a hypothetical £10k that isn't actually donated at the end?

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:29:30.424Z · score: 0 (0 votes) · EA · GW

Another key reason against looking for "lego bricks" that I don't think you addressed is that marginal thinking is much more generalizable. You're publishing all your research work, and if I come along afterwards with £1k or £100k, a conclusion you made based on marginal thinking is much more likely to be useful to me than one tailored to your exact donation size.

My guess is that the value of your research in how it informs and influences others may even exceed the value of the £10k directly: if that's modestly likely to be true, it seems a strong recommendation to avoid "exact fit" opportunities.

I guess strictly speaking this kind of motivation falls out of scope for your project, which aims simply to find the best way to spend the £10k. But it's certainly a reason I'm glad you made this choice :)

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:23:32.246Z · score: 1 (1 votes) · EA · GW

As a piece of presentational feedback, I found it a little frustrating to have a title like this one, and yet not have the term "lego brick" specifically and directly explained until something like the third paragraph :)

Comment by benmillwood on Introducing the Oxford Prioritisation Project blog · 2017-02-25T11:54:38.619Z · score: 0 (0 votes) · EA · GW

The name "Oxford Prioritisation Project" has an unhelpful acronym collision :)

Do you have a standard abbreviated form that avoids it? Maybe OxPri, following the website address?

edit: I've found this issue addressed in other comments, and the official answer is apparently "oxprio".

Comment by benmillwood on Anonymous EA comments · 2017-02-20T16:42:47.933Z · score: 2 (2 votes) · EA · GW

"Keep doing the good work you know how to do, if you don't see any better options" still sounds implicitly dismissive to me. It sounds like you believe there are better options, and only a lack of knowledge or vision is keeping this person from identifying them.

Breaking up fistfights and intervening in heroin overdoses to me sound like things that have small-to-moderate chances of preventing catastrophic, permanent harm to the people involved. I don't know how often opportunities like that come up, but is it so hard to imagine they outstrip a GWWC pledger on an average or even substantially above-average salary?