BenMillwood's Shortform 2019-08-29T17:31:56.643Z · score: 1 (1 votes)


Comment by benmillwood on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T21:11:15.676Z · score: 22 (10 votes) · EA · GW

I strongly agree with both this specific sentiment and the general attitude that generates sentiments like this.

However, I think it's worth pointing out that you don't have to agree with the Labour Party's current positions, or think that it's doing a good job, to be a good (honest) member. I think as long as you sincerely wish the party to perform well in elections or have more influence, even if you hope to achieve that by nudging its policy platform or general strategy in a different direction from the current one, then I wouldn't think you were being entryist or dishonest by joining.

(I feel like this criterion is maybe a bit weak and there should be some ideological essence of the Labour Party that you should agree with before joining, but I'm not sure it would be productive to pin down exactly what it was and I expect it strongly overlaps with "I want the Labour Party to do well" anyway)

Comment by benmillwood on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T21:05:10.539Z · score: 8 (8 votes) · EA · GW

I actually thought the "of course I'd rather you'd stay a member" part was odd, since nowhere in the post up to that point had you said anything to indicate that you supported Labour yourself. The post doesn't say anything about whether Labour itself is good or bad, or whether that should factor into your decision to join it at all, but in this comment it sounds like those are crucial questions for whether this step is right or not.

Comment by benmillwood on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-15T05:48:42.699Z · score: 15 (15 votes) · EA · GW

Yeah I think you have to view this exercise as optimizing for one end of the correctness-originality spectrum. Most of what is submitted is going to be uncomfortable to admit in public because it's just plain wrong, so if this exercise is to have any value at all, it's in sifting through all the nonsense, some of it pretty rotten, in the hope of finding one or two actually interesting things in there.

Comment by benmillwood on Movement Collapse Scenarios · 2019-09-03T17:26:40.583Z · score: 16 (9 votes) · EA · GW

GiveWell used to solicit external feedback a fair bit years ago, but (as I understand it) stopped doing so because it found that it generally wasn't useful. Their blog post External evaluation of our research goes some way to explaining why. I could imagine a lot of their points apply to CEA too.

I think you're coming at this from a point of view of "more feedback is always better", forgetting that making feedback useful can be laborious: figuring out which parts of a piece of feedback are accurate and actionable can be at least as hard as coming up with the feedback in the first place. Soliciting comments can give you raw material, but if your bottleneck is not raw material but deciding which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

Certainly you won't gain anything for free, and you may not be able to afford the non-monetary cost.

Comment by benmillwood on BenMillwood's Shortform · 2019-08-29T17:31:56.773Z · score: 25 (11 votes) · EA · GW

Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.

  • Consider who doesn't benefit from your article, and if you can help them filter themselves out.
  • Consider how people might skim-read your article, and how to help them derive value from it.
  • Lead with the punchline – see if you can make the most important sentence in your article the first one.
  • Some information might be clearer in a non-discursive structure (like… bullet points, I guess).

Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you're both working on.

Comment by benmillwood on Ask Me Anything! · 2019-08-29T17:25:24.780Z · score: 2 (2 votes) · EA · GW

Why would the whole community read it? You'd set out in the initial post, as Will has done, why people might or might not be interested in what you have to say, and only people who passed that bar would spend any real time on it. I don't think the bar should be that high.

Comment by benmillwood on Movement Collapse Scenarios · 2019-08-27T11:52:52.149Z · score: 40 (19 votes) · EA · GW

This is a question I consider crucial in evaluating the work of organizations, so it's sort of embarrassing I've never really tried to apply it to the community as a whole. Thanks for bringing that to light.

I think one thing uniting all your collapse scenarios is that they're gradual. I wonder how much damage could be done to EA by a relatively sudden catastrophe, or perhaps a short-ish series of catastrophes. A collapse in community trust could be a big deal: say there was a fraud or embezzlement scandal at CEA, OPP, or GiveWell. I'm not sure that would be catastrophic by itself, but perhaps if several of the organizations were damaged at once it would make people skeptical about the wisdom of reforming around any new centre, which would make it much harder to co-ordinate.

Another thing that I see as a potential risk is high-level institutions having a pattern of low-key misbehaviour that people start to see (wrongly, I hope) as an inevitable consequence of the underlying ideas. Suppose the popular perception starts to be "thinking about effectiveness in charity is all well and good, but it inevitably leads down a road of voluntary extinction / techno-utopianism / eugenics / something else low-status or bad". Depending on how bad the thing is, smart thoughtful people might start self-selecting out of the movement, and the remainder might mismanage perceptions of them even worse.

Comment by benmillwood on Open Thread #45 · 2019-07-28T16:42:06.130Z · score: 9 (5 votes) · EA · GW

Recent EA thinking on this is probably mostly:

Both claim to have done a lot of research, but I don't think either Founders Pledge or Let's Fund has a GiveWell-like track record. I'm slightly nervous that we're repeating the mistake we (as a community) made when we recommended Cool Earth on the strength of Giving What We Can's relatively cursory investigation, only for a somewhat less cursory investigation to suggest it wasn't much use.

Comment by benmillwood on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-17T18:30:54.738Z · score: 4 (6 votes) · EA · GW

I don't think that "going silent" or failing to report donations is an indication that people are not meeting the pledge. Nowadays I don't pay GWWC as an organisation much / any attention, but I'm still donating 10% a year (and then some).

To be honest I haven't read closely enough to understand where you do and don't account for "quiet pledge-keepers" in your analysis, but I at least think stuff like this is just plain wrong:

total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)

Comment by benmillwood on Amazon Smile · 2019-06-15T22:19:10.033Z · score: 1 (1 votes) · EA · GW

I couldn't find The Clear Fund when I looked just now. Would be interested in someone confirming that it's still there.

Comment by benmillwood on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-06-07T14:30:38.120Z · score: 1 (1 votes) · EA · GW

If you want to look up the maths elsewhere, it may be helpful to know that a constant, independent chance of death (or survival) per year is modelled by a geometric distribution (the special case of the negative binomial with a single "failure").
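As an illustration (my own sketch, not part of the original comment): under a constant annual death chance `q`, the probability of surviving at least `k` years is `(1 - q)**k`, and the expected lifespan is `1/q`.

```python
def survival_prob(q, years):
    """Probability of surviving at least `years` years under a constant
    annual death chance q (geometric model)."""
    return (1 - q) ** years

def expected_lifespan(q):
    """Mean of the geometric distribution: expected years until death."""
    return 1 / q

# e.g. a 1% annual death chance implies a ~100-year expected lifespan
assert abs(expected_lifespan(0.01) - 100) < 1e-9
assert survival_prob(0.5, 1) == 0.5
```

This is the discrete analogue of the exponential-survival model often used in the longevity escape velocity literature.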

Comment by benmillwood on Evidence Action is shutting down No Lean Season · 2019-06-07T14:23:30.241Z · score: 6 (5 votes) · EA · GW

Sounds like the fact there was already substantial doubt over whether the program worked was a key part of the decision to shut it down. That suggests that if the same kind of scandal had affected a current top charity, they would have worked harder to continue the project.

Comment by benmillwood on There's Lots More To Do · 2019-06-06T03:20:16.824Z · score: 4 (3 votes) · EA · GW

I actually think even justifying yourself only to yourself, being accountable only to yourself, is probably still too low a standard. No-one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), it's necessary to meet the standards of those individuals or the community as a whole about what's acceptable.

Comment by benmillwood on There's Lots More To Do · 2019-06-06T03:06:44.135Z · score: 5 (3 votes) · EA · GW

When you say "you don't need to justify your actions to EAs", then I have sympathy with that, because EAs aren't special, we're no particular authority and don't have internal consensus anyway. But you seem to be also arguing "you don't need to justify your actions to yourself / at all". I'm not confident that's what you're saying, but if it is I think you're setting too low a standard. If people aren't required to live in accordance with even their own values, what's the point in having values?

Comment by benmillwood on Could the crowdfunder to prosecute Boris Johnson be a high impact donation opportunity? · 2019-06-06T01:48:44.727Z · score: 0 (4 votes) · EA · GW

It's odd to call Boris an opponent of the government. He's a sitting MP - he's part of the state. To me this seems to be more about the courts being able to hold Parliament accountable.

Comment by benmillwood on Stories and altruism · 2019-05-20T09:17:49.597Z · score: 2 (2 votes) · EA · GW

I like the idea here a great deal, but I expect there's going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal vs. what's idiosyncratic.

Comment by benmillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T15:19:57.653Z · score: 6 (4 votes) · EA · GW

There's an unanswered question here of why Good Ventures makes grants that OpenPhil doesn't recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don't find it that surprising that they do so. People like to do more than one thing?

Comment by benmillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T05:05:05.110Z · score: 12 (4 votes) · EA · GW

Have you attempted to contact GV or OpenPhil directly about this?

Comment by benmillwood on Political culture at the edges of Effective Altruism · 2019-04-14T12:17:17.199Z · score: 12 (10 votes) · EA · GW

I think this is only true with a very narrow conception of what the "EA things that we are doing" are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.

That's all I believe constitutes "EA things" in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally "the experts" on. If the global research community on poverty interventions came to the consensus "actually we think bednets are bad now" then EA orgs would need to listen to that and change course.

"Politicized" questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.

Comment by benmillwood on Does EA need an underpinning philosophy? Could sentientism be that philosophy? · 2019-03-30T18:24:19.746Z · score: 18 (6 votes) · EA · GW

Downvotes aren't primarily to help the person being downvoted. They help other readers, which after all there are many more of than writers. Creating an expectation that they should all be explained increases the burden on the downvoter significantly, making them less likely to be used and therefore less useful.

Comment by benmillwood on Apology · 2019-03-25T15:05:11.961Z · score: 10 (12 votes) · EA · GW

Just to remark on the "criminal law" point – I think it's appropriate to apply a different, and laxer, standard here than we do for criminal law, because:

  • the penalties are not criminal penalties, and in particular do not deprive anyone of anything they have a right to, like their property or freedom – CEA are free to exclude anyone from EAG who in their best judgement would make it a worse event to attend,
  • we don't have access to the kinds of evidence or evidence-gathering resources that criminal courts do, so realistically it's pretty likely that in most cases of misconduct or abuse we won't have criminal-standard evidence that it happened, and we'll have to either act despite that or never act at all. Some would defend never acting at all, I'm sure (or acting in only the most clear-cut cases), but I don't think it's the mainstream view.
Comment by benmillwood on Apology · 2019-03-25T14:04:17.369Z · score: 29 (13 votes) · EA · GW

And this is a clear case in which I would have first-person authority on whether I did anything wrong.

I think this is the main point of disagreement here. Generally when you make sexual or romantic advances on someone and those advances make them uncomfortable, you're often not aware of the effect that you're having (and they may not feel safe telling you), so you're not the authority on whether you did something wrong.

Which is not to say that you're guilty because they accused you! It's possible to behave perfectly reasonably and for people around you to get upset, even to blame you for it. In that scenario you would not be guilty of doing anything wrong necessarily. But more often it looks like this:

  • someone does something inappropriate without realizing it,
  • impartial observers agree, having heard the facts, that it was inappropriate,
  • it seems clearly-enough inappropriate that the offender had a moral duty to identify it as such in advance and not do it.

Then they need to apologize and do what's necessary to prevent it happening again, including withdrawing from the community if necessary.

Comment by benmillwood on Apology · 2019-03-25T13:53:22.214Z · score: 25 (14 votes) · EA · GW

If I heard that a lot of people were feeling uncomfortable following interactions with me, I think it's likely that I would apologize and back off before understanding why they felt that way, and perhaps without even understanding what behaviour was at issue.

I'd trust someone else's judgement comparably with or more than my own, particularly when there were multiple other someones, because I'm aware of many cases where people were oblivious to the harm their own behaviour was causing (and indeed, I don't always know how other people feel about the way I interact with them and put a lot of effort into giving them opportunities to tell me). Obviously I'd apply some common sense to accusations that e.g. I knew to be factually wrong.

In the abstract, which of these do you think happens more often?

  • Someone makes people uncomfortable without being aware that they are doing so. Other people inform them.
  • Someone doesn't make anyone feel uncomfortable (above the base rate of awkward social interactions). People erroneously tell them that they are doing so.

Now, the second is probably somewhat more likely than I've made it sound, but the first just seems way more ordinary to me. So my outside view is that the most likely reason for people to tell you that you're making others uncomfortable is that you are actually doing that. You're entitled to play this off against what you know of the inside view, but I think it would be pretty weird to just dismiss it entirely.

Comment by benmillwood on Will companies meet their animal welfare commitments? · 2019-02-04T19:44:32.053Z · score: 5 (4 votes) · EA · GW

This is a relatively minor issue, perhaps, but the graph you show from the EggTrack report seems to have its "n=" numbers wrong. Looking at the report itself, the graph has the same values as (and immediately follows) another one which only includes the reported-against commitments, so I'm betting they just copied the numbers from that one accidentally.

(I haven't yet tried to contact CIWF about this and probably won't get around to it, but I'll update this post if I do)

Comment by benmillwood on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T09:58:59.664Z · score: 3 (3 votes) · EA · GW

What was the largest amount that any individual got matched on GT? Given that this year there were only 15 seconds of matching funds, can one person get through enough forms in time to give a lot?

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-11T19:22:19.876Z · score: 1 (1 votes) · EA · GW

I think 2-10x is the wrong average multiplier across lottery winners (though, in fairness, you didn't explicitly claim it was an average). In order to make good grants to new small high-risk things, you need to hear about them, and I suspect most lottery participants don't have the necessary networks and don't have special access to significant private information – after all, private information doesn't spread well.

Concretely I'm suggesting that the median lottery participant doesn't get any benefit at all from the ability to use private information.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-10T15:37:08.375Z · score: 5 (4 votes) · EA · GW

We can imagine three categories of grants:

A. Publically justifiable

B. Privately justifiable

C. Unjustifiable :)

I agree reports like Adam's will move people from B to A, but I think they will also move people from C to A, by forcing them to examine their choices more carefully and hold themselves to a higher standard.

This model prompts two possible sources of disagreement: you could disagree about the relative proportions of people moving from B vs. from C, or you could disagree about how bad it is to have a mix of B and C vs. more A.

To address the second question: if you think that B is 2-10x more valuable than A, then even if donations in category C are worthless (leaving aside the chance they are net negative), an equal mix of B and C is at least as good as just A, and towards the 10x end of that spectrum you can justify a mix of up to 90% C and 10% B.
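The arithmetic can be sketched like this (my own illustrative code; the function and numbers are just for exposition):

```python
def mix_value(share_b, multiplier_b):
    """Average value of a portfolio of B and C grants, relative to an
    all-A baseline of 1, where C grants are assumed worthless."""
    return share_b * multiplier_b

# If B is 2x as valuable as A, a 50/50 mix of B and C matches all-A:
assert mix_value(0.5, 2) == 1.0
# At the 10x end, just 10% B (and 90% C) still matches all-A:
assert mix_value(0.1, 10) == 1.0
```

Anything above those break-even shares makes the mix strictly better than all-A under these assumptions.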

But let's return to that parenthetical – could more C donations be net negative, even aside from opportunity cost? I think this risk is underexamined. I suspect most projects won't directly do harm, but well-funded blunders are more visible and reputationally damaging.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-07T16:52:57.255Z · score: 3 (3 votes) · EA · GW

Or because their best granting opportunity can't be justified with publically-available knowledge, or has other weird optics / reputational concerns.

Comment by benmillwood on How High Contraceptive Use Can Help Animals? · 2019-01-07T15:10:38.602Z · score: 12 (5 votes) · EA · GW

So, I'm instinctively creeped out by any attempt to reduce the number of humans, and my initial reaction to this idea was basically "yikes". Having taken time to reflect and read the report, I've come around a little, in that improving access to contraception seems hard to oppose even if you're broadly in favour of more humans rather than fewer (though note that it's often opposed by some religious groups).

That said, I still think there's greater potential for extreme negative reactions to this idea than you appreciate. In particular, white wealthy people targeting low-income countries with the explicit aim of reducing their population has a chance of tripping people's "eugenics sirens" and drawing comparisons with the long and racist history of compulsory sterilizations. I'm not saying I would agree with those comparisons – it seems very clear that your motivations are different, and the ethnicity of your target group is coincidental / irrelevant – but I don't think that everyone would believe in your good faith as much as I do; some compulsory or semi-coercive sterilization was done covertly and in the guise of helping the recipients, so some may feel obliged to be especially wary of anything superficially similar.

You briefly addressed reputational risk in this passage:

The intervention is middling in terms of reputational and field building effects, because there is no significant risk of turning people off animal advocacy or vegetarianism if the organization wouldn’t be promoted as a directly animal-focused charity.

Bluntly, this comes across as dishonest. Aren't you worried that people might discover your true motivations aren't the same as your apparent ones, and distrust animal advocates in future?

Comment by benmillwood on Public policy push for effective altruism · 2019-01-07T14:08:28.650Z · score: 1 (1 votes) · EA · GW

In the UK, there is the All-Party Parliamentary Group for Future Generations, although I'm not sure how much they actually do.

Comment by benmillwood on Is The Hunger Site worth it? · 2018-11-30T14:54:07.741Z · score: 1 (1 votes) · EA · GW

Also, if you do this, please come back and tell us what you discovered :)

Comment by benmillwood on Why EAs in particular are good people to start charities · 2018-06-16T13:21:37.264Z · score: 0 (2 votes) · EA · GW

On what grounds do you expect EAs to have better personal ability?

Something I've been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equivalently competent people, and thus at a given level of ambition, EAs would be systematically less competent. I don't have a huge amount of evidence for this being borne out in practice, but it's one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.

Comment by benmillwood on Three levels of cause prioritisation · 2018-06-03T07:47:44.368Z · score: 1 (1 votes) · EA · GW

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact the most by choosing the best implementation within their cause area, or switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean to the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

Comment by benmillwood on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-25T14:12:28.086Z · score: 1 (3 votes) · EA · GW

I don't think you should update too much on people being unkind on the internet :)

Comment by benmillwood on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-13T09:33:26.840Z · score: 8 (8 votes) · EA · GW

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

Comment by benmillwood on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-05-13T09:06:49.046Z · score: 1 (1 votes) · EA · GW

I think it's sort of bizarre to suggest that out of 25,000 vegetarians, one is responsible for the shed being closed, and the others did nothing at all. Why privilege the "last" decision to not purchase a chicken? It makes more sense to me that you'd allocate the "credit" equally to everyone who chose not to eat meat.

The first 24,999 needed to not buy a chicken in order for the last one to be in a position for their choice to make a difference.
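To make the credit-allocation point concrete, here's a minimal threshold model (my own simplifying assumptions — in particular, that each abstainer is equally likely to be the pivotal one — not numbers from the book beyond the 25,000 figure):

```python
n_vegetarians = 25_000
shed_size = 25_000  # birds averted if the shed closes

# Equal-credit view: everyone who abstained shares the outcome equally.
credit_equal_split = shed_size / n_vegetarians  # 1 bird each

# Expected-impact view: each abstainer is pivotal with probability
# 1/25,000, and the pivotal one averts all 25,000 birds.
expected_marginal_impact = (1 / n_vegetarians) * shed_size

# The two views agree on the per-person number:
assert credit_equal_split == 1.0
assert abs(expected_marginal_impact - 1.0) < 1e-12
```

So privileging the "last" purchase changes who gets the credit, but not the expected impact of any one person's choice.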

Comment by benmillwood on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2018-05-13T08:36:40.235Z · score: 4 (3 votes) · EA · GW

It's not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment by benmillwood on Syllabus for Course on Effective Altruism · 2018-05-09T09:06:01.020Z · score: 0 (0 votes) · EA · GW

I think on balance there's a strong chance you're right, but there IS a lose-lose outcome, where the consumer pressure drives the companies to fire all their sweatshop employees and move to a place where they can get people from a different, less needy origin (that maybe has different labour laws, or in some other ways pacifies many of the consumer activists).

Comment by benmillwood on Empirical data on value drift · 2018-05-09T08:27:13.491Z · score: 2 (4 votes) · EA · GW

First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.

Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.

Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.

More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?

(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)

Comment by benmillwood on Empirical data on value drift · 2018-05-09T07:46:08.867Z · score: 0 (0 votes) · EA · GW

But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA

If you gave 60% of your income would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea)

Comment by benmillwood on How fragile was history? · 2018-02-09T17:48:26.106Z · score: 1 (1 votes) · EA · GW

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)

Comment by benmillwood on The almighty Hive will · 2018-02-09T16:40:45.660Z · score: 6 (6 votes) · EA · GW

I would be interested in funding this.

Comment by benmillwood on #GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving · 2017-12-16T16:38:03.988Z · score: 3 (3 votes) · EA · GW

For the benefit of future readers: Giving Tuesday happened, and the matching funds were exhausted within about 90 seconds. Of ~$370k in total donations, we matched ~$46k, or about 12%, which was lower than hoped. William wrote up a lessons-learned document as a Google doc.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-26T09:46:31.964Z · score: 2 (2 votes) · EA · GW

Can't help but feel this thoughtful and comprehensive critique of negative utilitarianism is wasted on being buried deep in the comments of a basically unrelated post :)

Promote to its own article?

Comment by benmillwood on Against neglectedness · 2017-11-25T08:07:49.880Z · score: 0 (0 votes) · EA · GW

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1): this is a legitimate illustration that the framework only applies where it applies, but in practice it doesn't seem to be an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resources, and that seems to be true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is absolutely contingent on what we in-practice discover to be the important considerations when we try to evaluate a cause from scratch.

So while there's no fundamental reason why neglectedness, particularly as measured in the form of the ratio of percentage per resource, needs to be a part of your analysis, it just turns out to be the case that you can often find e.g. two different health interventions that are otherwise very comparable in how much good they do, but with very different ability to consume extra resources, and that drives a big difference in their attractiveness as causes to work on.

If ever you did want to evaluate a cause where the existing resources were zero, you could just as easily swap the problematic numerator/denominator pair for another, saying the same thing in absolute rather than relative terms, and the rest of the model would more or less stand up. Whether that should be done in general for evaluating other causes as well is a judgement call about how these numbers vary in practice and what situations are most easily compared and contrasted.
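The factorization discussed above can be made concrete with a toy calculation. Everything below is invented purely for illustration (the figures, the two hypothetical interventions, and the function name are mine, not the post's): the point is only that the conditional factors multiply back to "good done per extra dollar", and that the neglectedness factor is where a difference in existing resources shows up.

```python
# Toy sketch of the scale x solvability x neglectedness factorization.
# All numbers are made up for illustration.

def marginal_value(scale, solvability, neglectedness):
    """
    scale:         good done per % of the problem solved
    solvability:   % of the problem solved per % increase in resources
    neglectedness: % increase in resources per extra dollar
    The intermediate terms cancel, leaving good done per extra dollar.
    """
    return scale * solvability * neglectedness

# Two hypothetical health interventions, identical except for how
# crowded they already are:
existing_funding_a = 1_000_000    # dollars already committed
existing_funding_b = 100_000_000

# An extra dollar is a larger *percentage* increase for the less-funded
# cause.  (Note this is where the divide-by-zero in point (1) would
# appear: with zero existing resources the ratio is undefined.)
neglectedness_a = 100 / existing_funding_a
neglectedness_b = 100 / existing_funding_b

scale = 10.0        # same for both interventions
solvability = 0.5   # same for both interventions

value_a = marginal_value(scale, solvability, neglectedness_a)
value_b = marginal_value(scale, solvability, neglectedness_b)

print(value_a / value_b)  # prints 100.0: the neglected cause wins at the margin
```

Because the other two factors are held equal, the 100x difference in marginal value comes entirely from the neglectedness ratio, which is the empirical pattern the comment describes: otherwise-comparable interventions often differ mainly in how crowded they already are.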

Comment by benmillwood on Against neglectedness · 2017-11-25T07:37:50.001Z · score: 0 (0 votes) · EA · GW

To clarify, this only applies if everyone else is picking interventions at random, but you're still managing to pick the best remaining one (or at least better than chance).

It also seems to me like it applies across causes as well as within causes.

Comment by benmillwood on Against neglectedness · 2017-11-25T07:31:36.233Z · score: 0 (0 votes) · EA · GW

"The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on"

This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I'd still disagree is because I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they're more rational in one context than the other. A key part of effective altruism's value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.

"in which case more people working on a field would indicate that it was more worth working on."

I think if you really believe people are rational in the way described, more people working on a field doesn't necessarily give you a clue as to whether more people should be working on it or not, because you expect the number of people working on it to roughly track the number of people who ought to work on it -- you think the people who are not working on it are also rational, so there must be circumstances under which that's correct, too.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-18T15:54:23.863Z · score: 5 (5 votes) · EA · GW

I believe him. Moreover it's not that hard to find people in history who have knowingly and deliberately endured hideous conditions because they thought it was necessary for some principle they held, so I don't even think he's that rare.

Comment by benmillwood on Effective Altruism London - Strategic Plan & Funding Proposal 2018 · 2017-11-18T09:20:23.577Z · score: 0 (0 votes) · EA · GW

Is "Part 3. Specific lessons on running a large local community" still on the way?

Comment by benmillwood on Can we apply start-up investing principles to non-profits? · 2017-07-23T16:02:19.677Z · score: 2 (2 votes) · EA · GW

Regarding "For-profit investing typically does not have massive negative returns, but non-profit investing can": I understand this to be true only in the sense that for-profit investing is concerned solely with financial returns, whereas non-profit investing is concerned with returns of all kinds.

For-profit investing can still have negative externalities, of course, it's just that the shareholders aren't really obliged to care about them :)