Comment by benmillwood on Will companies meet their animal welfare commitments? · 2019-02-04T19:44:32.053Z · score: 5 (4 votes) · EA · GW

This is a relatively minor issue, perhaps, but the graph you show from the EggTrack report seems to have its "n=" numbers wrong. Looking at the report itself, the graph has the same values as (and immediately follows) another one which only includes the reported-against commitments, so I'm betting they just copied the numbers from that one accidentally.

(I haven't yet tried to contact CIWF about this and probably won't get around to it, but I'll update this post if I do)

Comment by benmillwood on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T09:58:59.664Z · score: 3 (3 votes) · EA · GW

What was the largest amount that any individual got matched on GT? Given that this year there were only 15 seconds of matching funds, can one person get through enough forms in time to give a lot?

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-11T19:22:19.876Z · score: 1 (1 votes) · EA · GW

I think 2-10x is the wrong average multiplier across lottery winners (though, in fairness, you didn't explicitly claim it was an average). In order to make good grants to new small high-risk things, you need to hear about them, and I suspect most lottery participants don't have the necessary networks and don't have special access to significant private information – after all, private information doesn't spread well.

Concretely I'm suggesting that the median lottery participant doesn't get any benefit at all from the ability to use private information.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-10T15:37:08.375Z · score: 5 (4 votes) · EA · GW

We can imagine three categories of grants:

A. Publicly justifiable

B. Privately justifiable

C. Unjustifiable :)

I agree reports like Adam's will move people from B to A, but I think they will also move people from C to A, by forcing them to examine their choices more carefully and hold themselves to a higher standard.

This model prompts two possible sources of disagreement: you could disagree about the relative proportions of people moving from B vs. from C, or you could disagree about how bad it is to have a mix of B and C vs. more A.

To address the second question, if you think that B is 2-10x more valuable than A, then even if donations in category C are worthless (leaving aside the chance they are net negative), an equal mix of B and C is better than just A, and towards the 10x end of that spectrum, you can justify up to 90% C and 10% B.
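To make that arithmetic concrete, here's a toy calculation (the relative values and proportions are purely illustrative assumptions, not claims about real grants):

```python
# Normalize a publicly-justifiable grant (A) to value 1, and suppose a
# privately-justifiable grant (B) is worth 10x that, while an
# unjustifiable grant (C) is worth 0 (ignoring any net-negative risk).
value = {"A": 1.0, "B": 10.0, "C": 0.0}

def mix_value(shares):
    """Expected value per grant of a population with the given shares."""
    return sum(value[k] * p for k, p in shares.items())

all_a = mix_value({"A": 1.0})                 # everyone publicly justified
equal_bc = mix_value({"B": 0.5, "C": 0.5})    # equal mix of B and C
worst_case = mix_value({"B": 0.1, "C": 0.9})  # 10% B, 90% C

assert equal_bc > all_a     # 5.0 > 1.0: the equal mix beats all-A
assert worst_case >= all_a  # 1.0 >= 1.0: break-even even at 90% C
```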

But let's return to that parenthetical – could more C donations be net negative, even aside from opportunity cost? I think this risk is underexamined. I suspect most projects won't directly do harm, but well-funded blunders are more visible and reputationally damaging.

Comment by benmillwood on Should donor lottery winners write reports? · 2019-01-07T16:52:57.255Z · score: 3 (3 votes) · EA · GW

Or because their best granting opportunity can't be justified with publicly available knowledge, or has other weird optics / reputational concerns.

Comment by benmillwood on How High Contraceptive Use Can Help Animals? · 2019-01-07T15:10:38.602Z · score: 12 (5 votes) · EA · GW

So, I'm instinctively creeped out by any attempt to reduce the number of humans, and my initial reaction to this idea was basically "yikes". Having taken time to reflect and read the report, I've come around a little, in that improving access to contraception seems hard to oppose even if you're broadly in favour of more humans rather than fewer (though note that it's often opposed by religious groups).

That said, I still think there's greater potential for extreme negative reactions to this idea than you appreciate. In particular, white wealthy people targeting low-income countries with the explicit aim of reducing their population has a chance of tripping people's "eugenics sirens" and drawing comparisons with the long and racist history of compulsory sterilizations. I'm not saying I would agree with those comparisons – it seems very clear that your motivations are different, and the ethnicity of your target group is coincidental / irrelevant – but I don't think that everyone would believe in your good faith as much as I do; some compulsory or semi-coercive sterilization was done covertly and in the guise of helping the recipients, so some may feel obliged to be especially wary of anything superficially similar.

You briefly addressed reputational risk in this passage:

The intervention is middling in terms of reputational and field building
effects, because there is no significant risk of turning people off animal
advocacy or vegetarianism if the organization wouldn’t be promoted as a
directly animal-focused charity.

Bluntly, this comes across as dishonest. Aren't you worried that people might discover your true motivations aren't the same as your apparent ones, and distrust animal advocates in future?

Comment by benmillwood on Public policy push for effective altruism · 2019-01-07T14:08:28.650Z · score: 1 (1 votes) · EA · GW

In the UK, there is the All-Party Parliamentary Group for Future Generations, although I'm not sure how much they actually do.

Comment by benmillwood on Is The Hunger Site worth it? · 2018-11-30T14:54:07.741Z · score: 1 (1 votes) · EA · GW

Also, if you do this, please come back and tell us what you discovered :)

Comment by benmillwood on Why EAs in particular are good people to start charities · 2018-06-16T13:21:37.264Z · score: 0 (2 votes) · EA · GW

On what grounds do you expect EAs to have better personal ability?

Something I've been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equivalently competent people, and thus at a given level of ambition, EAs would be systematically less competent. I don't have a huge amount of evidence for this being borne out in practice, but it's one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.

Comment by benmillwood on Three levels of cause prioritisation · 2018-06-03T07:47:44.368Z · score: 1 (1 votes) · EA · GW

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact the most by choosing the best implementation within their cause area, or switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean to the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

Comment by benmillwood on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-25T14:12:28.086Z · score: 1 (3 votes) · EA · GW

I don't think you should update too much on people being unkind on the internet :)

Comment by benmillwood on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-13T09:33:26.840Z · score: 8 (8 votes) · EA · GW

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

Comment by benmillwood on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-05-13T09:06:49.046Z · score: 1 (1 votes) · EA · GW

I think it's sort of bizarre to suggest that out of 25,000 vegetarians, one is responsible for the shed being closed, and the others did nothing at all. Why privilege the "last" decision to not purchase a chicken? It makes more sense to me that you'd allocate the "credit" equally to everyone who chose not to eat meat.

The first 24,999 needed to not buy a chicken in order for the last one to be in a position for their choice to make a difference.

Comment by benmillwood on Concrete Ways to Reduce Risks of Value Drift · 2018-05-13T08:36:40.235Z · score: 2 (2 votes) · EA · GW

It's not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment by benmillwood on Syllabus for Course on Effective Altruism · 2018-05-09T09:06:01.020Z · score: 0 (0 votes) · EA · GW

I think on balance there's a strong chance you're right, but there IS a lose-lose outcome, where the consumer pressure drives the companies to fire all their sweatshop employees and move to a place where they can get people from a different, less needy origin (that maybe has different labour laws, or in some other ways pacifies many of the consumer activists).

Comment by benmillwood on Empirical data on value drift · 2018-05-09T08:27:13.491Z · score: 1 (3 votes) · EA · GW

First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.

Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.

Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.

More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?

(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)

Comment by benmillwood on Empirical data on value drift · 2018-05-09T07:46:08.867Z · score: 0 (0 votes) · EA · GW

But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA

If you gave 60% of your income would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea)

Comment by benmillwood on How fragile was history? · 2018-02-09T17:48:26.106Z · score: 0 (0 votes) · EA · GW

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)

Comment by benmillwood on The almighty Hive will · 2018-02-09T16:40:45.660Z · score: 6 (6 votes) · EA · GW

I would be interested in funding this.

Comment by benmillwood on #GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving · 2017-12-16T16:38:03.988Z · score: 3 (3 votes) · EA · GW

For the benefit of future readers: Giving Tuesday happened, and the matching funds were exhausted within about 90 seconds. Of ~$370k in total donations, we matched ~$46k, or about 13%, which was lower than hoped. William wrote up a lessons-learned document as a Google doc.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-26T09:46:31.964Z · score: 2 (2 votes) · EA · GW

Can't help but feel this thoughtful and comprehensive critique of negative utilitarianism is wasted on being buried deep in the comments of a basically unrelated post :)

Promote to its own article?

Comment by benmillwood on Against neglectedness · 2017-11-25T08:07:49.880Z · score: 0 (0 votes) · EA · GW

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1), this is a legitimate illustration that the framework only applies where it applies, but in practice it seems like it isn't an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resource, and it seems like that's true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is contingent on what we discover in practice to be the important considerations when we try to evaluate a cause from scratch.

So while there's no fundamental reason why neglectedness, particularly as measured in the form of the ratio of percentage per resource, needs to be a part of your analysis, it just turns out to be the case that you can often find e.g. two different health interventions that are otherwise very comparable in how much good they do, but with very different ability to consume extra resources, and that drives a big difference in their attractiveness as causes to work on.

If ever you did want to evaluate a cause where the existing resources were zero, you could just as easily swap the bad cancellative denominator/numerator pair with another one, say the same thing in absolute instead of relative terms, and the rest of the model would more or less stand up. Whether that should be done in general for evaluating other causes as well is a judgement call about how these numbers vary in practice and what situations are most easily compared and contrasted.
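For concreteness, here's a toy sketch (all numbers made up) of the factorization as I understand it, including how the zero-existing-resources case breaks the relative version:

```python
# Sketch of the scale x solvability x neglectedness factorization.
# All arguments are hypothetical; the point is the structure, not the values.
def cause_value(good_per_pct_solved, pct_solved_per_pct_more_resources,
                existing_resources):
    scale = good_per_pct_solved                      # good done / % solved
    solvability = pct_solved_per_pct_more_resources  # % solved / % more resources
    neglectedness = 1.0 / existing_resources         # % more resources / extra $
    # The "% more resources" terms cancel, leaving good done per extra dollar:
    return scale * solvability * neglectedness

# Two hypothetical health interventions, alike except in crowdedness:
# the less-crowded one looks far more attractive per marginal dollar.
assert cause_value(100.0, 0.5, 1e6) > cause_value(100.0, 0.5, 1e8)

# The divide-by-zero objection: with zero existing resources the relative
# measure is undefined, so you'd swap in an absolute measure instead.
# cause_value(100.0, 0.5, 0.0)  -> ZeroDivisionError
```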

Comment by benmillwood on Against neglectedness · 2017-11-25T07:37:50.001Z · score: 0 (0 votes) · EA · GW

To clarify, this only applies if everyone else is picking interventions at random, but you're still managing to pick the best remaining one (or at least better than chance).

It also seems to me like it applies across causes as well as within causes.

Comment by benmillwood on Against neglectedness · 2017-11-25T07:31:36.233Z · score: 0 (0 votes) · EA · GW

The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on

This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I'd still disagree is because I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they're more rational in one context than the other. A key part of effective altruism's value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.

in which case more people working on a field would indicate that it was more worth working on.

I think if you really believe people are rational in the way described, more people working on a field doesn't necessarily give you a clue as to whether more people should be working on it or not, because you expect the number of people working on it to roughly track the number of people who ought to work on it -- you think the people who are not working on it are also rational, so there must be circumstances under which that's correct, too.

Comment by benmillwood on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-18T15:54:23.863Z · score: 5 (5 votes) · EA · GW

I believe him. Moreover it's not that hard to find people in history who have knowingly and deliberately endured hideous conditions because they thought it was necessary for some principle they held, so I don't even think he's that rare.

Comment by benmillwood on Effective Altruism London - Strategic Plan & Funding Proposal 2018 · 2017-11-18T09:20:23.577Z · score: 0 (0 votes) · EA · GW

Is "Part 3. Specific lessons on running a large local community" still on the way?

Comment by benmillwood on Can we apply start-up investing principles to non-profits? · 2017-07-23T16:02:19.677Z · score: 0 (0 votes) · EA · GW

Regarding "For-profit investing typically does not have massive negative returns, but non-profit investing can": I understand this to be true only in the sense that for-profit investing is concerned solely with financial returns, whereas non-profit investing is concerned with returns of all kinds.

For-profit investing can still have negative externalities, of course, it's just that the shareholders aren't really obliged to care about them :)

Comment by benmillwood on The Philanthropist’s Paradox · 2017-07-23T15:43:15.923Z · score: 0 (0 votes) · EA · GW

It's worth pointing out that if time just advances forever, so that your current time is just "T seconds after the starting point", then it is simultaneously true that:

  • time is infinite
  • every instant has a finite past (and an infinite future)

The second point in particular means that even though time is infinite, you still can't wait an infinite amount of time and then do something. I think that's what MichaelStJules was getting at.

Your mixed strategy has its own paradox, though – suppose you decide that one strategy is better than another if it "eventually" does more total good – that is, there's a point in time after which "total amount of good done so far" exceeds that of the other strategy for the rest of eternity. You have to do something like this because it doesn't usually make sense to ask which strategy achieved the most good "after infinite time" because infinite time never elapses.

Anyway, suppose you have that metric of "eventual winner". Then your strategy can always be improved by reducing the fraction you donate, because the exponential growth of the investment will eventually outpace the linear reduction in donations. But as soon as you reduce the fraction to zero, you no longer get any gains at all. So you have the odd situation where no fraction is optimal – for any strategy, there is always a better one.
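A quick toy simulation (growth rate, donation fraction, and horizon are all made-up assumptions) illustrates the "no optimal fraction" point:

```python
# Donate a fixed fraction f of an invested fund each year, with growth
# rate g, and track the cumulative amount donated.
def cumulative_donated(f, g=0.05, years=500, fund=1.0):
    total = 0.0
    for _ in range(years):
        fund *= 1.0 + g       # investment returns accrue
        total += f * fund     # donate a fraction of the fund
        fund *= 1.0 - f       # the rest stays invested
    return total

# Over a long enough horizon, a smaller fraction eventually wins...
assert cumulative_donated(0.01) > cumulative_donated(0.02)
# ...but the limiting case f = 0 donates nothing at all.
assert cumulative_donated(0.0) == 0.0
```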

In a context of infinite possible outcomes and infinite possible choice pathways, this actually isn't that surprising. You might as well be surprised that there's no largest number. And perhaps that applies just as well to the original philanthropist's paradox – if you permit yourself an infinite time horizon to invest over, it's just not surprising that there's no optimal moment to "cash in".

As soon as you start actually encoding your beliefs that the time horizon is in fact not infinite, I'm willing to bet you start getting some concrete moments to start paying your fund out, and some reasonable justifications for why those moments were better than any other. To the extent that the conclusion "you should wait until near the end of civilization to donate" is still a counterintuitive one, I claim it's just because of our (correct) intuition that investing is not always better than donating right now, even in the long run. That's the argument that Ben Todd and Sanjay made.

Comment by benmillwood on Donating To High-Risk High-Reward Charities · 2017-02-25T13:46:43.737Z · score: 0 (0 votes) · EA · GW

One thing which makes me more confident that object-level risk[1] is important in for-profit investing, but expect it to be less central in charitable work, is that I'm more confident that for-profit risk is priced correctly, or at least not way out of line with what it should be. It seems more plausible to me that there are low-risk high-return charitable opportunities, because people are generally worse at identifying and saturating those opportunities. (Although per GiveWell's post on Broad market efficiency I now believe this effect is much less striking than I first guessed).

[1] I'm not sure this is a correct application of "object-level", but I mean actual risk that a given investment will succeed or fail, rather than the "meta" risk that we'll fail to analyse its value correctly. I'm not super confident the distinction is meaningful.

Comment by benmillwood on [CEA Update] Updates from January 2017 · 2017-02-25T13:10:26.669Z · score: 1 (1 votes) · EA · GW

That tiny little Alwaleed Philanthropies footnote tacked onto the end sounds like a big deal to me. Not only is it a pretty significant amount itself, it seems like it might also start conversations about evidence-based philanthropy among cultures and communities that EA hasn't traditionally had much of a foothold in.

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:31:02.089Z · score: 0 (0 votes) · EA · GW

It's hard to recommend this over playing the lottery upfront, so that you actually know whether your research will be directly used or not.

If you think it's important that the research is done even if the money is not used, would you recommend just doing the research project even with merely a hypothetical £10k that isn't actually donated at the end?

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:29:30.424Z · score: 0 (0 votes) · EA · GW

Another key reason against looking for "lego bricks" that I don't think you addressed is that marginal thinking is much more generalizable. You're publishing all your research work, and if I come along afterwards with £1k or £100k, a conclusion you made based on marginal thinking is much more likely to be useful to me than one tailored to your exact donation size.

My guess is that the value of your research in how it informs and influences others may even exceed the value of the £10k directly: if that's modestly likely to be true, it seems a strong recommendation to avoid "exact fit" opportunities.

I guess strictly speaking this kind of motivation falls out of scope for your project, which aims simply to find the best way to spend the £10k. But it's certainly a reason I'm glad you made this choice :)

Comment by benmillwood on Should the Oxford Prioritisation Project look for lego bricks? · 2017-02-25T12:23:32.246Z · score: 1 (1 votes) · EA · GW

As a piece of presentational feedback, I found it a little frustrating to have a title like this one, and yet not have the term "lego brick" specifically and directly explained until something like the third paragraph :)

Comment by benmillwood on Introducing the Oxford Prioritisation Project blog · 2017-02-25T11:54:38.619Z · score: 0 (0 votes) · EA · GW

The name "Oxford Prioritisation Project" has an unhelpful acronym collision :)

Do you have a standard abbreviated form that avoids it? Maybe OxPri, following the website address?

edit: I've found this issue addressed in other comments, and the official answer is apparently "oxprio".

Comment by benmillwood on Anonymous EA comments · 2017-02-20T16:42:47.933Z · score: 2 (2 votes) · EA · GW

"Keep doing the good work you know how to do, if you don't see any better options" still sounds implicitly dismissive to me. It sounds like you believe there are better options, and only a lack of knowledge or vision is keeping this person from identifying them.

Breaking up fistfights and intervening in heroin overdoses to me sound like things that have small-to-moderate chances of preventing catastrophic, permanent harm to the people involved. I don't know how often opportunities like that come up, but is it so hard to imagine they outstrip a GWWC pledger on an average or even substantially above-average salary?

Comment by benmillwood on Charity Science Effective Legacies · 2017-01-18T15:50:07.130Z · score: 0 (0 votes) · EA · GW

Minor point: this is an odd juxtaposition:

Studies suggest that the relationship between income and happiness is one of diminishing returns. So unless your loved ones are struggling to meet basic needs, an inheritance will probably do little to boost their overall well-being.

... and then ...

studies show it could boost your happiness to a degree similar to having your salary doubled

Feels like inconsistent messaging wrt how good it is to earn more.

Comment by benmillwood on CEA Staff Donation Decisions 2016 · 2016-12-24T11:08:40.051Z · score: 0 (0 votes) · EA · GW

This from Larissa surprised me a little:

I chose AMF and SCI because of the evidence behind the interventions. It is important to me to know that at least some of my donation is having a concrete impact.

AMF, sure, but isn't SCI famous for having a great deal of uncertainty in how life-changing it really is?

Comment by benmillwood on A new reference site: Effective Altruism Concepts · 2016-12-10T18:41:16.075Z · score: 2 (2 votes) · EA · GW

I agree that idealized ethical decision making content is irrelevant for most users so should probably be less prominent

I feel like one of the key advantages of the tree structure is that it's already not too prominent. I can see the motivations for demoting it even further, but it does feel like it's in the right place with respect to the overall structure of the concepts, and it's hard to see how to de-emphasise it without losing that.

Comment by benmillwood on A new reference site: Effective Altruism Concepts · 2016-12-10T18:37:18.159Z · score: 2 (2 votes) · EA · GW

To provide another perspective on UI issues (in descending order of importance in my eyes):

  • I agree that the content pages need a better way to return to their location in the main tree, although I'm not exactly sure what that would look like. Having content appear within the tree itself has downsides, like wasting page space on tree structure illustration (roughly speaking I imagine navigating the tree and reading content as separate activities, and I don't want them to interfere with each other). It's not inconceivable that you could make the content available within the tree and on separate pages, so that users could choose how/where to read it.
  • I think having "+" expand on mouse hover is a very bad idea. I should be able to move my mouse around on the page without causing radical structural changes to what is displayed. (Moreover, mouse-hover stuff doesn't tend to work so well with mobile).
  • The numbers serve some value to my eyes, but I'm not sure how much. I'd also consider having the numbers reflect the total number of children under each node, rather than just the number of immediate children. That gives you an idea of how much depth a particular subsection is covered in, and how much an undertaking it would be to read all of it, for example.
  • I agree that search is also important. You can do this the "dumb" way by just strapping a custom Google search to the page, or you can do something smarter that e.g. highlights which parts of the tree contain your search results (perhaps how many times, with totals at the parent nodes). This smarter search seems like a low priority, but once I came up with it I thought it was too cute not to share.
  • I disagree that clicking on + nodes is too hard, although I agree that it's intuitive to expect clicking on the text of the parents to have the same effect. A simple solution would be to have the first child of every parent be a summary of that parent, but I'm not convinced any solution is necessary.

Comment by benmillwood on A new reference site: Effective Altruism Concepts · 2016-12-10T18:18:07.149Z · score: 1 (1 votes) · EA · GW

Yeah, I see potential for this to be useful even if no-one uses it who isn't already familiar with the content: just structuring and categorising the information allows us to be clearer about which questions we can and can't answer, and be more aware of our conceptual gaps or weak points. I see that as a really useful and underrated clarifying tool, and I'm excited to see it develop further.

If the structuring and organizing of the content is a big part of its added value, that can be hard to preserve in a wiki or forum, which are often chaotic by nature. There's probably a trade-off between

  1. curation of content, particularly ensuring that content meets overarching goals and broad organizational principles, avoids duplication, self-contradiction, etc.
  2. quantity and depth of content, responsiveness to changes and developments, representation of a range of perspectives, and some sense of community-wide legitimacy

Broadly speaking, I'd guess that getting more people involved hurts (1) and helps (2). We already have a forum and a wiki, so maybe (2) is better served by existing resources, and your comparative advantage is (1). But I'm open-minded about the possibility that you can find a way to manage the tradeoff and maintain the structure despite an open contribution model.

Comment by benmillwood on Thoughts on the Reducetarian Labs MTurk Study · 2016-12-04T16:13:07.875Z · score: 0 (0 votes) · EA · GW

I'm excited to see more research like this produced, on this and other topics – are you able (both in terms of permission and in terms of capability) to tell us how much this study cost, both in terms of money and time?

Comment by benmillwood on The Best of EA in 2016: Nomination Thread · 2016-11-20T08:52:17.672Z · score: 1 (1 votes) · EA · GW

GiveWell's suggested questions to ask are at their Do-It-Yourself Charity Evaluation Questions page.

Comment by benmillwood on .impact updates (1 of 3): New leadership, organizational overview and changes, LEAN · 2016-10-10T12:56:01.146Z · score: 0 (0 votes) · EA · GW

How would you compare LEAN with GWWC's chapters? Do you support GWWC chapters directly?

Comment by benmillwood on Review of EA Global 2016 · 2016-09-27T16:40:54.900Z · score: 2 (2 votes) · EA · GW

That would be true, except:

  • we may care about "fairness" of price-setting, or other non-economic motivations,
  • we may expect to be able to affect the price by objecting to it,
  • we may use reasonable pricing as a proxy for other forms of reasonableness.
Comment by benmillwood on All causes are EA causes · 2016-09-27T15:02:34.442Z · score: 2 (2 votes) · EA · GW

I note that this is a discussion about a view which we have essentially one person arguing for and already many people arguing against, and so in the interests of not burning Ian out, I suggest pro-status-quo people put more effort than usual into being concise, and perhaps consider letting existing threads play out a little before adding more balls to be juggled :)

I realise this might come across as patronising or unwelcoming or something. There's an unfortunate social norm that "organizational" often correlates with "authoritative". I explicitly disclaim authority on this matter, just trying to make some commons less tragic.

Comment by benmillwood on How to Measure and Optimize EA Marketing · 2016-09-02T16:58:09.514Z · score: 5 (5 votes) · EA · GW

This is a detailed and thorough article, thanks!

I would have found it easier to follow if you had included a bit more context at the start: who you are, why you are writing this article, and what audience it is primarily intended for, that sort of thing.

Comment by benmillwood on Should you switch away from earning to give? Some considerations. · 2016-08-27T04:11:44.663Z · score: 3 (3 votes) · EA · GW

On the other hand, they more often have enough spare money to fly halfway around the world to a conference.

Comment by benmillwood on June 2016 GiveWell board meeting · 2016-08-23T16:46:30.667Z · score: 0 (0 votes) · EA · GW

I looked closer at the copyright situation. I quoted the copyright footer on the page accurately, but the "create page" dialog has this commentary:

Please note that all contributions to EA Wiki are considered to be released under the Creative Commons Attribution-ShareAlike 3.0 Unported (see EA Wiki:Copyrights for details). If you do not want your writing to be edited mercilessly and redistributed at will, then do not submit it here. You are also promising us that you wrote this yourself, or copied it from a public domain or similar free resource. Do not submit copyrighted work without permission!

That sounds less equivocal about the permitted licenses, and suggests I can't have a CC BY-NC-SA thing there.

Meanwhile, that EA Wiki:Copyrights link actually says:

Original content is available under the CC-0 licence unless otherwise noted. Logos, taglines, and other non-original content is owned by their respective entities, who may request removal by contacting an administrator.

which seems to contradict both the footer and the dialog :/

I'll find someone I can contact to ask for clarification, and I'll post again here if I make any progress.

Comment by benmillwood on June 2016 GiveWell board meeting · 2016-08-21T15:46:58.468Z · score: 1 (1 votes) · EA · GW

I have a program (pandoc) which converts Markdown syntax to MediaWiki, so I can overcome (1), though it's admittedly awkward having to potentially copy changes back and forth. I believe (2) is not an issue as long as the license is clearly stated: the wiki footer only says "Content is available under Creative Commons Attribution-ShareAlike 3.0 Unported unless otherwise noted." (emphasis mine)
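(For anyone wanting to do the same conversion, a minimal sketch of the pandoc invocation — the filenames `notes.md` and `notes.wiki` are just placeholders:)

```shell
# Convert a Markdown file to MediaWiki markup with pandoc,
# specifying the input (-f) and output (-t) formats explicitly.
pandoc -f markdown -t mediawiki notes.md -o notes.wiki
```

Omitting `-o` prints the converted markup to stdout, which is handy for pasting straight into a wiki edit box.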

I'm happy to create the page and put the content on it and ferry any edits back into pull requests for you, provided there's no other reason you don't want me to do so? (you can PM me if you want)

(Now that I'm talking about this, I think what I'd really want would be something where you had the brief notes, but you could click to expand individual brief notes to full transcript where available. But neither of our venues are natively capable of that, I think.)

Comment by benmillwood on Ideas for Future Effective Altruism Conferences: Open Thread · 2016-08-20T10:38:08.222Z · score: 1 (1 votes) · EA · GW

I think there's a risk that explicit computations might lead both your audience and you to overestimate your confidence.

Moreover, doing them in a way that's well-calibrated to potential sources of risk and error is a skill, and I wouldn't want to suggest to people giving presentations either that they should make something well out of their field of expertise an important part of their talk, or that they shouldn't give a talk if they're unable to accurately compute EVs for the things they suggest.