Posts

BenMillwood's Shortform 2019-08-29T17:31:56.643Z

Comments

Comment by BenMillwood on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T15:20:38.248Z · EA · GW

Sure, precommitments are not certain, but they're a way of raising the stakes for yourself (putting more of your reputation on the line) to make it more likely that you'll follow through, and to make other people more confident that you will.

In other words: of course you don't have any way to reach probability 0, but you can form intentions and make promises that reduce the probability. (I guess technically this is "restructuring your brain"?)

Comment by BenMillwood on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T14:29:41.119Z · EA · GW

Yeah, that did occur to me. I think it's more likely that he's telling the truth, and even if he's lying, I think it's worth engaging as if he's sincere, since other people might sincerely believe the same things.

Comment by BenMillwood on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T20:37:46.495Z · EA · GW

I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.

I think one important disanalogy between real nuclear strategy and this game is that there's essentially no reason to press the button, which means we don't really understand the motives of anyone who does, which in turn makes it less clear that this kind of comment addresses those motives.

Consider that the last time LessWrong was persuaded to destroy itself, it was approximately by accident. Especially given that the event we're commemorating was essentially another accident, I think the most likely story for why one of the sites gets destroyed is not intentional, and thus not affected by precommitments to retaliate.

Comment by BenMillwood on Cultured meat predictions were overly optimistic · 2021-09-19T23:47:25.136Z · EA · GW

While I think it's useful to have concrete records like this, I would caution against drawing conclusions about the cultured meat community specifically unless we compare it with other fields and find that forecast accuracy is any better elsewhere. I'd expect overoptimistic forecasts to be very common whenever people evaluate their own work, in any field.

Comment by BenMillwood on The motivated reasoning critique of effective altruism · 2021-09-18T11:29:48.014Z · EA · GW

Another two examples off the top of my head:

Comment by BenMillwood on Three charitable recommendations for COVID-19 in India · 2021-05-08T15:40:05.208Z · EA · GW

GiveIndia says donations from India or the US are tax-deductible.

Milaap says there are tax benefits for donations, but I couldn't find a more specific statement, so I guess that's just in India?

Does anyone know a way to donate tax-deductibly from other jurisdictions? If 0.75x - 2x is accurate, it seems like for some donors that could make the difference.

(Siobhan's comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).

Comment by BenMillwood on AMA: Toby Ord @ EA Global: Reconnect · 2021-03-17T21:13:26.871Z · EA · GW

You've previously spoken about the need to reach "existential security" -- in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?

Comment by BenMillwood on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-27T14:06:27.418Z · EA · GW

It seems plausible that reasonable people might disagree on whether student groups would, on the whole, benefit from conforming more or less to the EA consensus. One person's "value drift" might be another person's "conceptual innovation / development".

On balance I think it more likely that an EA group would be co-opted in the way you describe than that one would hold back from doing something effective out of worry that it was too "off-brand", but the latter seems worth mentioning as a possibility.

Comment by BenMillwood on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-27T13:58:18.297Z · EA · GW

I think this post doesn't explicitly recognize an important (to me) upside of doing this, one which applies to doing anything that other people aren't doing: potential information value.

This post exists because people tried something different and were thoughtful about the results, and now potentially many other people in similar situations can benefit from the knowledge of how it went. On the other hand, if you try it and it's bad, you can write a post about what difficulties you encountered so that other people can anticipate and avoid them better.

By contrast, naming your group Effective Altruism Erasmus wouldn't have led to any new insights about group naming.

Comment by BenMillwood on Deference for Bayesians · 2021-02-16T22:33:45.363Z · EA · GW

Bluntly, I think a prior of 98% is extremely unreasonable. Someone who had thoroughly studied the theory and all credible counterarguments against it, had long discussions about it with experts who disagreed, and so on, could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't, IMO, reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.

Even in a field as empirically grounded and verifiable as physics, for much of its history the dominant theoretical framework has had significant omissions or blind spots that would occasionally lead to faulty results when applied to previously unexplored areas. Economic theory is much less reliable. You're correct to highlight that economic data can be unreliable too, and it's certainly true that many people overestimate the size of Bayesian updates based on shaky data, and should perhaps stick to their priors more. But let's not kid ourselves about how good our cutting edge of theoretical understanding is in fields like economics and medicine – and let's not kid ourselves that nonspecialist amateurs can reach even that level of accuracy.
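
To make concrete how resistant a 98% prior is (a simple odds-form Bayes calculation; the 10:1 likelihood ratio is an illustrative number of my own, not from the post): 98% corresponds to odds of 49:1, so even evidence ten times likelier under the rival hypothesis only brings you down to about 83%:

\[
\frac{49}{1} \times \frac{1}{10} = \frac{49}{10}, \qquad \frac{49}{49 + 10} \approx 0.83
\]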

Comment by BenMillwood on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T16:18:16.889Z · EA · GW

I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, and expresses themselves creatively makes the world better. (That said, I think improving the lives of existing people is currently a better way to achieve that than creating more people -- but I wouldn't say that creating more is wrong.)

Moreover, I think this post misses the instrumental value of people, too. To understand the all-inclusive impact of an additional person on the environment, you surely have to also consider the chance that they become a climate researcher or activist, or a politician, or a worker in a related technical field; or even more indirectly, that they contribute to the social and economic environment that supports people who do those things. For sure, that social and economic environment supports climate damage as well, but deciding how these factors weigh up means (it seems to me) deciding whether human social and technological progress is good or bad for climate change, and that seems like a really tricky question, never mind all the other things it's good or bad for.

Comment by BenMillwood on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T13:53:36.829Z · EA · GW

    The only place where births per woman are not close to 2 is sub-saharan Africa. Thus, the only place where family planning could reduce emissions is sub-saharan Africa, which is currently a tiny fraction of emissions.

This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don't dispute the substance of the argument: it seems relatively difficult to claim that there's a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.

Comment by BenMillwood on Deference for Bayesians · 2021-02-14T13:41:40.595Z · EA · GW

I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.

On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.

On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel like you resolved the conflict: someone who believed the evidence proved the priors wrong won't find anything in your examples to change their mind. For drinking during pregnancy, I'm not even really convinced there is a conflict: I suspect the heart of the matter is what people mean by "safe", and what risks or harms are small enough to be ignored.

I think in general there are for sure some cases where priors should be given more weight than they're currently afforded. But it also seems like there are often cases where intuitions are bad, where "it's more complicated than that" tends to dominate, where there are always more considerations or open uncertainties than one can adequately navigate on priors alone. I don't think this post helps me understand how to distinguish between those cases.

Comment by BenMillwood on Where I Am Donating in 2016 · 2021-02-14T01:17:13.745Z · EA · GW

I don't know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)

Comment by BenMillwood on BenMillwood's Shortform · 2020-10-23T16:08:56.754Z · EA · GW

Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P

For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- investing in tobacco companies as an anti-smoking campaigner, in the coal industry as a climate change campaigner, etc. The idea is that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)

Comment by BenMillwood on BenMillwood's Shortform · 2020-10-23T16:04:24.347Z · EA · GW

I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that a particular industry is stronger / weaker than the market perceives it to be is surely fine; that's basically what active investors do, right?)

(Some people believe the market is efficient even with respect to private information. I don't understand those people.)

However, I have my own counterargument, which is that the "conflict of interest" claim seems just kind of confused in the first place. If you hear someone criticizing a company, and you know that they have shorted the company, should that make you believe the criticism more or less? Taking the short position as some kind of fixed background information, it clearly skews incentives. But the short position isn't just a fixed fact of life: it is itself evidence about the critic's true beliefs. The critic chose to short and criticize this company and not another one. I claim the short position is a sign that they do truly believe the company is bad. (Or at least that it can be made to look bad, but it's easiest to make a company look bad if it actually is.) In the case where the critic does not have a short position, it's almost tempting to ask why not, and wonder whether it's evidence they secretly don't believe what they're saying.

All that said, I agree that none of this matters from a PR point of view. The public perception (as I perceive it) is that to short a company is to vandalize it, basically, and probably approximately all short-selling is suspicious / unethical.

Comment by BenMillwood on Objections to Value-Alignment between Effective Altruists · 2020-07-19T14:34:12.726Z · EA · GW

Here are a couple of interpretations of value alignment:

  • A pretty tame interpretation of "value-aligned" is "also wants to do good using reason and evidence". In this sense, distinguishing between value-aligned and non-aligned hires is basically distinguishing between people who are motivated by the cause and people who are motivated by the salary or the prestige or similar. It seems relatively uncontroversial that you'd want to care about this kind of alignment, and I don't think it reduces our capacity for dissent: indeed people are only really motivated to tell you what's wrong with your plan to do good if they care about doing good in the first place. I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation". I'd be interested in whether you agree.
  • Another (potentially very specific and constraining) interpretation of "value alignment" that I understand people to be talking about when they're hiring for EA roles is "I can give this person a lot of autonomy and they'll still produce results that I think are good". This recommends people who essentially have the same goals and methods as you right down to the way they affect decisions about how to do your job. Hiring people like that means that you tax your management capacity comparatively less and don't need to worry so much about incentive design. To the extent that this is a big focus in EA hiring it could be because we have a deficit of management capacity and/or it's difficult to effectively manage EA work. It certainly seems like EA research is often comparatively exploratory / preliminary and therefore underspecified, and so it's very difficult to delegate work on it except to people who are already in a similar place to you on the matter.

Comment by BenMillwood on BenMillwood's Shortform · 2020-07-08T05:00:40.820Z · EA · GW

Though betting money is a useful way to make epistemics concrete, it sometimes introduces considerations that tease the bet apart from the outcomes and probabilities you actually wanted to discuss. Here are some circumstances in which it can be much more difficult to get the outcomes you want from a bet:

  • When the value of money changes depending on the different outcomes,
  • When the likelihood of people being able or willing to pay out on bets changes under the different outcomes.

As an example, I saw someone claim that the US was facing civil war. Someone else thought this was extremely unlikely, and offered to bet on it. You can't make bets on this! The value of the payout varies wildly depending on the exact scenario (are dollars lifesaving or worthless?), and more to the point the last thing on anyone's minds will be internet bets with strangers.

In general, you can't make bets about major catastrophes (leaving aside the question of whether you'd want to), and even with non-catastrophic geopolitical events, the bet you're making may not be the one you intended to make, if the value of money depends on the result.
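
One minimal way to formalize this (a sketch; p, u_C, u_N, x, and y are my own illustrative symbols, not from the original exchange): suppose the event has probability p, and a dollar is worth u_C in the catastrophe world but u_N in the normal world. The stakes x : y at which the bet feels fair satisfy

\[
p \, u_C \, x = (1 - p) \, u_N \, y
\]

so the odds you'd accept are skewed away from p : (1 - p) by the factor u_N / u_C, and the bet no longer reveals the probability you wanted to discuss.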

A related idea is that you can't sell (or buy) insurance against scenarios in which insurance contracts don't pay out, including most civilizational catastrophes, which can make it harder to use traditional market methods to capture the potential gains from (say) averting nuclear war. (Not impossible, but harder!)

Comment by BenMillwood on Ramiro's Shortform · 2020-07-08T03:56:59.796Z · EA · GW

I don't think this is a big concern. When people say "timing the market" they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)

Comment by BenMillwood on Slate Star Codex, EA, and self-reflection · 2020-07-06T16:07:20.755Z · EA · GW

While I think it's important to understand what Scott means when he says "eugenics", I think:

a. I'm not certain that clarifying you mean "liberal eugenics" will actually pacify the critics, depending on why they think eugenics is wrong,

b. if there are really two kinds of thing called "eugenics", one with a long history of being practiced coercively by horrible, racist people to further their horrible, racist views, and the other just fine, then I think Scott is reckless in using the word here. I had never heard of "liberal eugenics" before reading this post. I don't think it's unreasonable of me to hear "eugenics" and think "oh, you mean that racist, coercive thing".

I don't think Scott is racist or a white supremacist but based on stuff like this I don't get very surprised when I find people who do.

Comment by BenMillwood on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-06T13:40:23.957Z · EA · GW

I'm very motivated to make accurate decisions about when it will be safe for me to see the people I love again. I'm in Hong Kong and they're in the UK, though I'm sure readers will prefer generalizable stuff. Do you have any recommendations about how I can accurately make this judgement, and who or what I should follow to keep it up to date?

Comment by BenMillwood on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-06T13:32:38.593Z · EA · GW

Do you think people who are bad at forecasting or related skills (e.g. calibration) should try to become mediocre at it? (Do you think people who are mediocre should try to become decent but not great? etc.)

Comment by BenMillwood on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-06T13:31:07.618Z · EA · GW

As someone with some fuzzy reasons to believe in their own judgement, but little explicit evidence of whether I would be good at forecasting or not, what advice do you have for figuring out if I would be good at it, and how much do you think it's worth focusing on?

Comment by BenMillwood on How to Fix Private Prisons and Immigration · 2020-06-21T07:32:57.738Z · EA · GW

    No one is going to run a prison for free--there has to be some exchange of money (even in public prisons, you must pay the employees). Whether that exchange is moral or not, depends on whether it is facilitated by a system that has good consequences.

In the predominant popular consciousness, this is not sufficient for the exchange to be moral. Buying a slave and treating them well is not moral, even if they end up with a happier life than they otherwise would have had. Personally, I'm consequentialist, so in some sense I agree with you, but even then, "consequences" includes all consequences, including those on societal norms, perceptions, and attitudes, so in practice framing effects and philosophical objections do still have relevance.

Of course there has to be an exchange of money, but it's still very relevant what, conceptually or practically, that money buys. We have concepts like "criminal law" and "human rights" because we see benefits to not permitting everything to be bought or sold or contracted, so it's worth considering whether something like this crosses one of those lines.

    Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn't help maximize societal contribution.

I agree that seems likely, but in my mind it's not the main reason to prevent it, and treating it as an afterthought or a happy coincidence is a serious omission. If your prison system's foundational goal doesn't recognize what (IMO) may be the most serious negative consequence of prison as it exists today, then your goal is inadequate. Indirect effects can't patch that.

As a concrete example, there are people that you might predict are likely to die in prison (e.g. they have a terminal illness with a prognosis shorter than their remaining sentence). Their expected future tax revenue is roughly zero. Preventing their torture is still important, but your system won't view it as such.

Now that I'm thinking about it, I'm more convinced that this is exactly the kind of thing people are concerned about when they are concerned about commodification and dehumanization. Your system attempts to quantify the good consequences of rehabilitation, but entirely omits the benefits for the person being rehabilitated. You measure them only by what they can do for others – how they can be used. That seems textbook dehumanization to me, and the concrete consequence is that when they can't be used they are worthless, and need not be protected or cared for.

Comment by BenMillwood on How to Fix Private Prisons and Immigration · 2020-06-20T08:58:11.144Z · EA · GW

As my other comment promised, here are a couple of criticisms of your model on its own terms:

  • "If the best two prisons are equally capable, the profit is zero. I.e. criterion 3 is satisfied." I don't see why we should assume the best two prisons are equally capable? Relatedly, if the profit really is zero, I don't see why any prison would want to participate. But perhaps this is what your remark about zero economic profit is meant to address. I didn't understand that; perhaps you can elaborate.
  • Predicting the total present value of someone's future tax revenue minus welfare costs just seems extremely difficult in general (see the sketch after this list). It will have major components that are just general macroeconomic trends or tax policy projections. While you are in part rewarding people who manage to produce better outcomes, you are also rewarding people who are simply best able to spot already-existing good (or bad) outcomes, especially if you allow these things to be traded on a secondary market.
  • You say things like "whenever the family uses a government service, the government passes the cost on to the company" as if the costs of doing so are always transparent or easy (or wise) to track. I guess an easy example would be the family driving down a public road, which is in some sense "using a public service" but in a way that isn't usually priced, and arguably it would be very wasteful to do so. Other examples are things like using public education, where it's understood that the cost is worth it because there's a benefit, but the benefit isn't necessarily easy to capture for the company who had to pay for the education. Amount of tax paid on salary doesn't reliably reflect amount of public benefit of someone doing their job, for a variety of reasons: arguably this is some kind of economic / market failure, but it is also undeniably the reality we live in. In essence, this is saying that many things are funded by taxation and not privately precisely because it's difficult or otherwise undesirable to do this kind of valuation for them.
  • Once you've extended your suggestion to prisoners and immigrants, I think it's worth asking why you can't securitize anyone's future "societal contributions". One obvious drawback is that once this happens on a large enough scale, it starts distorting the incentives of the government, which is after all elected by people who are happy when taxes go down, but no longer raises (as much) additional revenue for itself when taxes go up.
  • In part, I think the above remark goes to the core of the philosophical legitimacy of taxation: it's worth considering how the slogan "no taxation without representation" applies to people whose taxes go to a corporation that they have no explicit control over.
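
To spell out what such a security has to price (a standard discounted-sum sketch; T_t, W_t, and r are my own symbols for year-t tax paid, year-t welfare and service costs, and the discount rate – none of them from the post):

\[
\text{PV} = \sum_{t=0}^{\infty} \frac{T_t - W_t}{(1 + r)^t}
\]

Every term is uncertain, and several of them (future tax policy, macroeconomic growth, the discount rate itself) have nothing to do with any individual prison's performance.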

Comment by BenMillwood on How to Fix Private Prisons and Immigration · 2020-06-20T08:07:21.372Z · EA · GW

My instinctive emotional reaction to this post is that it worries me, because it feels a bit like "purchasing a person", or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line (indeed, parts of your analysis explicitly ignore non-monetary aspects of people's interactions with society and the state; as far as I can tell, all of it ignores the benefits to the inmate of different treatment by different prisons).

This is in a context where the prison system is already seen by some as effectively legal slavery by the back door, where people believe that (for example) black people have been deliberately criminalized by the war on drugs as part of an explicit effort to exploit and suppress them, and where popular rhetoric around criminals has always been eager to treat them as less than human.

This perspective explicitly doesn't address the specific content of your proposal, but I think there's a few reasons why it's important to pay attention to it:

  • It can help you to communicate better with people about your proposal, understand why they might be hostile to it, and what you can do to distance yourself from implicit associations that you don't want.
  • It raises the idea that perhaps viewing the problem with prisons as one of incentive design is missing the point entirely – the problem is not a misalignment of interests of the government and private prisons, but that the interests of the government are wrong in the first place. If true, that diagnosis prompts an entirely different kind of solution.
  • Similarly, you don't address the reason why the seemingly-terrible existing incentive structure exists already, and the role played in its construction and maintenance by lobbying from prison groups and corruption in politicians. Keeping that in mind, you need to think not only how your proposal would function now, but how it would be mutated by continued lobbying and corruption, if they were left unaddressed.

I'm posting this as a "frame challenge", but I think I also have some critiques of your model on its own terms, which I'll post as a separate comment.

Comment by BenMillwood on How to Fix Private Prisons and Immigration · 2020-06-20T06:51:23.142Z · EA · GW

As an off-topic aside, I'm never sure how to vote on comments like this. I'm glad the comment was made and want to encourage people to make comments like this in future. But, having served its purpose, it's not useful for future readers, so I don't want to sort it to the top of the conversation.

Comment by BenMillwood on Will protests lead to thousands of coronavirus deaths? · 2020-06-06T10:59:38.231Z · EA · GW

The number of possible pairs of people in a room of n people is about n^2/2, not n factorial. 10^2 is many orders of magnitude smaller than 10! :)

(I think you are making the mistake of multiplying together the contacts from each individual, rather than adding them together)
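
Spelling out the arithmetic (a standard combinatorial identity, nothing beyond what the comment already says):

\[
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \binom{10}{2} = 45 \ \text{versus} \ 10! = 3{,}628{,}800
\]

so for ten people the comparison is 45 pairs against roughly 3.6 million, nearly five orders of magnitude apart.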

Comment by BenMillwood on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T21:11:15.676Z · EA · GW

I strongly agree with both this specific sentiment and the general attitude that generates sentiments like this.

However, I think it's worth pointing out that you don't have to agree with the Labour Party's current positions, or think that it's doing a good job, to be a good (honest) member. I think as long as you sincerely wish the party to perform well in elections or have more influence, even if you hope to achieve that by nudging its policy platform or general strategy in a different direction from the current one, then I wouldn't think you were being entryist or dishonest by joining.

(I feel like this criterion is maybe a bit weak and there should be some ideological essence of the Labour Party that you should agree with before joining, but I'm not sure it would be productive to pin down exactly what it was and I expect it strongly overlaps with "I want the Labour Party to do well" anyway)

Comment by BenMillwood on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T21:05:10.539Z · EA · GW

I actually thought the "of course I'd rather you'd stay a member" part was odd, since nowhere in the post up to that point had you said anything to indicate that you supported Labour yourself. The post doesn't say anything about whether Labour itself is good or bad, or whether that should factor into your decision to join it at all, but in this comment it sounds like those are crucial questions for whether this step is right or not.

Comment by BenMillwood on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-15T05:48:42.699Z · EA · GW

Yeah, I think you have to view this exercise as optimizing for one end of the correctness-originality spectrum. Most of what is submitted is going to be uncomfortable to admit in public simply because it's just plain wrong, so if this exercise is to have any value at all, it's in sifting through all the nonsense, some of it pretty rotten, in the hope of finding one or two actually interesting things.

Comment by BenMillwood on Movement Collapse Scenarios · 2019-09-03T17:26:40.583Z · EA · GW

GiveWell used to solicit external feedback a fair bit years ago, but (as I understand it) stopped doing so because it found that it generally wasn't useful. Their blog post External evaluation of our research goes some way to explaining why. I could imagine a lot of their points apply to CEA too.

I think you're coming at this from a point of view of "more feedback is always better", forgetting that making feedback useful can be laborious: figuring out which parts of a piece of feedback are accurate and actionable can be at least as hard as coming up with the feedback in the first place. While soliciting comments can give you raw material, if your bottleneck is not raw material but deciding which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

Certainly you won't gain anything for free, and you may not be able to afford the non-monetary cost.

Comment by BenMillwood on BenMillwood's Shortform · 2019-08-29T17:31:56.773Z · EA · GW

Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.

  • Consider who doesn't benefit from your article, and if you can help them filter themselves out.
  • Consider how people might skim-read your article, and how to help them derive value from it.
  • Lead with the punchline – see if you can make the most important sentence in your article the first one.
  • Some information might be clearer in a non-discursive structure (like… bullet points, I guess).

Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you're both working on.

Comment by BenMillwood on Ask Me Anything! · 2019-08-29T17:25:24.780Z · EA · GW

Why would the whole community read it? You'd set out in the initial post, as Will has done, why people might or might not be interested in what you have to say, and only people who passed that bar would spend any real time on it. I don't think the bar should be that high.

Comment by BenMillwood on Movement Collapse Scenarios · 2019-08-27T11:52:52.149Z · EA · GW

This is a question I consider crucial in evaluating the work of organizations, so it's sort of embarrassing I've never really tried to apply it to the community as a whole. Thanks for bringing that to light.

I think one thing uniting all your collapse scenarios is that they're gradual. I wonder how much damage could be done to EA by a relatively sudden catastrophe, or perhaps a short-ish series of catastrophes. A collapse in community trust could be a big deal: say there was a fraud or embezzlement scandal at CEA, OPP, or GiveWell. I'm not sure that would be catastrophic by itself, but perhaps if several of the organizations were damaged at once it would make people skeptical about the wisdom of reforming around any new centre, which would make it much harder to co-ordinate.

Another thing that I see as a potential risk is high-level institutions having a pattern of low-key misbehaviour that people start to see (wrongly, I hope) as an inevitable consequence of the underlying ideas. Suppose the popular perception starts to be "thinking about effectiveness in charity is all well and good, but it inevitably leads down a road of voluntary extinction / techno-utopianism / eugenics / something else low-status or bad". Depending on how bad the thing is, smart thoughtful people might start self-selecting out of the movement, and the remainder might mismanage perceptions of them even worse.

Comment by BenMillwood on Open Thread #45 · 2019-07-28T16:42:06.130Z · EA · GW

Recent EA thinking on this is probably mostly:

  • Founders' Pledge's research
  • Let's Fund's research

Both are claiming to have done a lot of research, but I don't think either Founders' Pledge or Let's Fund has a GiveWell-like track record, and I'm slightly nervous that we're repeating the mistake we (as a community) made when we recommended Cool Earth based on Giving What We Can's relatively cursory investigation into it, after which an only somewhat less cursory investigation suggested it wasn't much use.

Comment by BenMillwood on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-17T18:30:54.738Z · EA · GW

I don't think that "going silent" or failing to report donations is indication that people are not meeting the pledge. Nowadays I don't pay GWWC as an organisation much / any attention, but I'm still donating 10% a year (and then some).

To be honest I haven't read closely enough to understand where you do and don't account for "quiet pledge-keepers" in your analysis, but I at least think stuff like this is just plain wrong:

    total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)

Comment by BenMillwood on Amazon Smile · 2019-06-15T22:19:10.033Z · EA · GW

I couldn't find The Clear Fund when I looked just now. Would be interested in someone confirming that it's still there.

Comment by BenMillwood on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-06-07T14:30:38.120Z · EA · GW

If you want to look up the maths elsewhere, it may be helpful to know that a constant, independent chance of death (or survival) per year is modelled by a geometric distribution (the simplest case of the negative binomial).
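
Concretely (a standard formula; p here is my notation for the constant annual chance of death):

\[
P(\text{death in year } t) = (1 - p)^{t-1} \, p, \qquad E[\text{year of death}] = \frac{1}{p}
\]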

Comment by BenMillwood on Evidence Action is shutting down No Lean Season · 2019-06-07T14:23:30.241Z · EA · GW

It sounds like the fact that there was already substantial doubt over whether the program worked was a key part of the decision to shut it down. That suggests that if the same kind of scandal had affected a current top charity, they would have worked harder to continue the project.

Comment by BenMillwood on There's Lots More To Do · 2019-06-06T03:20:16.824Z · EA · GW

I actually think even justifying yourself only to yourself, being accountable only to yourself, is probably still too low a standard. No-one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), it's necessary to meet the standards of those individuals or the community as a whole about what's acceptable.

Comment by BenMillwood on There's Lots More To Do · 2019-06-06T03:06:44.135Z · EA · GW

When you say "you don't need to justify your actions to EAs", then I have sympathy with that, because EAs aren't special, we're no particular authority and don't have internal consensus anyway. But you seem to be also arguing "you don't need to justify your actions to yourself / at all". I'm not confident that's what you're saying, but if it is I think you're setting too low a standard. If people aren't required to live in accordance with even their own values, what's the point in having values?

Comment by BenMillwood on Could the crowdfunder to prosecute Boris Johnson be a high impact donation opportunity? · 2019-06-06T01:48:44.727Z · EA · GW

It's odd to call Boris an opponent of the government. He's a sitting MP; he's part of the state. To me this seems to be more about the courts being able to hold Parliament accountable.

Comment by BenMillwood on Stories and altruism · 2019-05-20T09:17:49.597Z · EA · GW

I like the idea here a great deal, but I expect there's going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal versus what's idiosyncratic.

Comment by BenMillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T15:19:57.653Z · EA · GW

There's an unanswered question here of why Good Ventures makes grants that OpenPhil doesn't recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don't find it that surprising that they do so. People like to do more than one thing?

Comment by BenMillwood on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T05:05:05.110Z · EA · GW

Have you attempted to contact GV or OpenPhil directly about this?

Comment by BenMillwood on Political culture at the edges of Effective Altruism · 2019-04-14T12:17:17.199Z · EA · GW

I think this is only true with a very narrow conception of what the "EA things that we are doing" are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.

That's all I believe constitutes "EA things" in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally "the experts" on. If the global research community on poverty interventions came to the consensus "actually we think bednets are bad now" then EA orgs would need to listen to that and change course.

"Politicized" questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.

Comment by BenMillwood on Does EA need an underpinning philosophy? Could sentientism be that philosophy? · 2019-03-30T18:24:19.746Z · EA · GW

Downvotes aren't primarily to help the person being downvoted. They help other readers, of whom there are, after all, many more than there are writers. Creating an expectation that every downvote should be explained significantly increases the burden on the downvoter, making downvotes less likely to be used and therefore less useful.

Comment by BenMillwood on Apology · 2019-03-25T15:05:11.961Z · EA · GW

Just to remark on the "criminal law" point – I think it's appropriate to apply a different, and laxer, standard here than we do for criminal law, because:

  • the penalties are not criminal penalties, and in particular do not deprive anyone of anything they have a right to, like their property or freedom – CEA are free to exclude anyone from EAG who in their best judgement would make it a worse event to attend,
  • we don't have access to the kinds of evidence or evidence-gathering resources that criminal courts do, so realistically it's pretty likely that in most cases of misconduct or abuse we won't have criminal-standard evidence that it happened, and we'll have to either act despite that or never act at all. Some would defend never acting at all, I'm sure (or acting in only the most clear-cut cases), but I don't think it's the mainstream view.
Comment by BenMillwood on Apology · 2019-03-25T14:04:17.369Z · EA · GW

    And this is a clear case in which I would have first-person authority on whether I did anything wrong.

I think this is the main point of disagreement here. Generally when you make sexual or romantic advances on someone and those advances make them uncomfortable, you're often not aware of the effect that you're having (and they may not feel safe telling you), so you're not the authority on whether you did something wrong.

Which is not to say that you're guilty because they accused you! It's possible to behave perfectly reasonably and for people around you to get upset, even to blame you for it. In that scenario you would not be guilty of doing anything wrong necessarily. But more often it looks like this:

  • someone does something inappropriate without realizing it,
  • impartial observers agree, having heard the facts, that it was inappropriate,
  • it seems clearly-enough inappropriate that the offender had a moral duty to identify it as such in advance and not do it.

Then they need to apologize and do what's necessary to prevent it happening again, including withdrawing from the community if necessary.