Posts

BenMillwood's Shortform 2019-08-29T17:31:56.643Z

Comments

Comment by Ben Millwood (BenMillwood) on EAs should recommend cost-effective interventions in more cause areas (not just the most pressing ones) · 2022-09-01T18:47:40.570Z · EA · GW

Though the flip side of this is that I think we probably don't have a bunch of people sitting around thinking "ah, I would do a cost-benefit analysis, but none of the things to analyse are worth my time", so reading this post probably doesn't generate LICAs unless we also figure out what people are missing that would let them do more of this kind of work.

I expect partly it's just that doing Real, Important Research is more intimidating than it deserves to be, and it would be useful to try to "demystify" some of this work a bit.

Comment by Ben Millwood (BenMillwood) on EAs should recommend cost-effective interventions in more cause areas (not just the most pressing ones) · 2022-09-01T18:40:17.771Z · EA · GW

Another possible benefit is that doing cost-benefit analyses might make you better at doing other cost-benefit analyses, or give you other transferable skills or knowledge that are helpful for top-priority causes. I think that for all our enthusiasm about these kinds of assessments, we don't actually, as a community, produce that many of them. Scaling up the analysis industry might lead to all sorts of improvements in how quickly and accurately we can do them.

Comment by Ben Millwood (BenMillwood) on Earn To Give $1M/year or Work Directly? · 2022-08-29T22:19:18.118Z · EA · GW

I think it's more like: CEA projects are limited by other essential resources (like staff, management capacity, onboarding capacity) before they run out of money.

(I agree it's not 0/1 exactly, but it's not as easy as you'd think to just spend more money and get more good stuff.)

Comment by Ben Millwood (BenMillwood) on How many EA billionaires five years from now? · 2022-08-20T17:50:10.410Z · EA · GW

Upvoted because I don't think this tension is discussed enough, even if only to refute it.

It strikes me that the median non-EA is more risk averse than EAs should be, so in moving from non-EA to EA work you should probably drop some of your risk aversion. But it does also seem true that the top-performing people in your field might disproportionately be people who took negative-EV bets and got lucky, so we don't necessarily want to be less risk averse than them.
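
For concreteness, here's a quick Monte Carlo sketch of that selection effect (all numbers invented): even when the risky strategy is worse in expectation, the top of the outcome distribution can still be dominated by people who took it and got lucky.

    import random

    # Toy model: a "safe" strategy with higher expected value versus a riskier,
    # lower-EV gamble. Who ends up in the top 1% of outcomes?
    random.seed(0)
    N = 100_000  # people following each strategy

    def safe_outcome():
        return random.gauss(1.1, 0.1)          # mean 1.1, low variance

    def risky_outcome():
        return 10.0 if random.random() < 0.05 else 0.1   # mean ~0.6, high variance

    outcomes = [("safe", safe_outcome()) for _ in range(N)]
    outcomes += [("risky", risky_outcome()) for _ in range(N)]
    outcomes.sort(key=lambda pair: pair[1], reverse=True)

    top = outcomes[: len(outcomes) // 100]     # top 1% of everyone
    risky_share = sum(1 for strategy, _ in top if strategy == "risky") / len(top)
    print(f"share of the top 1% who took the risky, lower-EV bet: {risky_share:.0%}")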

Comment by Ben Millwood (BenMillwood) on Let’s not glorify people for how they look. · 2022-08-11T20:40:30.303Z · EA · GW

I think we should eliminate any discussion of attractiveness from professional spaces (as is the norm in professional spaces generally, I'd hope), but... not all EA spaces are professional spaces, and especially given that EAs often date within EA, I think it's reasonable to have a normal, respectful amount of discussion of physical appearance in social spaces. (At the same time, I agree with you that ranking every nearby woman by physical attractiveness is not respectful, and I'm on board with calling that kind of thing out as inappropriate in any context.)

I agree we should avoid fixating on it, overvaluing it, or letting our preferences for physical appearances leak into any other part of our opinion of a person, but I think suppressing it altogether is too much. It's part of how people interact with the world and I think trying to deny that it exists isn't ultimately healthy.

Comment by Ben Millwood (BenMillwood) on The AI Messiah · 2022-07-23T12:51:20.307Z · EA · GW

And we've created robots that can outperform humans in virtually all physical tasks.

Not that this is at all central to your point, but I don't think this is true. We're capable of building robots that move with more force and precision than humans, but mostly only in environments that are pretty simple or heavily customised for them. The cutting edge in robots moving over long distances or over rough terrain (for example) seems pretty far behind where humans are. Similarly, I believe fruit-picking is very hard to automate, in ways that seem likely to generalise to lots of similar tasks.

I also don't think we're very close to artificial smell, although possibly people aren't working on it very much?

Comment by Ben Millwood (BenMillwood) on Using the “executive summary” style: writing that respects your reader’s time · 2022-07-23T12:14:39.481Z · EA · GW

Another thing you can do to respect the time of your readers is to think a little about who doesn't need to read your post, and how you can get them to stop. I don't have a lot of advice about what works here, but it seems like a good goal to have.

Comment by Ben Millwood (BenMillwood) on New cause area: bivalve aquaculture · 2022-06-17T19:27:14.067Z · EA · GW

This should enable the nations become affluent more easily, because not many people would have to farm (efficiency gains would be relatively low) but industrial processing machinery will be invested into.

I don't understand this. More easily than what? What's your story for why people aren't doing this already, if it would make them more affluent?

Comment by Ben Millwood (BenMillwood) on New cause area: bivalve aquaculture · 2022-06-14T10:01:50.507Z · EA · GW

I’m informed that EAs do not care about climate change

This is an exaggeration IMO. EAs care about climate change, but often don't prioritise it, because they care about other things even more. If everything more important than climate change was solved, I think EAs would be working pretty hard on climate change.

Comment by Ben Millwood (BenMillwood) on Why the EA aversion to local altruistic action? · 2022-06-10T23:22:59.052Z · EA · GW

A brief response to one point: if you are including second-order and third-order effects in your analysis, you should include them on both sides. Yes, donating to a local cause fosters connections in the community and ultimately state capacity and so on. But saving people from malaria does that stuff too, and intuitively, when the first-order effects are more dramatic, one expects the second-order effects to be correspondingly more dramatic: you meet a new friend at your local animal shelter, and meanwhile the child that didn't die of malaria meets a whole life's worth of people, their family has less grief and trauma, their community has greater certainty and security. Of course, it's really hard to be sure of the whole story, but I don't see any reason to suppose that going one step deeper in the analysis will totally invert the conclusion of the first-level analysis.

Comment by Ben Millwood (BenMillwood) on Bad Omens in Current Community Building · 2022-05-21T15:41:28.388Z · EA · GW

I feel a desire to lower some expectations:

  • I don't think any social movement of real size or influence has ever avoided drawing some skepticism, mockery, or even suspicion.
  • I think community builders should have a solid and detailed enough understanding of EA received wisdom to be able to lay out the case for our recommendations in a reasonably credible way, but I don't think it's reasonable to expect them to be experts in every domain, and that means that sometimes they won't be able to seem impressive to every domain expert who comes to us.
  • To be frank, it isn't realistic to be able to capture the imagination of everyone who seems promising even if we make the best possible versions of our arguments. Some people will inevitably come away thinking we "just don't get it", that we haven't addressed their objections, that we're not serious about [specific concern X] and therefore our point of view is uninteresting. Communication channels just aren't high-fidelity enough, and people's engagement heuristics aren't precise enough, to avoid this happening from time to time.
  • When some people are weirded out by the way we behave or try to attract new members, it seems to me like sometimes this is just reasonable self-protective heuristics that they have, working exactly as intended. People are creeped out by us giving them free books or telling them to change their careers or telling them that the future of humanity is at stake, because they reason "these people are putting a lot into me because they want a lot out of me". They're basically correct about that! While we value contributions from people at a wide range of levels of engagement and dedication, the "top end" is pretty extreme, as it should be, and some people are going to notice that and be worried about it. We can work to reduce that tension, but I don't think it's going away.

Obviously we should try our best on all of these dimensions, progress can be made, we can be more impressive and more appealing and less threatening and more welcoming. But I can't imagine a realistic version of the EA community that honestly communicates about everything we believe and want to do and doesn't alienate anyone by doing that.

Comment by Ben Millwood (BenMillwood) on What is meant by 'infrastructure' in EA? · 2022-05-14T14:58:35.135Z · EA · GW

I think EA uses the word in a basically standard way. I imagine there being helpful things to say about "what do we mean by funding infrastructure" or "what kind of infrastructure is the EA Infrastructure Fund meaning to support", but I don't know that there's anything to say in a more general context than that.

Comment by Ben Millwood (BenMillwood) on Launching SoGive Grants · 2022-05-14T14:54:31.479Z · EA · GW

Why do you think it's valuable? I don't think we have this norm already, and it's not immediately obvious to me how it would change my behaviour.

Comment by Ben Millwood (BenMillwood) on Bad Omens in Current Community Building · 2022-05-14T13:35:54.106Z · EA · GW

I don't think we have a single "landing page" for all the needs of the community, but I'd recommend applying for relevant jobs, getting career advice, going to an EA Global conference, or figuring out which local community groups are near you and asking them for advice.

Comment by Ben Millwood (BenMillwood) on Bad Omens in Current Community Building · 2022-05-14T13:24:22.147Z · EA · GW

I agree with paragraphs 1 and 2 and disagree with paragraph 3 :)

That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They're substantially farther from the status quo and have weirder and more counterintuitive implications.

However, we don't choose what to talk about by how palatable it is. We must be guided by what's true, and what's most important. Unfortunately, we live in a world where what's palatable and what's true need not align.

To be clear, if you think global development is more important than x-risk, it makes sense to suggest that we should focus that way instead. But if you think x-risk is more important, the fact that global development is less "weird" is not enough reason to lean back that way.

Comment by Ben Millwood (BenMillwood) on Against immortality? · 2022-04-28T18:24:57.971Z · EA · GW

I don't buy the asymmetry of your scope argument. It feels very possible that totalitarian lock-in could have billions of lives at stake too, and cause a similar quantity of premature deaths.

Comment by Ben Millwood (BenMillwood) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-19T22:53:47.662Z · EA · GW

apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?

Comment by Ben Millwood (BenMillwood) on Critique of OpenPhil's macroeconomic policy advocacy · 2022-03-27T19:11:50.753Z · EA · GW

I don't understand what you think Holden / OpenPhil's bias is. I can see why they might have happened to be wrong, but I don't see what in their process makes them systematically wrong in a particular way.

I also think it's generally reasonable to form expectations about who in an expert disagreement is correct using heuristics that don't directly engage with the content of the arguments. Such heuristics, again, can go wrong, but I think they still carry information, and I think we often have to ultimately rely on them when there's just too many issues to investigate them all.

Comment by Ben Millwood (BenMillwood) on Announcing Alvea—An EA COVID Vaccine Project · 2022-03-08T12:50:36.961Z · EA · GW

(in case anyone else was confused, this was a reply to a now-deleted comment)

Comment by Ben Millwood (BenMillwood) on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-27T23:02:27.925Z · EA · GW

I don't know. Partly I think that some of those people are working on something that's also important and neglected, and they should keep working on it, and need not switch.

Comment by Ben Millwood (BenMillwood) on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-27T22:50:47.047Z · EA · GW

I think to the extent you are trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I'm unsure which side of the fence I'm on).

But I don't want people casually equivocating between x-risk reduction and EA, relegating the rest of the community to a footnote.

  • I think it's a misleading depiction of the in-practice composition of the community,
  • I think it's unfair to the people who aren't convinced by x-risk arguments,
  • I think it could actually just make us worse at finding the right answers to cause prioritization questions.
Comment by Ben Millwood (BenMillwood) on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-27T22:18:13.894Z · EA · GW

It's not enough to have an important problem: you need to be reasonably persuaded that there's a good plan for actually making the problem better, for actually getting the risk that 1% lower. It's not a universal view among people in the field that all, or even most, research that purports to be AI alignment or safety research is actually decreasing the probability of bad outcomes. Indeed, in both AI and bio it's even worse than that: many people believe that incautious action will make things substantially worse, and there's no easy road to identifying which routes are both safe and effective.

I also don't think your argument is effective against people who already think they are working on important problems. You say, "wow, extinction risk is really important and neglected" and they say "yes, but factory farm welfare is also really important and neglected".

To be clear, I think these cases can be made, but I think they are necessarily detailed and in-depth, and for some people the moral philosophy component is going to be helpful.

Comment by Ben Millwood (BenMillwood) on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-14T01:13:51.043Z · EA · GW

My main criticism of this post is that it seems to implicitly suggest that "the core action relevant points of EA" are "work on AI or bio", and doesn't seem to acknowledge that a lot of people don't have that as their bottom line. I think it's reasonable to believe that they're wrong and you're right, but:

  • I think there's a lot that goes into deciding which people are correct on this, and only saying "AI x-risk and bio x-risk are really important" is missing a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on,
  • this post seems to frame your pitch as "the new EA pitch", and it's weird to me to omit from your framing that lots of people that I consider EAs are kind of left out in the cold by it.
Comment by Ben Millwood (BenMillwood) on Long-Term Future Fund: May 2021 grant recommendations · 2021-12-27T00:36:11.114Z · EA · GW

It's been about 7 months since this writeup. Did the Survival and Flourishing Fund make a decision on funding NOVID?

Comment by Ben Millwood (BenMillwood) on Longtermism in 1888: fermi estimate of heaven’s size. · 2021-12-26T23:18:41.489Z · EA · GW

Pointing out more weirdnesses may by now be unnecessary to make the point, but I can't resist: the estimate also seems to equivocate between "number of people alive at any moment" and "number of people in each generation", as if the 900 million population consisted of a single generation that fully replaced itself every 31.125 years. Numerically this only impacts the result by a factor of 3 or so, but it's perhaps another reason not to take it as a serious attempt :)
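
To spell out the distinction with a steady-state approximation (my own back-of-the-envelope framing, not anything from the original estimate): with a constant population $N$, life expectancy $E$, and generation length $G$, the number of distinct people who live during a long period $T$ is roughly

    N \cdot \tfrac{T}{E} \quad \text{rather than} \quad N \cdot \tfrac{T}{G},

so the two readings differ by a factor of $E/G$, the ratio of lifespan to generation length.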

Comment by Ben Millwood (BenMillwood) on Is EA compatible with technopessimism? · 2021-12-26T22:40:38.540Z · EA · GW

Can you give examples of technopessimists "in the wild"? I'm sure there are plenty of examples of "folk technopessimism" but if you mean something more fleshed-out than that I don't think I've seen it expressed or argued for a lot. (That said, I'm not very widely-read, so I'm sure there's lots of stuff out there I don't hear about.)

Comment by Ben Millwood (BenMillwood) on December 2021 monthly meme post · 2021-12-01T14:11:58.493Z · EA · GW

I see the image now (weirdly, it's a stylized form of https://reductress.com/post/quiz-are-you-even-good-enough-to-have-imposter-syndrome/ )

Comment by Ben Millwood (BenMillwood) on December 2021 monthly meme post · 2021-11-30T15:24:08.904Z · EA · GW

Don't think it's hosted on the forum, when I right-click and copy image link I get https://scontent-lhr8-1.xx.fbcdn.net/v/t39.30808-6/259629194_10220039871613660_9218217279654834365_n.jpg?_nc_cat=110&ccb=1-5&_nc_sid=825194&_nc_ohc=tnrYKfG2lQ4AX8LlETd&_nc_ht=scontent-lhr8-1.xx&oh=5af3c7d105c83cc6c472526d4573647c&oe=61A320C8 which looks like a Facebook URL.

Comment by BenMillwood on [deleted post] 2021-10-18T10:08:59.946Z

"if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff". Sure, but I think the claim is that "most" AI won't be interested in doing that, and will pursue some other goal instead that doesn't really involve helping anyone.

Comment by Ben Millwood (BenMillwood) on The Cost of Rejection · 2021-10-08T19:44:31.225Z · EA · GW

It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what was good or bad about their application will help them improve that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful to the community, or at least it could save them the time and effort of applying for more jobs with the same requirements yours had -- requirements they didn't meet -- and save the hiring teams there the time and effort of rejecting them.

A unique characteristic of EA hiring is that it's often good for your goals to help candidates who didn't succeed at your process succeed at something else nearby. I often think we don't realize how significantly this shifts our incentives in cases like these.

Comment by Ben Millwood (BenMillwood) on How would you run the Petrov Day game? · 2021-09-29T22:05:36.920Z · EA · GW

Like Sanjay's answer, I think this is a correct diagnosis of a problem, but I think the advertising solution is worse than the problem.

  • A month of harm seems too long to me,
  • I can't think of anything we'd want to advertise on LW that we wouldn't already want to advertise on EAF, and we've chosen "no ads" in that case.
Comment by Ben Millwood (BenMillwood) on How would you run the Petrov Day game? · 2021-09-29T21:59:18.758Z · EA · GW

I'd like to push the opt-in / opt-out suggestion further, and say that the button should only affect people who have opted in (that is, the button bans all the opted-in players for a day, rather than taking the website down for a day). Or you could imagine running it on another venue than the Forum entirely, that was more focused on these kinds of collaborative social experiments.

I can see an argument that this takes away too much from the game, but in that case I'd lean towards just not running it at all. I think it's a cute idea but I don't think it feels important enough to me to justify obstructing unrelated uses of the forum and creating a bunch of unnecessary frustration. I'd like the forum to remain accessible to people who don't think of themselves as "in the community", and I think stuff like this gets in the way of that.

Comment by Ben Millwood (BenMillwood) on How would you run the Petrov Day game? · 2021-09-29T21:56:27.140Z · EA · GW

I think this correctly identifies a problem (not only is it a bad model for reality, it's also confusing for users IMO). I don't think extra karma points is the right fix, though, since I imagine a lot of people only care about karma insofar as it's a proxy for other people's opinions of their posts, which you can't just give 30 more of :)

(also it's weird inasmuch as karma is a proxy for social trust, whereas nuking people probably lowers your social trust)

Comment by Ben Millwood (BenMillwood) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T15:20:38.248Z · EA · GW

Sure, precommitments are not certain, but they're a way of raising the stakes for yourself (putting more of your reputation on the line) to make it more likely that you'll follow through, and more convincing to other people that this is likely.

In other words: of course you don't have any way to reach probability 0, but you can form intentions and make promises that reduce the probability (I guess technically this is "restructuring your brain"?)

Comment by Ben Millwood (BenMillwood) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T14:29:41.119Z · EA · GW

Yeah, that did occur to me. I think it's more likely that he's telling the truth, and even if he's lying, I think it's worth engaging as if he's sincere, since other people might sincerely believe the same things.

Comment by Ben Millwood (BenMillwood) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T20:37:46.495Z · EA · GW

I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.

I think one important disanalogy between real nuclear strategy and this game is that there's kind of no reason to press the button, which means that when someone does press it, we don't really understand their motives -- and so it's less clear that this kind of comment addresses those motives.

Consider that the last time LessWrong was persuaded to destroy itself, it was approximately by accident. Especially considering that the event we're commemorating was essentially another accident, I think the most likely story for why one of the sites gets destroyed is not intentional, and thus not affected by precommitments to retaliate.

Comment by Ben Millwood (BenMillwood) on Cultured meat predictions were overly optimistic · 2021-09-19T23:47:25.136Z · EA · GW

While I think it's useful to have concrete records like this, I would caution against drawing conclusions about the cultured meat community specifically unless we compare with other fields and find that forecast accuracy is any better there. I'd expect that overoptimistic forecasts are just very common when people evaluate their own work, in any field.

Comment by Ben Millwood (BenMillwood) on The motivated reasoning critique of effective altruism · 2021-09-18T11:29:48.014Z · EA · GW

Another two examples off the top of my head:

Comment by Ben Millwood (BenMillwood) on Three charitable recommendations for COVID-19 in India · 2021-05-08T15:40:05.208Z · EA · GW

GiveIndia says donations from India or the US are tax-deductible.

Milaap says they have tax benefits to donations but I couldn't find a more specific statement so I guess it's just in India?

Anyone know a way to donate with tax deduction from other jurisdictions? If 0.75x - 2x is accurate, it seems like for some donors that could make the difference.

(Siobhan's comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).

Comment by Ben Millwood (BenMillwood) on AMA: Toby Ord @ EA Global: Reconnect · 2021-03-17T21:13:26.871Z · EA · GW

You've previously spoken about the need to reach "existential security" -- in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?

Comment by Ben Millwood (BenMillwood) on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-27T14:06:27.418Z · EA · GW

It seems plausible that reasonable people might disagree on whether student groups on the whole would benefit from conforming more or less closely to the EA consensus on things. One person's "value drift" might be another person's "conceptual innovation / development".

On balance I think I find it more likely that an EA group would be co-opted in the way you describe than that an EA group would feel limited from doing something effective because they were worried it was too "off-brand", but it seems worth mentioning the latter as a possibility.

Comment by Ben Millwood (BenMillwood) on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-27T13:58:18.297Z · EA · GW

I think this post doesn't explicitly recognize a (to me) important upside of doing this, which applies to doing all things that other people aren't doing: potential information value.

This post exists because people tried something different and were thoughtful about the results, and now potentially many other people in similar situations can benefit from the knowledge of how it went. On the other hand, if you try it and it's bad, you can write a post about what difficulties you encountered so that other people can anticipate and avoid them better.

By contrast, naming your group Effective Altruism Erasmus wouldn't have led to any new insights about group naming.

Comment by Ben Millwood (BenMillwood) on Deference for Bayesians · 2021-02-16T22:33:45.363Z · EA · GW

Bluntly, I think a prior of 98% is extremely unreasonable. I think that someone who had thoroughly studied the theory and all credible counterarguments against it, had long discussions about it with experts who disagreed, etc., could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't IMO reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.
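
(To put a rough number on how strong a 98% prior is -- this is just a standard Bayes' rule calculation, not anything from the post: prior odds of $0.98 : 0.02 = 49 : 1$ mean that for contrary evidence to merely bring you down to 50/50, it would have to be 49 times likelier under the alternative hypothesis,

    \frac{P(H \mid D)}{P(\neg H \mid D)} = \frac{0.98}{0.02} \cdot \frac{P(D \mid H)}{P(D \mid \neg H)} = 1 \iff \frac{P(D \mid \neg H)}{P(D \mid H)} = 49,

and that's the sort of confidence I don't think an amateur's study can justify.)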

Even in an extremely empirically grounded and verifiable theory like physics, for much of the history of the field, the dominant theoretical framework has had significant omissions or blind spots that would occasionally lead to faulty results when applied to areas that were previously unknown. Economic theory is much less reliable. I think you're correct to highlight that economic data can be unreliable too, and it's certainly true that many people overestimate the size of Bayesian updates based on shaky data, and should perhaps stick to their priors more. But let's not kid ourselves about how good our cutting edge of theoretical understanding is in fields like economics and medicine – and let's not kid ourselves that nonspecialist amateurs can reach even that level of accuracy.

Comment by Ben Millwood (BenMillwood) on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T16:18:16.889Z · EA · GW

I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, expresses themselves creatively, etc. -- all of these things make for a better world. (That said, I think that improving the lives of existing people is currently a better way to achieve that than creating more -- but I wouldn't say that creating more is wrong).

Moreover, I think this post misses the instrumental value of people, too. To understand the all-inclusive impact of an additional person on the environment, you surely have to also consider the chance that they become a climate researcher or activist, or a politician, or a worker in a related technical field; or even more indirectly, that they contribute to the social and economic environment that supports people who do those things. For sure, that social and economic environment supports climate damage as well, but deciding how these factors weigh up means (it seems to me) deciding whether human social and technological progress is good or bad for climate change, and that seems like a really tricky question, never mind all the other things it's good or bad for.

Comment by Ben Millwood (BenMillwood) on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T13:53:36.829Z · EA · GW

The only place where births per woman are not close to 2 is sub-saharan Africa. Thus, the only place where family planning could reduce emissions is sub-saharan Africa, which is currently a tiny fraction of emissions.

This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don't dispute the substance of the argument: it seems relatively difficult to claim that there's a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.

Comment by Ben Millwood (BenMillwood) on Deference for Bayesians · 2021-02-14T13:41:40.595Z · EA · GW

I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.

On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.

On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel like you resolved the conflict: someone who believed the evidence proved the priors wrong won't find anything in your examples to change their minds. For drinking during pregnancy, I'm not even really convinced there is a conflict: I suspect the heart of the matter is what people mean by "safe", what risks or harms are small enough to be ignored.

I think in general there are for sure some cases where priors should be given more weight than they're currently afforded. But it also seems like there are often cases where intuitions are bad, where "it's more complicated than that" tends to dominate, where there are always more considerations or open uncertainties than one can adequately navigate on priors alone. I don't think this post helps me understand how to distinguish between those cases.

Comment by Ben Millwood (BenMillwood) on Where I Am Donating in 2016 · 2021-02-14T01:17:13.745Z · EA · GW

I don't know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)

Comment by Ben Millwood (BenMillwood) on BenMillwood's Shortform · 2020-10-23T16:08:56.754Z · EA · GW

Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P

For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- invest in tobacco companies as an anti-smoking campaigner, invest in coal industry as a climate change campaigner, etc. The idea being that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)
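
If I did think about it for a bit, a toy comparison might look something like the following (all numbers invented): hold the harmful industry ("mission hedge") or bet against it, in a world where your money is worth more precisely when the industry booms.

    portfolio = 100.0

    scenarios = {
        # name: (industry return, marginal value of a campaigning dollar in that world)
        "industry booms":    (+0.5, 2.0),   # countermeasures badly needed
        "industry declines": (-0.5, 0.5),   # the problem is partly solving itself
    }

    strategies = {
        "mission hedge (long the industry)": +1.0,   # direction of exposure to the industry
        "divest / short the industry":       -1.0,
    }

    for strategy, exposure in strategies.items():
        for scenario, (industry_return, value_per_dollar) in scenarios.items():
            budget = portfolio * (1 + exposure * industry_return)
            impact = budget * value_per_dollar
            print(f"{strategy:36s} | {scenario:17s} | budget {budget:6.1f} | impact {impact:6.1f}")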

Comment by Ben Millwood (BenMillwood) on BenMillwood's Shortform · 2020-10-23T16:04:24.347Z · EA · GW

I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that a particular industry is stronger / weaker than the market perceives it to be is surely fine; that's basically what active investors do, right?)

(Some people believe the market is efficient even with respect to private information. I don't understand those people.)

However, I have my own counterargument, which is that the "conflict of interest" claim seems just kind of confused in the first place. If you hear someone criticizing a company, and you know that they have shorted the company, should that make you believe the criticism more or less? Taking the short position as some kind of fixed background information, it clearly skews incentives. But the short position isn't just a fixed fact of life: it is itself evidence about the critic's true beliefs. The critic chose to short and criticize this company and not another one. I claim the short position is a sign that they do truly believe the company is bad. (Or at least that it can be made to look bad, but it's easiest to make a company look bad if it actually is.) In the case where the critic does not have a short position, it's almost tempting to ask why not, and wonder whether it's evidence they secretly don't believe what they're saying.

All that said, I agree that none of this matters from a PR point of view. The public perception (as I perceive it) is that to short a company is to vandalize it, basically, and probably approximately all short-selling is suspicious / unethical.

Comment by Ben Millwood (BenMillwood) on Objections to Value-Alignment between Effective Altruists · 2020-07-19T14:34:12.726Z · EA · GW

Here are a couple of interpretations of value alignment:

  • A pretty tame interpretation of "value-aligned" is "also wants to do good using reason and evidence". In this sense, distinguishing between value-aligned and non-aligned hires is basically distinguishing between people who are motivated by the cause and people who are motivated by the salary or the prestige or similar. It seems relatively uncontroversial that you'd want to care about this kind of alignment, and I don't think it reduces our capacity for dissent: indeed people are only really motivated to tell you what's wrong with your plan to do good if they care about doing good in the first place. I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation". I'd be interested in whether you agree.
  • Another (potentially very specific and constraining) interpretation of "value alignment" that I understand people to be talking about when they're hiring for EA roles is "I can give this person a lot of autonomy and they'll still produce results that I think are good". This recommends people who essentially have the same goals and methods as you, right down to the way those goals and methods affect decisions about how to do the job. Hiring people like that means that you tax your management capacity comparatively less and don't need to worry so much about incentive design. To the extent that this is a big focus in EA hiring, it could be because we have a deficit of management capacity and/or it's difficult to effectively manage EA work. It certainly seems like EA research is often comparatively exploratory / preliminary and therefore underspecified, and so it's very difficult to delegate work on it except to people who are already in a similar place to you on the matter.