comment by Paul_Christiano · 2021-02-24T16:37:48.881Z
These cases are also relevant to alignment agreements between AI labs, and it's interesting to see this dynamic playing out in practice. Cullen wrote about this here much better than I will.
Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized), then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.
On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurts non-customer humans, and AI customers care more about other humans than they do about chickens, (ii) deploying unaligned AI actually likely hurts other AI customers in particular (since they will be the main ones competing with the unaligned but more sophisticated AI). So it's likely that every individual AI customer would benefit.
Unfortunately, it seems like the same thing could be true in the chicken case---every individual customer could prefer the world with the welfare agreement---and it wouldn't change the regulator's decision.
For example, suppose that Dutch consumers eat 100 million chickens a year, 10/year for each of 10 million customers. Customer surveys discover that customers would only be willing to pay $0.01 for a chicken to have more space and a slightly longer life, but that these reforms increase chicken prices by $1. So they strike down the reform.
But with welfare standards in place, each customer pays an extra $10/year for chicken and 100 million chickens have improved lives, at an effective cost of about $0.0000001/chicken, a hundred thousand times lower than their WTP. (This is the same dynamic described here.) So every chicken consumer prefers the world where the standards are in place, despite not being willing to pay money to improve the lives of the tiny number of chickens they eat personally. This seems to be a very common reaction to discussions of animal welfare ("what difference does my consumption make? I can't change the way most chickens are treated...")
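The arithmetic above can be checked with a short sketch (all figures are the hypothetical numbers from the example, not data from the actual Dutch case):

```python
# Hypothetical numbers from the example above.
customers = 10_000_000          # Dutch chicken consumers
chickens_per_customer = 10      # chickens eaten per customer per year
total_chickens = customers * chickens_per_customer  # 100 million/year

wtp_per_chicken = 0.01   # what a customer would pay for one chicken's welfare
price_increase = 1.00    # actual cost of the reform, per chicken

# Each customer's own bill goes up by $10/year...
extra_cost_per_customer = chickens_per_customer * price_increase

# ...but that payment coincides with improved lives for ALL 100M chickens,
# so the effective cost per improved chicken-life is tiny.
effective_cost = extra_cost_per_customer / total_chickens

print(f"extra cost per customer: ${extra_cost_per_customer:.2f}/year")
print(f"effective cost per chicken: ${effective_cost:.7f}")
print(f"stated WTP is {wtp_per_chicken / effective_cost:,.0f}x the effective cost")
</imports>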
Because the number of chicken-eaters is so large, the relevant question in the survey should be "Would you prefer that someone else pay $X in order to improve chicken welfare?", making a tradeoff between two strangers. That's the relevant question for them, since the welfare standards mostly affect other people.
Analogously, if you ask AI consumers "Would you prefer to have an aligned AI, or a slightly more sophisticated unaligned AI?" they could easily all say "I want the more sophisticated one," even if every single human would be better off if there were an agreement to make only aligned AI. If an anti-trust regulator used the same standard as in this case, it seems like they would throw out an alignment agreement because of that, even knowing that it would make every single human worse off.
I still think in practice AI alignment agreements would be fine for a variety of reasons. For example, I think if you ran a customer survey it's likely people would say they prefer to use aligned AI even if it would disadvantage them personally, because public sentiment towards AI is very different and the regulatory impulse is stronger. (Though I find it hard to believe that anything would end up hinging on such a survey, and even more strongly I think it would never come to this because there would be much less political pressure to enforce anti-trust.)
↑ comment by Tsunayoshi · 2021-02-24T22:32:42.788Z
I think you might have an incorrect impression of the ruling. The agreement was not struck down only because consumers seemed unwilling to pay for it: the ACM also decided (on top (!) of the missing willingness to pay) that, by the nature of the improvements, the agreement did not benefit consumers (clearly, most of the benefit goes to the chickens).
From the link: "In order to qualify for an exemption from the prohibition on cartels under the Dutch competition regime it is necessary that the benefits passed on to the consumers exceed the harm inflicted upon them under agreements."
↑ comment by Paul_Christiano · 2021-02-25T17:41:26.548Z
Is your impression that if customers were willing to pay for it, then that wouldn't be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children's food doesn't cause discomfort also can't be protected, since it benefits customers' kids rather than customers themselves?)
↑ comment by Tsunayoshi · 2021-02-25T19:39:06.369Z
No, my impression is that willingness to pay is a sufficient but not necessary condition to conclude that an industry standard benefits customers. A different sufficient condition would be an assessment of the effects of the standard by the regulators in terms of welfare. I assume that is the reason why the regulators in this case carried out an analysis of the welfare benefits, because why even do so if willingness-to-pay is the only factor?
More speculatively, I would guess that Dutch regulators also take into account welfare improvements to other humans, and would not strike down an industry standard for safe food (if the standard actually contributed to safety).
↑ comment by Cullen_OKeefe · 2021-02-28T00:23:27.467Z
I haven't read the case, but under US antitrust law this case would have the same result. The reasoning would be that individual consumers' WTP for animal welfare improvements is a benefit to them, but that benefit can be realized without the anticompetitive harms of raised costs and reduced variety: namely, welfare-conscious consumers can pay more for chicken raised in better conditions, while welfare-indifferent consumers still have the option to buy cheaper chickens. The discussion of "welfare" as such would therefore be a bit misleading in the US context; it's shorthand for the maximal consumer surplus in competitive market conditions, not the type of felicific calculus EAs often do.
↑ comment by Cullen_OKeefe · 2021-02-28T00:16:52.615Z
"I would guess that Dutch regulators also take account welfare improvements to other humans, and would not strike down an industry standard for safe food (if the standard actually contributed to safety)."
I wouldn't be so sure. US antitrust authorities have repeatedly struck down pro-safety anticompetitive agreements on the justification that consumers should generally be allowed to make their own price-safety tradeoffs. See my paper that Paul linked. Of course, Dutch antitrust authorities may see it differently, but European and US antitrust analyses are usually pretty harmonized.
↑ comment by Ramiro · 2021-03-19T22:02:31.438Z
Sorry if this is a lame question, but do you think that regulations and standards on ESG that explicitly mention animal welfare - something more like soft law, or "comply or explain", e.g., "companies must disclose animal welfare policies", or "social and environmental risks include losses due to... animal cruelty" - could be enough to start a change in US antitrust law interpretation on blacklisting products out of animal welfare concerns?