BenMillwood's Shortform

post by BenMillwood · 2019-08-29T17:31:56.643Z · EA · GW · 12 comments

comment by BenMillwood · 2019-08-29T17:31:56.773Z · EA(p) · GW(p)

Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.

  • Consider who doesn't benefit from your article, and if you can help them filter themselves out.
  • Consider how people might skim-read your article, and how to help them derive value from it.
  • Lead with the punchline – see if you can make the most important sentence in your article the first one.
  • Some information might be clearer in a non-discursive structure (like… bullet points, I guess).

Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you're both working on.

comment by JP Addison (jpaddison) · 2019-08-30T02:24:42.495Z · EA(p) · GW(p)

Agree that there's a different incentive for cooperative writing than for clickbait-y news in particular. And I agree with your recommendations. That said, I think many community writers may undervalue making their content more goddamn readable. Scott Alexander is verbose and often spends paragraphs getting to the start of his point, but I end up with a better understanding of what he's saying, by virtue of being kept fully interested.

All in all though, I'd recommend people try to write like Paul Graham more than either Scott Alexander or an internal memo. He is in general more concise than Scott and more interesting than a memo.

He has several essays about how he writes.

Writing, Briefly — Laundry list of tips

Write like you talk

The Age of the Essay — History of the essays we write in school versus the essays that are useful

A Version 1.0 — "The Age of the Essay" in rough draft form, with color coding showing whether each part was kept

comment by BenMillwood · 2020-07-08T05:00:40.820Z · EA(p) · GW(p)

Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease apart the bet from the outcome and probabilities you actually wanted to discuss. Here are some circumstances in which it can be a lot more difficult to get the outcomes you want from a bet:

  • When the value of money changes depending on the different outcomes,
  • When the likelihood of people being able or willing to pay out on bets changes under the different outcomes.

As an example, I saw someone claim that the US was facing civil war. Someone else thought this was extremely unlikely, and offered to bet on it. You can't make bets on this! The value of the payout varies wildly depending on the exact scenario (are dollars lifesaving or worthless?), and more to the point, in a civil war the last thing on anyone's mind will be internet bets with strangers.

In general, you can't make bets about major catastrophes (leaving aside the question of whether you'd want to), and even with non-catastrophic geopolitical events, the bet you're making may not be the one you intended to make, if the value of money depends on the result.
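
To make the first point concrete, here's a toy calculation (all numbers invented for illustration): if a dollar would be worth less to you in the outcome you're betting on, the odds at which a bet breaks even no longer match your actual credence.

```python
def breakeven_probability(stake, payout, value_if_event, value_if_no_event):
    """Probability of the event at which betting on it has zero expected
    value in 'real' terms, when a dollar is worth a different amount to
    you depending on which outcome occurs."""
    # Solve: p * payout * value_if_event = (1 - p) * stake * value_if_no_event
    return (stake * value_if_no_event) / (
        payout * value_if_event + stake * value_if_no_event
    )

# Even-money $100 bet. If dollars keep their value either way, betting is
# worthwhile exactly when your credence exceeds 50%.
print(breakeven_probability(100, 100, 1.0, 1.0))  # 0.5

# If dollars would be worth half as much in the catastrophe you're betting
# on, you need ~67% credence before the same even-money bet breaks even.
print(breakeven_probability(100, 100, 0.5, 1.0))  # ~0.667
```

So even when both parties are happy with the nominal odds, the bet implied by those odds isn't the one either of them thinks they're making.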

A related idea is that you can't sell (or buy) insurance against scenarios in which insurance contracts don't pay out, including most civilizational catastrophes, which can make it harder to use traditional market methods to capture the potential gains from (say) averting nuclear war. (Not impossible, but harder!)

comment by HaukeHillebrandt · 2020-07-10T10:53:35.818Z · EA(p) · GW(p)

Also see:

https://marginalrevolution.com/marginalrevolution/2017/08/can-short-apocalypse.html

comment by meerpirat · 2020-07-10T15:34:39.519Z · EA(p) · GW(p)

After reading this I thought that a natural next step for the self-interested rational actor that wants to short nuclear war would be to invest in efforts to reduce its likelihood, no? Then one might simply look at the yearly donation numbers of a pool of such efforts.

comment by HaukeHillebrandt · 2020-07-12T10:33:15.124Z · EA(p) · GW(p)

Yes, this is a general strategy for a philanthropist who wants to recoup some of their philanthropic investment:

1. Short harmful industry/company X (e.g. tobacco/Philip Morris, meat / Tyson)

2. Then lobby against this industry (e.g. fund a think tank that lobbies for tobacco taxes in a market that the company is very exposed to).

3. Profit from the short to get a discount on your philanthropic investment.

Contrary to what many people intuit, this is perfectly legal in many jurisdictions (this is not legal or investment advice though).

comment by Stefan_Schubert · 2020-07-12T11:56:58.367Z · EA(p) · GW(p)

Even if it's legal, some people may think it's unethical to lobby against an industry that you've shorted.

It could provide that industry with an argument to undermine the arguments against them. They might claim that their critics have ulterior motives.

comment by HaukeHillebrandt · 2020-07-13T13:41:46.638Z · EA(p) · GW(p)

This is an excellent point, I agree. You're absolutely right that they could argue that, and that reputational risks should be considered before such a strategy is adopted. And even though it is perfectly legal to lobby for your own positions / stock, lobbying for shorts is usually more morally laden in the eyes of the public (there is in fact evidence that people react very strongly to this).

However, I think if someone were to mount the criticism of having ulterior motives, then there is a counterargument to show that this criticism is ultimately misguided:

If the market is efficient, then the valuation of an industry will already price in any risks that could easily be created through lobbying. In other words, if the high valuation of Big Tobacco depended on nobody running a relatively cheap lobbying campaign for tobacco taxes, then shorting it would make sense even for socially neutral investors with no altruistic motives - and thus it should already have been done.

Thus, this strategy would only work for a truly altruistic agent, who will ultimately lose money in the process and only get a discount on their philanthropic investment. In other words, the investment in the lobbying will likely be higher than the profit from the short. And so it would be invalid to say that someone using this strategy has ulterior motives. But again, I take your point that this subtle point might get lost, and it could end up being a PR disaster.
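
To see why this is a discount rather than a profit, a toy calculation (all numbers hypothetical) helps:

```python
def net_cost_of_campaign(lobbying_cost, short_notional, expected_price_drop):
    """Out-of-pocket cost of an advocacy campaign partly financed by
    shorting the targeted industry. A positive result means the altruist
    still loses money overall: the short is a discount, not a profit."""
    short_profit = short_notional * expected_price_drop
    return lobbying_cost - short_profit

# A $1m lobbying campaign plus a $5m short, where the campaign is expected
# to knock 10% off the industry's price: half the cost is recouped.
print(net_cost_of_campaign(1_000_000, 5_000_000, 0.10))  # ~500,000
```

If the net cost came out negative (a genuine profit), the efficient-market argument above says a purely selfish investor would already have done it.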

comment by BenMillwood · 2020-10-23T16:04:24.347Z · EA(p) · GW(p)

I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that a particular industry is stronger / weaker than the market perceives it to be is surely fine; that's basically what active investors do, right?)

(Some people believe the market is efficient even with respect to private information. I don't understand those people.)

However, I have my own counterargument, which is that the "conflict of interest" claim seems just kind of confused in the first place. If you hear someone criticizing a company, and you know that they have shorted the company, should that make you believe the criticism more or less? Taking the short position as some kind of fixed background information, it clearly skews incentives. But the short position isn't just a fixed fact of life: it is itself evidence about the critic's true beliefs. The critic chose to short and criticize this company and not another one. I claim the short position is a sign that they do truly believe the company is bad. (Or at least that it can be made to look bad, but it's easiest to make a company look bad if it actually is.) In the case where the critic does not have a short position, it's almost tempting to ask why not, and wonder whether it's evidence they secretly don't believe what they're saying.

All that said, I agree that none of this matters from a PR point of view. The public perception (as I perceive it) is that to short a company is to vandalize it, basically, and probably approximately all short-selling is suspicious / unethical.

comment by HaukeHillebrandt · 2020-10-24T08:20:55.891Z · EA(p) · GW(p)

it's possible to make money this way

Agreed, but I don't think there's a big market inefficiency here with risk-adjusted above market rate returns. Of course, if you do research to create private information then there should be a return to that research.

Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. [...]

True, but I've heard that in the U.S., normally, if I lobby for an outcome and short the stock about which I am lobbying, I have not violated any law unless I am a fiduciary or agent of the company in question. Also see https://www.forbes.com/sites/realspin/2014/04/24/its-perfectly-fine-for-herbalife-short-sellers-to-lobby-the-government/#95b274610256

I have my own counterargument

I really like this, but...

it can be made to look bad

This seems to be why people have a knee jerk reaction against it.

comment by BenMillwood · 2020-10-23T16:08:56.754Z · EA(p) · GW(p)

Hmm, I was going to mention mission hedging [EA · GW] as the flipside of this, but then noticed the first reference I found was written by you :P

For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- invest in tobacco companies as an anti-smoking campaigner, invest in coal industry as a climate change campaigner, etc. The idea being that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.
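
A toy sketch (all numbers hypothetical) of why the hedge is attractive: the portfolio delivers extra funding in exactly the worlds where countermeasures are most needed.

```python
def campaign_budget(base_budget, stake, industry_return):
    """Funds available to a campaigner after marking a mission-hedge
    position to market: a stake invested in the very industry being
    campaigned against."""
    return base_budget + stake * (1 + industry_return)

# $100k base budget with $50k hedged in the harmful industry.
print(campaign_budget(100_000, 50_000, 0.40))   # ~170,000: industry booms, more firepower
print(campaign_budget(100_000, 50_000, -0.30))  # ~135,000: industry shrinks, less need
```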

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)

comment by HaukeHillebrandt · 2020-10-24T08:37:31.518Z · EA(p) · GW(p)

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)

I think these strategies can actually be combined:

A patient philanthropist sets up their endowment according to mission hedging principles.

For instance, someone wanting to hedge against AI risk could invest in a (leveraged) AI FAANG+ ETF (https://c5f7b13c-075d-4d98-a100-59dd831bd417.filesusr.com/ugd/c95fca_c71a831d5c7643a7b28a7ba7367a3ab3.pdf); then, when AI seems more capable and risky and the market is up, they could sell, buy shorts, and donate the appreciated assets to fund advocacy to regulate AI.

I think this might work better for bigger donors.

Things like this got me thinking: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley

“We can push the odds of victory up significantly—from 23% to 35-55%—by blitzing the airwaves in the final two weeks.”

https://www.predictit.org/markets/detail/6788/Which-party-will-win-the-US-Senate-election-in-Texas-in-2020