Posts

Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It 2022-03-13T04:37:50.791Z
The S&P 500 Will Drop Below 3029 Before July 16 (65 percent confidence) 2021-10-10T04:26:45.300Z
Should Grants Fund EA Projects Retrospectively? 2021-09-08T02:58:00.053Z
Logical Foundations of Government Policy 2020-10-10T16:11:16.605Z
Maximizing the Long-Run Returns of Retirement Savings 2020-07-07T08:52:48.271Z
How to Fix Private Prisons and Immigration 2020-06-19T10:40:58.609Z
Two Requirements for Any Welfare Payment 2020-06-17T13:07:03.935Z

Comments

Comment by FCCC on How to Fix Private Prisons and Immigration · 2022-04-25T06:43:00.855Z · EA · GW

I would assume that, for a private prison that has become good at its business, the benefits of more inmates would outweigh the liabilities, and that at some point it would (in principle, ignoring the free rider problem for a moment) become easier to increase profits by increasing revenue, i.e. by making more things illegal, than by trying to reduce the reoffending rate.

Ignoring the free-rider problem ("problem" being from the perspective of the prison), as the prison gets more and more current/former inmates, it becomes harder for that cost-benefit calculation to make sense. With no change in the law or the performance of the prison, the prison's liabilities will grow until the point at which the current/former inmates who die are as numerous as incoming inmates. So for lobbying to make financial sense, it would probably have to occur soon after the prison is started or soon after the system is implemented. But that time is also when the prison has the least information about its own competence (in terms of rehabilitation and auction pricing).

Also do administrators profit from more crimes in a public system? It of course increases the demand for administrators, but I don't see how it would increase the salary of a significant number of them.

Not really, but that's beside the point. The point is that they don't benefit from rehabilitating their inmates. They don't benefit from firing abusive guards. They don't benefit from reading the latest literature on inmate rehabilitation and creating policies that reduce the chance of their inmates re-offending.

Do insurance contracts typically contain clauses for future “products”? I would have assumed that the insurance of the prison would only cover the damages defined at the point in time the contract was formed.

I don’t know much about insurance, but I think you can write pretty much whatever contract you like, as long as no laws are broken.

Comment by FCCC on Against the "smarts fetish" · 2022-04-10T19:47:35.245Z · EA · GW

The “Planck principle” seems more applicable to scientists who are strongly invested in a given hypothesis

Yep, that’s why I referred to your 2nd and 3rd traits: A better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory).

I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.

Comment by FCCC on Against the "smarts fetish" · 2022-04-10T18:18:27.391Z · EA · GW

I think you have to be smart to have all the OP’s listed traits, so sure, there’s going to be a correlation. But what’s the phrase? “Science advances one funeral at a time.” If that’s true, then there are plenty of geniuses who can’t bring themselves to admit when someone else has a better theory. That would show that traits 2 and 3 are commonly lacking in smart people, which yes, makes those people dumber than they otherwise would be, but they’re still smart.

Comment by FCCC on Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It · 2022-04-09T03:00:10.675Z · EA · GW

Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things even clearer for me. Thanks for the link!

Comment by FCCC on Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It · 2022-03-13T23:34:27.228Z · EA · GW

Yep, I agree.

Maybe I should have gone into why everyone puts anecdotes at the bottom of the evidence hierarchy. I don't disagree that they belong there, especially if all else between the study types is equal. And even if the studies are quite different, the hierarchy is a decent rule of thumb. But it becomes a problem when people use it to disregard strong anecdotes and take weak RCTs as truth.

Comment by FCCC on "Should have been hired" Prizes · 2022-01-22T02:04:13.298Z · EA · GW

One big change that a lot of employers can make is changing their interviews and written tests.

I’ve been required to create a new policy from scratch in interview settings. “Okay now you should come up with an idea on the spot, and you will need to say why this policy should now be a legal requirement of every person in the country.” It’s exactly that type of surface-level thinking that policymakers should avoid.

You should be allowed to bring work that you’ve already produced into the interview and the written application. It’s far more reflective of the work you will do, because it is the work that you’ve done. Plans for future policy writing mean nothing, because there very well could be some technical reason why your nascent policy idea is fundamentally flawed.

Comment by FCCC on The S&P 500 Will Drop Below 3029 Before July 16 (65 percent confidence) · 2021-10-11T09:00:24.732Z · EA · GW

(One of my comments from LessWrong)

If we were to see inflation going back to levels expected by the Fed (2-3% I suppose?) how would that change your forecast?

Great question. So my view is that there could be a few potential triggers for a sell-off cascade (via some combination of margin calls and panic selling), leading to a large drop. There are also a few triggers for increasing interest rates, not just inflation: the Fed doesn’t have a monopoly on rates. When they buy fewer bonds, they shift the demand curve left, decreasing the price, leading to higher effective interest rates. I’m kind of baffled that they speak about “tapering” as if it’s possible to do so without increasing interest rates.

The particular problem with persistent inflation is that the Fed is less able to increase the cash supply in the event of a large crash. So while I think that inflation isn’t necessary for a 30 percent drop (I’d say it’s over half of my credence), I expect it to magnify the downside if it is higher than normal right before a crash.

Interestingly, the Fed itself was (and probably still is) concerned about the current high valuations.

When you wrote “The main thing I’m worried about is increased savings” did you mean what you described in the previous paragraph (e.g. zero-NPV assets investing and alike), or was it something else?

When I say zero-NPV assets, I mean anything that doesn’t pay out future cash flows to investors, like gold, silver, bitcoin, and NFTs. Certain stocks are being traded as if they were these assets too (AMC, GameStop). I think investment in these things is indicative of mania.

I’m worried that the Fed has flooded the market with so much cash that the new normal for the CAPE ratio and PS ratio is close to what they are now. If it is, then margin-debt-to-GDP isn’t the relevant ratio anymore; margin-debt-to-total-market-cap is, which is not at as high a level as margin-debt-to-GDP. Basically, supposing we have a smooth exponential curve for the S&P 500, I’m worried about a one-off discontinuity in the graph. I’m also worried about people investing more of their income and net worth, which would have the same effect.

Comment by FCCC on The S&P 500 Will Drop Below 3029 Before July 16 (65 percent confidence) · 2021-10-10T11:29:34.807Z · EA · GW

I’m thinking that you might be able to bet against experienced bettors who think that you’re the victim of confirmation bias (which you might be)

I’d say I’m neutral (though so would anyone who has confirmation bias). I’ve given reasons why these indicators may have lost their predictive value. My main concern is increased savings (and investment of those savings). But hey, we don’t get better at prediction until we actually make predictions.

I’m just looking for market odds. I’d prefer to read the other side that you mention before I size my bets, but I do listen to Chairman Powell’s reasoning every time he gives it, watch Bloomberg’s videos, and have listened to Buffett explain why he’s still holding stocks (low interest rates). Let me know if I should be listening to something else. I’m very happy to read the other side if someone is giving their credences.

I’m not quite sure about my bet structure. I’ve got my probability distribution, and I want to bet it against the market’s implied probability distribution in such a way that I’m maximizing long-run growth. Not sure how to do that other than run simulations. If there’s a formula, please let me know.
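
Since a formula was requested: for a single binary bet there is one, the Kelly criterion, which picks the stake that maximizes expected log wealth, i.e. long-run growth. A minimal sketch, with illustrative numbers rather than anyone's actual credences:

```python
# Kelly criterion for a binary bet: stake the fraction of bankroll that
# maximizes E[log(wealth)]. The numbers below are purely illustrative.

def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Optimal fraction of bankroll to stake.

    p_win:    subjective probability of winning
    net_odds: net payout per dollar staked (1.0 for an even-money bet)
    """
    return max(0.0, (net_odds * p_win - (1.0 - p_win)) / net_odds)

# Example: 65% subjective probability against an even-money market price.
print(kelly_fraction(0.65, 1.0))  # 0.30 -> stake 30% of bankroll
```

For a portfolio of bets against a whole implied distribution there is generally no closed form, so the simulation instinct is right; the formula above only covers the simple case.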

Comment by FCCC on [deleted post] 2021-09-26T02:30:41.420Z

That’s a fair question. Culture is extremely important (e.g. certain cultural norms facilitate corruption and cronyism, which leads to slower annual increases in quality of life indices), but whether cancelling, specifically, is a big problem, I’m not sure.

Government demonstrably changes culture. At a minor level, drink-driving laws and advertising campaigns have changed something that was a cultural norm into a serious crime. At a broader level, you have things like communist governments making religion illegal and creating a culture where everyone snitches on everyone else to the police.

If we can influence government policy, which I think we can, we can influence culture. It’s probably much easier when most people aren’t questioning a norm (drink-driving, again, being a good example), but I think you’re right in this case: Since cancelling is fairly common to talk about, it’s probably much harder to change the general discourse (and the laws).

Comment by FCCC on [deleted post] 2021-09-26T02:02:08.900Z

Thanks for the link; I should read Overcoming Bias more. I liked Hanson’s Futarchy idea, specifically the idea of replacing the Fed with financial instruments (which I can no longer seem to find anywhere). (Though I think the idea of tying returns of a policy’s implementation to GDP+ is doomed for several technical reasons, including getting stuck at local maxima and a good policy choice being a losing bet because of unrelated policy failures). I think he probably influenced my prison and immigration idea, and really my whole methodology (along with Alvin Roth’s Who Gets What and Why).

For a few of the reasons you outlined, I wrote, “[people would also be fined] for the offence of participating in an online pile on”. Which, quite possibly, is not technically feasible (at least not without requiring the major platforms to verify real-life identity). But making pile-ons illegal doesn’t fix your last point (i.e. how to agree upon the rules especially without throwing mud at each other).

I don’t think expansive laws will ever solve the whole problem. Something like adultery, for instance, is seen almost universally as morally wrong, but there's no fine associated with it (in fact, there can actually be a financial benefit if you're not the main breadwinner).

But yes, I do not think making laws more expansive is a good solution at all. I’m trying to signal my level of confidence by separating ideas into posts (proposals that I’ve thought about a lot and considered many alternatives) and questions (proposals that I’ve just loosely considered, and I’m asking for better alternatives).

Comment by FCCC on Should Grants Fund EA Projects Retrospectively? · 2021-09-09T03:42:17.752Z · EA · GW

Well now I'm definitely glad I wrote "is not a new idea". I didn't know so many people had discussed similar proposals. Thank you all for the reading material. It'll be interesting to hear some downsides to funding retrospectively.

I mentioned the Future of Life Institute which, for those who haven't checked it out yet, does the "Future of Life" award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven't listened to in a while but, when I was listening, they had some really interesting discussions.

Comment by FCCC on Politics is far too meta · 2021-03-20T08:49:49.668Z · EA · GW

It's not that any criticism is bad, it's that people who agree with an idea (when political considerations are ignored) are snuffing it out based on questionable predictions of political feasibility. I just don't think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?

Rather than the only disagreement being political feasibility, I would actually prefer someone to be against a policy and criticise it based on something more substantive (like pointing out an error in my reasoning). Criticism needs to come in the right order for any policy discussion to be productive. Maybe this:

  1. Given the assumptions of the argument, does the policy satisfy its specified goals?
  2. Are the goals of the policy good? Is there a better goalset we could satisfy?
  3. Is the policy technically feasible to implement? (Are the assumptions of the initial argument reasonable? Can we steelman the argument to make a similar conclusion from better assumptions?)
  4. Is the policy politically feasible to implement?

I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas. And if someone does give a prediction on political feasibility, they should either show that they do produce good predictions on such things, or significantly lower their confidence in their political feasibility claims.

Comment by FCCC on Politics is far too meta · 2021-03-18T22:05:24.983Z · EA · GW

saying that it's unfeasible will tend to make it more unfeasible

Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people are for their infeasibility predictions: Take a list of policies that passed, a list that failed to pass, mix them together, and see how much better you can unscramble them than random chance. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.

Poor meta-arguments I've noticed on the Forum:

  • Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because "I probably got 100 points, because that's the average.")
  • Bringing up common knowledge, i.e. things that are true, but everyone in the conversation already knows and applies that information. (E.g. "Logical arguments can be wrong in subtle ways, so just because your argument looks airtight, doesn't mean it is". A much better contribution is to actually point out the weaknesses in the specific argument that's in front of you.)
  • And, as you say, predictions of infeasibility.

Comment by FCCC on Good v. Optimal Futures · 2020-12-12T17:45:08.282Z · EA · GW

within some small number ε

In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous, because it falls right into a slippery slope (if ε doesn't make a real difference, what about drawing the line at 2ε, and then what about 3ε?).

But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. going for only ideal outcomes) makes sense when it comes to designing long-lasting institutions, since the small (but non-infinitesimal) differences add up across many people and over a long time.

Comment by FCCC on Good v. Optimal Futures · 2020-12-12T07:22:23.116Z · EA · GW

I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-10T07:36:52.213Z · EA · GW

Agreed, but at least in theory, a model that takes into account inmate's welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take into account inmate welfare.

What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore the inmate's wellbeing, and the prisons' bids would drop to account for any extra cost to support the inmate's wellbeing or loss to societal contribution. That's what I was trying to do by saying the goal was to "maximize the total societal contribution of any given set of inmates within the limits of the law". There definitely should be limits on how a prison can treat its inmates, even if it were to serve the rest of society's interests.

But the more I think about it, the more I like the idea of having the inmate's welfare as part of the funding function. It would avoid having to go through the process of developing the right laws to make the prison system function as intended, and it's better at self-correcting when compared to laws (i.e. the prisons that are better at supporting inmate welfare will outcompete the prisons that are bad at it). And it would probably reduce the number of people who think that supporters of this policy change don't care about what happens to inmates, which is nice.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-08T00:46:51.987Z · EA · GW

That's a good point. You could set up the system so that it's "societal contribution" + funding - price (which is what it is at the moment) + "Convict's QALYs in dollars" (maybe plus some other stuff too). The fact that you have to value a murder means that you should already have the numbers to do the dollar conversion of the QALYs.
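
Written out, that amended funding function would look something like this (the symbols, including the per-QALY dollar value λ, are my own hypothetical notation, not anything from the post):

```latex
% S = inmate's societal contribution, F = government funding,
% P = the prison's winning bid, Q = the convict's QALYs,
% \lambda = agreed dollar conversion per QALY (hypothetical notation)
\[
  \text{payment to prison} = S + F - P + \lambda Q
\]
```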

I'm hesitant to make that change though. The change would allow prisons to trade off societal benefit for the inmate's benefit, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the deterrence effect of prisons on would-be offenders, so denying the trade-off is not necessarily an anti-utilitarian stance.

And denying the trade-off doesn't mean the inmate is not looked after either. There's a kind of... "Laffer Curve" equivalent where decreasing inmate wellbeing beyond a certain point necessarily means a reduction in societal contribution (destroying an inmate's mind is not good for their future societal contribution). So inmate wellbeing is not minimized by the system I've described (it's not maximized either).

I'm not 100 percent set on the exact funding function. I might change my mind in the future.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-07T23:02:49.124Z · EA · GW

You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.

But, for this particular system, if a prison is large enough to lobby, then they're going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison has to pay.

One way prisons could avoid this is by paying someone else to take on these liabilities. But, in the contract, this person could ensure the prison pays for compensation for any lobbying that damages them.

So a lobbying prison (1) benefits from more inmates in the future, (2) has to pay the cost of lobbying, and (3) has to pay more for the additional liabilities of their past and current inmates (not for their future inmates though, because the liabilities will be offset by a lower initial price for those inmate contracts). Points 1 and 2 are the same under the current prison system. Point 3 is new, and it should push in the direction of less lobbying, at least once the system has existed for a while.

Comment by FCCC on Lotteries for everything? · 2020-12-06T23:41:15.321Z · EA · GW

There are mechanisms that aggregate distributed knowledge, such as free-market pricing.

I cannot really evaluate the value of a grant if I have not seen all the other grants.

Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).

In addition, if there would be an easy and obvious system people would probably already have implemented it.

Wouldn't the "efficient-policy hypothesis" imply that lotteries are worse than the existing systems? I don't think you really believe this. Are our systems better than most hypothetical systems? Usually, but this doesn't mean there's no low-hanging fruit. There are plenty of good policy ideas that are well known and haven't been implemented, such as 100 percent land-value taxes.

Let's take a subset of the research funding problem: How can we decide what to fund for research about prisoner rehabilitation? I've suggested a mechanism that would do this.

Comment by FCCC on Lotteries for everything? · 2020-12-04T06:11:29.827Z · EA · GW

When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory. People come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion. Some desirable goals cannot be achieved simultaneously (an example of this is Arrow's impossibility theorem).

Lotteries give every ticket an equal chance. And if each person has one ticket, this implies each person has an equal chance. But this goal is in conflict with more important goals. I would guess that lotteries are almost never the best mechanism. Where they improve the situation, it's because the existing mechanism is already bad. But in that case, I'd look further for even better systems.

Comment by FCCC on [deleted post] 2020-10-25T03:14:29.679Z

If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.

Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.

Comment by FCCC on EA's abstract moral epistemology · 2020-10-22T06:18:37.099Z · EA · GW

My idea of EA's essential beliefs are:

  • Some possible timelines are much better than others
  • What "feels" like the best action often won't result in anything close to the best possible timeline
  • In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: your moral rule can tell you to only consider your own actions, and disregard their effects on other people's actions. I could consider such a person to be an effective altruist, even though they'd be a non-consequentialist. While I think it's fair to say that, after the above beliefs, consequentialism is fairly core to EA, I think the whole EA community could switch away from consequentialism without having to rebrand itself.

The critique targets effective altruists’ tendency to focus on single actions and their proximate consequences and, more specifically, to focus on simple interventions that reduce suffering in the short term.

But she also says EA has a "god’s eye moral epistemology". This seems contradictory. Even if we suppose that most EAs focus on proximate consequences, that's not a fundamental failing of the philosophy, it's a failed application of it. If many fail to accurately implement the philosophy, it doesn't imply the philosophy is bad[1]: there's a difference between a "criterion of right" and a "decision procedure". Many EAs are longtermists who essentially use entire timelines as the unit of moral analysis. This clearly is not focused on "proximate consequences". That's more the domain of non-consequentialists (e.g. "Are my actions directly harming anyone?").

The article's an incoherent mess, even ignoring the Communist nonsense at the end.


  1. This is in contrast with a policy being bad because no one can implement it with the desired consequences. ↩︎

Comment by FCCC on Can my self-worth compare to my instrumental value? · 2020-10-11T17:06:27.375Z · EA · GW

It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is to save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.

Comment by FCCC on Can my self-worth compare to my instrumental value? · 2020-10-11T14:59:01.338Z · EA · GW

I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.

So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g. whether people are morally repulsed by certain actions in the current time and place) rather than normative truths. 

I do think having more focus on moral value is beneficial, not just because it's moral, but because it endures. If you help a lot of people, that's something you'll value until you die. Whereas if I put a bunch of my time into playing chess, maybe I'll consider that to be a waste of time at some point in the future. There are other things, like enjoying relationships with your family, that also aren't anywhere close to the most moral thing you could be doing, but that you'll probably continue to value.

You're allowed to value things that aren't about serving the world.

Comment by FCCC on Timeline Utilitarianism · 2020-10-10T16:39:56.983Z · EA · GW

Hey Bob, good post. I've had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with a different formalism:

The trolley problem gives you a choice between two timelines (t1 and t2). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ t1, and “You pull the lever” ∉ t2. Timelines contain statements that are combined as well as statements that are atomized. For example, since “You pull the lever”, “The five live”, and “The one dies” are all elements of t1, you can string these into a larger statement that is also in t1: “You pull the lever, and the five live, and the one dies”. Therefore, each timeline contains a very large statement that uniquely identifies it within any finite subset of the set of all timelines, T. However, timelines won’t be our unit of analysis because the statements they contain have no subjective empirical uncertainty.

This uncertainty can be incorporated by using a probability distribution of timelines, which we’ll call a forecast (f). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: f1 guarantees t1 (the pull-the-lever timeline) and f2 guarantees t2 (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, f1 would contain “The five live with a credence of 1”. So, the trolley problem reveals that you either morally prefer f1 to f2 (denoted as f1 ≻ f2), prefer f2 to f1 (denoted as f2 ≻ f1), or you believe that both forecasts are morally equivalent (denoted as f1 ∼ f2).
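
To make the formalism concrete, here is a toy encoding of the trolley problem in these terms (my own illustration; the statement sets are heavily simplified):

```python
# Timelines as sets of true statements; forecasts as probability
# distributions over timelines.

t1 = frozenset({"You pull the lever", "The five live", "The one dies"})
t2 = frozenset({"You don't pull the lever", "The five die", "The one lives"})

f1 = {t1: 1.0}  # pull-the-lever forecast: guarantees t1
f2 = {t2: 1.0}  # no-action forecast: guarantees t2

def credence(forecast, statement):
    """Probability the forecast assigns to a statement being true."""
    return sum(p for timeline, p in forecast.items() if statement in timeline)

print(credence(f1, "The five live"))  # 1.0
```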

Comment by FCCC on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-19T06:53:03.994Z · EA · GW

I watched those videos you linked. I don't judge you for feeling that way. 

Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals. 

Another thing to keep in mind: When we train particular physical actions, we get better at repeating that action. Athletes sometimes repeat complex, trained actions before they have any time to consciously decide to act. I assume the same thing happens with our emotions: If we feel a particular way repeatedly, we're more likely to feel that way in future, maybe even when it's not warranted.

We can be motivated to do something good for the world in lots of different ways. Helping people by solving problems gives my life meaning and I enjoy doing it. No negative emotions needed.

Comment by FCCC on The case of the missing cause prioritisation research · 2020-08-23T02:55:55.761Z · EA · GW

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be. 

I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:

Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...

Unless by policy you mean "the entirety of what government does", in which case yes. But given that you're going to consider one area at a time, and you're "only including all the levers between which you’re considering", you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily going to be the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than think about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-27T06:36:33.055Z · EA · GW

And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense, and I get the same answer as you for each digit (3.3). If I understand correctly, I had conflated "information" with "value of information".
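
For concreteness, the arithmetic behind the quoted point (standard information theory, nothing beyond the parent comment):

```latex
\[
  \log_2\frac{100}{10} = \log_2\frac{10}{1} = \log_2 10 \approx 3.32\ \text{bits}
\]
% Each factor-of-10 reduction in the number of possibilities carries the
% same ~3.3 bits, regardless of the absolute counts involved.
```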

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-25T17:26:51.081Z · EA · GW

I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + $X, where $X is either $0, $0.1, $0.2, ... or $0.9. (Note $1 is analogous to 1%, and $X is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of $X, given a uniform distribution, is $0.45. Thus, against $1, $X adds almost half the original value, i.e. $0.45/$1 (45%). But what if I instead gave you $99 + $X? $0.45 is less than 1% of the value of $99.

The leftmost digit is more valuable because it corresponds to a greater place value (so the magnitude of the value difference between places is going to be dependent on the numeric base you use). I don't know information theory, so I'm not sure how to calculate the value of the first two digits compared to the third, but I don't think per-thousandths has 50% more information than per-cents.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T08:17:10.610Z · EA · GW

From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

This statement is just incorrect.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T08:07:43.521Z · EA · GW

Does this match your view?

Basically, yeah.

But I do think it's a mistake to update your credence based off someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T07:58:39.329Z · EA · GW

But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.

My definition of an invalid argument contains "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using "invalid" incorrectly here.

And I'd also say that that example meta-argument could sometimes be useful.

Yes, if someone believed that having a logical argument is a guarantee, and they've never had one of their logical arguments have a surprising flaw, it would be valid to point that out. That's fair. But (as you seem to agree with) the best way to do this is to actually point to the flaw in the specific argument they've made. And since most people who are proficient with logic already know that logic arguments can be unsound, it's not useful to reiterate that point to them.

Also, isn't your comment primarily meta-arguments of a somewhat similar nature to "people make logic mistakes so you might have too"?

It is, but as I said, "Some meta-arguments are valid". (I can describe how I delineate between valid and invalid meta-arguments if you wish.)

Describing that as pseudo-superforecasting feels unnecessarily pejorative.

Ah sorry, I didn't mean to offend. If they were superforecasters, their credence alone would update mine. But they're probably not, so I don't understand why they give their credence without a supporting argument.

Did you mean "some ideas that are probably correct and very important"?

The set of things I give 100% credence is very, very small (i.e. claims that are true even if I'm a brain in a vat). I could say "There's probably a table in front of me", which is technically more correct than saying that there definitely is, but it doesn't seem valuable to qualify every statement like that.

Why am I confident in moral uncertainty? People do update their morality over time, which means that they were wrong at some point (i.e. there is demonstrably moral uncertainty), or the definition of "correct" changes and nobody is ever wrong. I think "nobody is ever wrong" is highly unlikely, especially because you can point to logical contradictions in people's moral beliefs (not just unintuitive conclusions). At that point, it's not worth mentioning the uncertainty I have.

I definitely don't think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.

Yeah, I'm too focused on the errors. I'll concede your point: Some proportion of EAs are here because they correctly evaluated the arguments. So they're going to bump up the average, even outside of EA's central ideas. My reference classes here were all the groups that have correct central ideas, and yet are very poor reasoners outside of their domain. My experience with EAs is too limited to support my initial claim.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T08:12:06.341Z · EA · GW

It's almost irrelevant: people should still provide the supporting arguments for their credences, otherwise evidence can get "double counted" (and there are "flow-on" effects where the first person who updates another person's credence has an outsized effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something. And you have arguments A, B and C supporting your 80% credence on the same thing. And neither of us posts our reasoning; we just post our credences. It's a mistake for you to then say "I'll update my credence a few percent because FCCC might have other evidence." For this reason, providing supporting arguments is a net benefit, irrespective of EA's accuracy of forecasts.
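
A toy numeric version of this, with made-up likelihood ratios, shows how large the distortion can be:

```python
# Arguments A, B, C each have likelihood ratio 2 in favour of a claim;
# the shared prior is 50-50 (odds 1:1). All numbers are invented.

prior_odds = 1.0
lr = {"A": 2.0, "B": 2.0, "C": 2.0}

my_odds = prior_odds * lr["A"] * lr["B"]          # 4:1  (~80%)
your_odds = my_odds * lr["C"]                     # 8:1  (~89%)

# Correct pooling: your evidence subsumes mine, so pooled odds are just 8:1.
# Naive pooling treats your credence as independent evidence and multiplies
# my odds by your full update, double counting A and B:
naive_odds = my_odds * (your_odds / prior_odds)   # 32:1 (~97%)

print(my_odds, your_odds, naive_odds)
```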

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T07:15:56.107Z · EA · GW

The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!

Yes you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says "I haven't thought much about it", it should be an indicator to not update your own credence by very much at all.

I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them

I fully agree. My problem is that this is not the current state of affairs for the majority of Forum users, in which case, I have no reason to update my credences because an uncalibrated random person says they're 90% confident without providing any reasoning that justifies their position. All I'm asking for is for people to provide a good argument along with their credence.

I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.

I think that they should be emulated. But superforecasters have reasoning to justify their credences. They break problems down into components that they're more confident in estimating. This is good practice. Providing a credence without any supporting argument is not.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T06:57:10.039Z · EA · GW

I'm not sure how you think that's what I said. Here's what I actually said:

A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too.

I thought I was fairly clear about what my position is. Credences have internal value (you should generate your own credence). Superforecasters' credences have external value (their credence should update yours). Uncalibrated random people's credences don't have much external value (they shouldn't shift your credence much). And an argument for your credence should always be given.

I never said vague words are valuable, and in fact I think the opposite.

This is an empirical question. Again, what is the reference class for people providing opinions without having evidence? We could look at all of the unsupported credences on the forum and see how accurate they turned out to be. My guess is that they're of very little value, for all the reasons I gave in previous comments.

you are concretely making the point that it's additionally bad for them to give explicit credences!

I demonstrated a situation where a credence without evidence is harmful:

If we have different credences and the set of things I've considered is a strict subset of yours, you might update your credence because you mistakenly think I've considered something you haven't.

The only way we can avoid such a situation is either by providing a supporting argument for our credences, OR not updating our credences in light of other people's unsupported credences.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T06:16:42.832Z · EA · GW

Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.

As you should, but Greg is still correct in saying that Y should be provided.

Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)

But again, there's no point in throwing away that 10%.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T06:03:53.339Z · EA · GW

I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".

Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at Y odds"). I like both your solutions (e.g. "if I thought about it for an hour..."). I'd like to see an argument that shows there's an optimal method for representing the uncertainty of a credence. I wouldn't be surprised if someone has the answer and I'm just unaware of it.

I've thought about the coin's 50% probability before. Given a lack of information about the initial forces on the coin, there exists an optimal model to use. And we have reasons to believe a 50-50 model is that model (given our physics models, simulate a billion coin flips with a random distribution of initial forces). This is why I like your "If I thought about it more" model. If I thought about the coin flip more, I'd still guess 49%-51% (depending on the specific coin, of course).

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T04:15:30.854Z · EA · GW

I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.

I agree, but only if they're a reliable forecaster. A superforecaster's credence can shift my credence significantly. It's possible that their credences are based off a lot of information that shifts their own credence by 1%. In that case, it's not practical for them to provide all the evidence, and you are right.

But most people are poor forecasters (and sometimes they explicitly state they have no supporting evidence other than their intuition), so I see no reason to update my credence just because someone I don't know is confident. If the credence of a random person has any value to my own credence, it's very low.

This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.

That would depend on the question. Sometimes we're interested in feelings for their own sake. That's perfectly legitimate because the actual evidence we're wanting is the data about their feelings. But if someone's giving their feelings about whether there are an infinite number of primes, it doesn't update my credences at all.

I think opinions without any supporting argument worsen discourse. Imagine a group of people thoughtfully discussing evidence, then someone comes in, states their feelings without any evidence, and then leaves. That shouldn't be taken seriously. Increasing the proportion of those people only makes it worse.

Bayesians should want higher-quality evidence. Isn't self-reported data unreliable? And that's when the person was there when the event happened. So what is the reference class for people providing opinions without having evidence? It's almost certainly even more unreliable. If someone has an argument for their credence, they should usually give that argument; if they don't have an argument, I'm not sure why they're adding to the conversation.

I'm not saying we need to provide peer-reviewed articles. I just want to see some line of reasoning demonstrating why you came to the conclusion you made, so that everyone can examine your assumptions and inferences. If we have different credences and the set of things I've considered is a strict subset of yours, you might update your credence because you mistakenly think I've considered something you haven't.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T03:43:42.089Z · EA · GW

You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.

Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment "There's also a lot of pseudo-superforecasting ... without any evidence backing up those credences." I didn't say "without stating any evidence backing up those credences." This is not a guess on my part. I've seen comments where they say explicitly that the credence they're giving is a first impression, and not something well thought out. It's fine for them to have a credence, but why should anyone care what your credence is if it's just a first impression?

See Greg Lewis's recent post; I'm not sure if you disagree.

I completely agree with him. Imprecision should be stated and significant figures are a dumb way to do it. But if someone said "I haven't thought about this at all, but I'm pretty sure it's true", is that really all that much worse than providing your uninformed prior and saying you haven't really thought about it?

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T08:30:18.637Z · EA · GW

From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

Sure there is: By communicating, we're trying to update one another's credences. You're not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there's no need for supporting evidence.

if only to avoid problems of ambiguous language.

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

As you say, people can legitimately have credences about anything. It's how people should think. But if you're going to post your credence, provide some evidence so that you can update other people's credences too.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T04:12:06.875Z · EA · GW

EA epistemology is weaker than expected.

I'd say nearly everyone's ability to determine an argument's strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as "people make logic mistakes so you might have too", rather than actually identifying the weaknesses in an argument. There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually understanding how they work. Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.), but outside of that, I'd say we're just as wrong as anyone else.

*Some meta-arguments are valid, like discussions on logical grounding of particular methodologies, e.g. "Falsification works because of the law of contraposition, which follows from the definition of logical implication".

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-08T03:29:22.680Z · EA · GW

Yeah, that's right. The problem with my toy model is that it assumes that funds can actually estimate their optimal bid, which would need to be an exact prediction of their future returns at an exact time, which is not possible. Allowing bids to reference a single, agreed-upon global index reduces the problem to a prediction of costs, which is much easier for the funds. And in the long run, returns can't be higher than the return of the global index, so it should maximize long-run returns.

However, most (?) indices are made by committees, which I don't like, so I wanted to see other people's ideas for making this workable. (But index committees are established and seem to work well, so relying on them is less risky than setting up a brand-new committee as proposed in that government report.)

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T14:51:16.872Z · EA · GW

Yeah, it's definitely flawed. I was more thinking that the bids could be made as a difference from an index (probably a global one). So the profit-maximizing bids for the funds would be the index return (whatever it happens to be) minus their expected costs. And then you have large underwriters of the funds, who make sure that the funds' processes are sound. What I'd like is everyone to be in Vanguard/Blackrock, but there should be some mechanism for others to overthrow them if someone can match the index at a lower cost.
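
A toy illustration of bids quoted relative to the index (fund names and numbers are invented):

```python
# Funds bid an offset against a global index; the most competitive offset
# (i.e. the lowest expected cost) wins the savers.

index_return = 0.07                          # realized return, known ex post
bids = {"FundA": -0.0010, "FundB": -0.0007}  # offsets vs the index (fees)

best = max(bids, key=bids.get)
print(best, index_return + bids[best])       # FundB, index minus 7 bps
```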

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T14:45:47.716Z · EA · GW

Caught red-handed. I'd been thinking about this idea for a while and was trying to get the maths to work last night, so I had my prison/immigration idea next to me for reference.

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?

Thanks. I'm not the best person to ask about auctions. For people looking for an introduction, this video is pretty good. If anyone's got a good textbook, I'd be interested.
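
For anyone who wants the mechanics spelled out, a minimal sketch of the sealed-bid second-price rule (bids invented):

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes truthful bidding the dominant strategy.

def second_price_auction(bids: dict) -> tuple:
    """bids maps bidder -> bid amount; returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

print(second_price_auction({"A": 100, "B": 80, "C": 95}))  # ('A', 95)
```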

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-07T08:36:14.581Z · EA · GW

Ah yes, I mentioned in other comments that regulation would keep private prisons in check too. I should have restated it here. I am in favour of checks and balances, which is why my goal for the system contains "...within the limits of the law". I agree with almost everything you say here (I'd keep some public prisons until confident that the market is mature enough to handle all cases better than the public system, but I wouldn't implement your 10-year loan).

Human rights laws. Etc.

Yep, I'm all for that. One thing that people are missing is that the goal of a system kind of... "compresses" the space of things you want to happen. That compression is lossy. You want that goal to lose as few things as possible, but you will lose some things. To fix that, you will need some regulation to make sure the system works in the important edge cases.

prisons may not have a strong incentive to care about the welfare of the prisoners whilst they are in the prison

This is incorrect. They do have a strong incentive, since the contract comes into effect immediately after the auction: If a crime happens in their prison, the prison has to pay the government. The resulting problem is that prisons have an incentive to hide these crimes. So I recommended that prisons be outfitted with cameras and microphones that are externally audited.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-07T07:12:29.844Z · EA · GW

Basically everyone was convinced by the theory of small loans to those too poor to receive finance.

I was against microfinance, but I also don't know how they justified the idea. I think empirical evidence should be used to undermine certain assumptions of models, such as "People will only take out a loan if it's in their own interests". Empirically, that's clearly not always the case (e.g. people go bankrupt from credit cards), and any model that relies on that statement being true may fail because of it. A theoretical argument with bad assumptions is a bad argument.

"Any prison system that does so [works] will look similar to mine, e.g. prisons would need to get paid when convicts pay tax."
Blanket statements like this – suggesting your idea or similar is the ONLY way prisons can work – still concern me and make me think that you value theoretical data too highly compared to empirical data.

That's not what I said. I said "Most systems can work well in the short run", but systems that don't internalize externalities are brittle ("For a system to work well and be robust"). Internalizing externalities has a very strong evidence base (theoretical and empirical). If anyone can show me a system that internalizes externalities and doesn't look like my proposal, I will concede.

I still think it could help the case to think about how a pilot prison could be made to produce useful data.

I think we agree that you want "cheap and informative tests". Some of the data you're talking about already exists (income tax data, welfare payments, healthcare subsidies), which is cheap (because it already exists) and informative (because it's close to the data prisons would need to operate effectively).

Social impact bonds for prisons are already in place, and that system is similar to mine in some respects (though it has poor, ad hoc justification, and so the system should fail in predictable ways). You're right about me not being interested in those systems. Social impact bonds are probably the best reference class we have for my proposal. But if they failed, I wouldn't update my beliefs very much since the theory for the impact bonds is so poor.

Hmm. Actually, you're right, you can make a small trial work.

1. Randomly select enough inmates to fill US prisons. Randomly put 50 percent of those inmates in set A, and the other inmates in set B.

2. Allow any prison to bid on the contracts of set B. The participating prisons have to bid on all of those inmates' contracts.

3. Select the prisons in such a way that those prisons are filled, and that the sum of the bids is maximized. Because all prisons are bidding, you get a thick market, and the assumptions of my argument should hold (you may have to compensate prisons that make reasonable bids). See the sketch after this list.

4. Select random prisons to take on set A. Does this introduce selection bias? Yes, and that's exactly the point. In my proposal, the best prisons self-select by making higher bids (criterion 1).

5. Observe.
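To make step 3 concrete, here's a minimal sketch in Python. It's only illustrative: the data structures are hypothetical, and it uses a greedy heuristic where a real implementation would want an optimal assignment solver. The point is the objective: maximize the sum of winning bids subject to each prison's capacity.

```python
def allocate(bids, capacity):
    """Assign set-B inmates to prisons, approximately maximizing total bids.

    bids: {prison: {inmate: bid}} -- every prison bids on every contract.
    capacity: {prison: number of beds}.
    Returns {inmate: prison}.
    """
    # Consider bids from highest to lowest (greedy heuristic, not optimal).
    all_bids = sorted(
        ((bid, prison, inmate)
         for prison, row in bids.items()
         for inmate, bid in row.items()),
        reverse=True,
    )
    assignment = {}
    filled = {prison: 0 for prison in capacity}
    for bid, prison, inmate in all_bids:
        if inmate not in assignment and filled[prison] < capacity[prison]:
            assignment[inmate] = prison
            filled[prison] += 1
    return assignment
```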

I'm interested to hear what you think.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-06T19:35:33.336Z · EA · GW
Theoretical reasons are great but actual evidence [is] more important.

Good theoretical evidence is "actual evidence". No amount of empirical evidence is up to the task of proving there are an infinite number of primes. Our theoretical argument showing there are an infinite number of primes is the strongest form of evidence that can be given.
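(For completeness, the standard argument: suppose $p_1, \dots, p_n$ were all the primes, and let $N = p_1 p_2 \cdots p_n + 1$. No $p_i$ divides $N$, since each leaves a remainder of 1, so any prime factor of $N$ lies outside the list, a contradiction.)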

That's not to say I think my argument is airtight, however. My argument could probably be made with more realistic assumptions (alternatively, more realistic assumptions might show my proposed system is fundamentally mistaken). My model also just describes an end state, rather than providing any help on how to build this market from scratch (implementing the system overnight would produce a market that's too thin and inefficient).

Theory can go wrong if the assumptions are false or the inferences are invalid. Both of these errors happen all the time of course, and in subtle ways too, so I agree that empirical evidence is important. But even with data, you need a model to properly interpret it. Data without a model only tells you what was measured, which usually isn't that informative. No matter what the numbers are, no one can say the UK prison system is better than that of the US without some assumptions. And no one can get data on a system that hasn't been tried before (depending on the reference classes available to you).

Consider international development. The effective altruism community has been saying for years (and backing up these claims) that in development you cannot just do things that theoretically sound like they will work (like building schools) but you need to do things that have empirical evidence of working well.

Can you show me a theoretical model of school building that would convince me that it would work when it would, in fact, fail? I don't think I would be convinced. (How can you be sure the teachers are good? How can you be sure good teachers will stay? How can you be sure students will show up?) You can't bundle all theory in the same basket (otherwise I could point to mathematics and say theory is always sound). Whether a theory is good evidence hinges on whether the assumptions are close enough to reality.

People are very very good at persuading themselves in what they believe [...]
Be wary of the risk of motivated reasoning
[But your] claim might be true and you might have good evidence for it

The process of building the argument changed what I believed the system should be. I have no dog in this fight. I claimed "If the UK system currently works well, I suspect that you have good regulators who are manually handling the shortcomings of the underlying system" for several reasons:

  • It is true of so many systems, including all of the prison systems I've seen (reference class).
  • For a system to work well and be robust, externalities need to be internalized. Any prison system that does so will look similar to mine, e.g. prisons would need to get paid when convicts pay tax. You would have mentioned it if the UK system did so.
  • The data I've seen on UK prisons shows about 50% recidivism, which I don't think would be the case under a functional system.

Don’t under-value evidence, you might miss things. An underplayed strength of your case for fixing private prisons is that the solution you suggest is testable. A single pilot prison could be run, data collected, and lessons learned. To some degree this could even be done by a committed entrepreneur with minimal government support.

A pilot prison wouldn't work because it wouldn't have competitive bidding. I did mention in Aaron Gertler's comment what data should be collated prior to implementation. I'm all for looking at the relevant data.

If one [of UK public prisons or UK private prisons] is clearly failing, it motivates change in the other.

That doesn't seem like a good system.

The bidding process and the actualisation of losses (tied to real social interests) keep the prisons in check.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-04T05:42:10.603Z · EA · GW

Thanks for the kind comment.

My guess is that the US would be the best place to start (a thick "market", poor outcomes), but I'm talking about prison systems in general.

I'm not familiar with the UK system, but I haven't heard of any prison system with a solid theoretical grounding. Theory is required because we want to compare a proposed system to all other possible systems and conclude that our proposal is the best. You want theoretical reasons to believe that your system will perform well, and that good performance will endure.

Most systems can work well in the short run, but that doesn't mean they're good. For example, if I were ruled by a good king, I still wouldn't want to have a monarchy in place. If the UK system currently works well, I suspect that you have good regulators who are manually handling the shortcomings of the underlying system.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-24T02:12:08.536Z · EA · GW

I emailed Robin Hanson about my immigration idea in 2018. His post was in 2019. But to be fair, he came up with futarchy well before I started working on policy.

pay annual dividends proportional to these numbers

Doing things in proportion (rather than selling the full value) undervalues the impact of good forecasts. Since making forecasts has a cost, proportional payment (where the proportionality constant is not equal to 1) would generate inefficient outcomes. Imagine the contribution of the immigrant is $100 and it costs $80 to make the forecast: paying forecasters anything less than 80% of the contribution will cause poor outcomes.
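Spelling out the arithmetic: a forecaster who is paid fraction $p$ of a contribution worth $V$ will only incur the forecasting cost $C$ when

$$pV \ge C \iff p \ge C/V,$$

so with $V = \$100$ and $C = \$80$, any $p < 0.8$ means a forecast with positive net value ($V - C = \$20$) never gets made.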

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-23T10:18:45.794Z · EA · GW
Does any country in the world create estimates of individual citizens' cost to social services? Does any country in the world have a system where companies can bid for the right to collect individuals' future tax revenue? Has anyone else (politicians, researchers, etc.) ever argued for a system resembling this one?

I'm not aware of any "net tax contribution" measurement, but I haven't done an extensive search either. I'm not aware of anyone arguing for anything close to the system I proposed. The closest (but still far away) system that I've heard of is social impact bonds, which have been implemented in Australia to some degree. In the implementations of them that I've seen, they give prisons bonuses for reaching a low level of recidivism.

There are several weaknesses in that model. Maybe one prison happens to be allocated inmates who are unlikely to reoffend, in which case it gets paid for no reason (bidding on inmate contracts stops this). A reoffending murderer is given the same weight as a reoffending thief, so the prison is indifferent to whom it rehabilitates (valuing "equivalent compensation" of crimes stops this). And the government is not cost-minimizing (again, bidding stops this). It also doesn't incentivize prisons to increase the net tax contribution of the convict (whereas mine does).

I'd have loved to hear thoughts in the post on how we might "carefully work towards" a system that works so differently from any that (AFAIK) exist in the world today. What intermediate steps get us closer to this system without creating a full transition?

I have some ideas, but no strong theoretical reasons for believing they're ideal.

You could implement a subset of the system: Just the income tax paid and welfare used. That data already exists. This data could be linked to the inmate's attributes, which would allow the prisons to have good reference classes from which they could estimate their profit-maximizing bids. The primary difference between this half-measure and the full proposal is that the prisons don't pay the government when a crime is committed (secondarily, income tax paid is different from tax incidence).
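Here's a minimal sketch, in Python, of how a prison might use that existing data to price a bid. Everything here is hypothetical (the attribute names, the record format, the matching rule); the point is just that a reference class of past cases turns into an estimate of a contract's value.

```python
from statistics import mean

def estimate_bid(inmate, records, expected_costs):
    """Reference-class estimate of a profit-maximizing bid (illustrative only).

    inmate: dict of observable attributes, e.g. {"age_band": ..., "offence": ...}.
    records: past cases with the same attributes plus "net_contribution"
             (present value of income tax paid minus welfare used).
    expected_costs: the prison's own expected housing/rehabilitation costs.
    """
    # Reference class: past cases that share the attributes we can observe.
    similar = [
        r["net_contribution"]
        for r in records
        if r["age_band"] == inmate["age_band"] and r["offence"] == inmate["offence"]
    ]
    if not similar:
        return None  # no reference class to price from, so don't bid
    # A rational prison bids at most the expected contract value minus its
    # costs; bidding below that is where its expected profit comes from.
    return mean(similar) - expected_costs
```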

Once you have the right metrics for the full system (valuations of equivalent compensation for all crimes, and tax incidence), if for some reason tax incidence data can't be generated for the past, you could keep the half-measure system in place for a few decades. During that time, you do the measurements and store the data. When it's time to implement the full system, prisons again have good reference classes from which they can estimate their optimal bids.

If politicians are particularly skittish about the idea, they could say to the prisons "We'll pay you 80% of what we would under the current scheme, and pay you 20% of what we would under the new scheme." I wouldn't recommend this, because some viable rehabilitative measures won't be taken.
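That is, each prison's payment would be the blend $P = 0.8\,P_{\text{current}} + 0.2\,P_{\text{proposed}}$. Since the prison only captures 20% of the benefits the new scheme would pay for, a rehabilitative measure costing $c$ with a new-scheme payoff of $b$ is only taken when $0.2b > c$; any measure with $c < b \le 5c$ is viable but gets skipped, which is exactly the failure I'd expect.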

I'd have been more convinced by the post if it referred to any existing policy which mirrors any aspect of these proposals.

I think the best argument for conservatism is "Out of all possible sets of institutions, the one we're in is pretty good". But this doesn't mean "never implement radical change" (if you never do so, you'll get stuck on a local optimum). It just means that when you're implementing radical change, you have to have good evidence. I wouldn't implement this system based on the mathematical justification I've given here. But I think the problems with the argument could be solved with minimal changes to the actual system. And if one of the assumptions of the argument fails when the system is implemented, we can always change back to the existing system. But if it is successful, it has implications for other policy areas (such as immigration).

An example of radical, evidence-supported policy is the kidney-exchange reform. Rather than exchanging kidneys between pairs of people, they now chain groups of people together. The new system facilitates far more exchanges between willing donors and people in need. Countries that waited for the "evidence" (by which they meant "empirical evidence") let people die because they didn't value the logical argument.

The hospital-intern matching algorithm is another example (it was also applied to student-to-school allocation).

The most prominent system with theoretical support is free markets, which are backed by the general equilibrium model. When the assumptions of that model stray too far from reality, the system breaks down. When they're close enough, it works very well.

So the question is "Are the assumptions of the prison-policy argument close enough to reality?" I think they mostly are, and where they're not, the error is balanced by a countervailing error.