## Posts

A Model of Rational Policy: When Is a Goal “Good”? 2020-10-10T16:11:16.605Z
Maximizing the Long-Run Returns of Retirement Savings 2020-07-07T08:52:48.271Z
How to Fix Private Prisons and Immigration 2020-06-19T10:40:58.609Z
Two Requirements for Any Welfare Payment 2020-06-17T13:07:03.935Z

Comment by FCCC on Politics is far too meta · 2021-03-20T08:49:49.668Z · EA · GW

Criticism can be great. But I think we need an agreed-upon order of critical focus to have more productive arguments. Maybe this:

1. Given the assumptions of the argument, does the policy satisfy its specified goals?
2. Are the goals of the policy good? Is there a better goalset we could satisfy?
3. Is the policy technically feasible to implement?
4. Is the policy politically feasible to implement?

I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas.

Comment by FCCC on Politics is far too meta · 2021-03-18T22:05:24.983Z · EA · GW

saying that it's unfeasible will tend to make it more unfeasible

Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people are for their infeasibility predictions: Take a list of policies that passed, a list that failed to pass, mix them together, and see how much better you can unscramble them than random chance. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.

Poor meta-arguments I've noticed on the Forum:

• Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because "I probably got 100 points, because that's the average.")
• Bringing up common knowledge, i.e. things that are true, but everyone in the conversation already knows and applies that information. (E.g. "Logical arguments can be wrong in subtle ways, so just because your argument looks airtight, doesn't mean it is". A much better contribution is to actually point out the weaknesses in the specific argument that's in front of you.)
• And, as you say, predictions of infeasibility.

Comment by FCCC on Good v. Optimal Futures · 2020-12-12T17:45:08.282Z · EA · GW

Ah, another victim of a last-minute edit (originally, I wrote "which is necessarily possible").

within some small number

In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous, because it falls right into a slippery slope (if a difference of ε doesn't make a real difference, what about drawing the line at 2ε, and then what about 3ε?).

But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. going for only ideal outcomes) makes sense when it comes to designing long-lasting institutions, since the small (but non-infinitesimal) differences add up across many people and over a long time.

Comment by FCCC on Good v. Optimal Futures · 2020-12-12T07:22:23.116Z · EA · GW

I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-10T07:36:52.213Z · EA · GW

Agreed, but at least in theory, a model that takes into account inmate's welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take into account inmate welfare.

What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore the inmate's wellbeing, and the prisons' bids would drop to account for any extra cost to support the inmate's wellbeing or loss to societal contribution. That's what I was trying to do by saying the goal was to "maximize the total societal contribution of any given set of inmates within the limits of the law". There definitely should be limits on how a prison can treat its inmates, even if it were to serve the rest of society's interests.

But the more I think about it, the more I like the idea of having the inmate's welfare as part of the funding function. It would avoid having to go through the process of developing the right laws to make the prison system function as intended, and it's better at self-correcting when compared to laws (i.e. the prisons that are better at supporting inmate welfare will outcompete the prisons that are bad at it). And it would probably reduce the number of people who think that supporters of this policy change don't care about what happens to inmates, which is nice.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-08T00:46:51.987Z · EA · GW

That's a good point. You could set up the system so that it's "societal contribution" + funding - price (which is what it is at the moment) + "Convict's QALYs in dollars" (maybe plus some other stuff too). The fact that you have to value a murder means that you should already have the numbers to do the dollar conversion of the QALYs.
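The modified funding function described above can be sketched in a few lines. All figures and the QALY conversion rate here are hypothetical illustrations, not part of the original proposal:

```python
# Sketch of the extended funding function:
# payment = societal contribution + external funding - winning bid price
#           + a dollar conversion of the inmate's QALYs.

QALY_DOLLAR_VALUE = 100_000  # assumed conversion rate, purely illustrative


def prison_payment(societal_contribution, funding, price, inmate_qalys):
    """Payment to the prison under the proposed funding function."""
    return societal_contribution + funding - price + inmate_qalys * QALY_DOLLAR_VALUE


# e.g. a contract with $300k societal contribution, $80k funding,
# a $250k winning bid, and 0.5 QALYs gained by the inmate:
print(prison_payment(300_000, 80_000, 250_000, 0.5))  # 180000.0
```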

I'm hesitant to make that change though. The change would allow prisons to trade off societal benefit for the inmate's benefit, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the deterrence effect of prisons on would-be offenders, so denying the trade-off is not necessarily an anti-utilitarian stance.

And denying the trade-off doesn't mean the inmate is not looked after either. There's a kind of... "Laffer Curve" equivalent where decreasing inmate wellbeing beyond a certain point necessarily means a reduction in societal contribution (destroying an inmate's mind is not good for their future societal contribution). So inmate wellbeing is not minimized by the system I've described (it's not maximized either).

I'm not 100 percent set on the exact funding function. I might change my mind in the future.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-12-07T23:02:49.124Z · EA · GW

You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.

But, for this particular system, if a prison is large enough to lobby, then they're going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison has to pay.

One way prisons could avoid this is by paying someone else to take on these liabilities. But, in the contract, this person could ensure the prison pays for compensation for any lobbying that damages them.

So a lobbying prison (1) benefits from more inmates in the future, (2) has to pay the cost of lobbying, and (3) has to pay more for the additional liabilities of their past and current inmates (not for their future inmates though, because the liabilities will be offset by a lower initial price for those inmate contracts). Points 1 and 2 are the same under the current prison system. Point 3 is new, and it should push in the direction of less lobbying, at least once the system has existed for a while.

Comment by FCCC on Lotteries for everything? · 2020-12-06T23:41:15.321Z · EA · GW

There are mechanisms that aggregate distributed knowledge, such as free-market pricing.

I cannot really evaluate the value of a grant if I have not seen all the other grants.

Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).
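The chess point can be made concrete with a small simulation (the noise level and the uniform quality distribution are arbitrary assumptions): an evaluator whose scores are only a noisy function of true quality still picks the better of two grants well above the 50% a lottery achieves.

```python
import random

random.seed(0)


def picks_better(true_a, true_b, noise):
    """A noisy evaluator: observes quality plus Gaussian noise, funds the higher score."""
    score_a = true_a + random.gauss(0, noise)
    score_b = true_b + random.gauss(0, noise)
    return (score_a > score_b) == (true_a > true_b)


trials = 100_000
correct = sum(
    picks_better(random.random(), random.random(), noise=0.5) for _ in range(trials)
)
print(correct / trials)  # well above the 0.5 of a uniformly random chooser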

In addition, if there would be an easy and obvious system people would probably already have implemented it.

Wouldn't the "efficient-policy hypothesis" imply that lotteries are worse than the existing systems? I don't think you really believe this. Are our systems better than most hypothetical systems? Usually, but this doesn't mean there's no low-hanging fruit. There's plenty of good policy ideas that are well-known and haven't been implemented, such as 100 percent land-value taxes.

Let's take a subset of the research funding problem: How can we decide what to fund for research about prisoner rehabilitation? I've suggested a mechanism that would do this.

Comment by FCCC on Lotteries for everything? · 2020-12-04T06:11:29.827Z · EA · GW

When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory. People come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion. Some desirable goals cannot be achieved simultaneously (an example of this is Arrow's impossibility theorem).

Lotteries give every ticket an equal chance. And if each person has one ticket, this implies each person has an equal chance. But this goal is in conflict with more important goals. I would guess that lotteries are almost never the best mechanism. Where they improve the situation is over already-bad mechanisms. But in that case, I'd look further for even better systems.

Comment by FCCC on [deleted post] 2020-10-25T03:14:29.679Z

If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.

Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.

Comment by FCCC on EA's abstract moral epistemology · 2020-10-22T06:18:37.099Z · EA · GW

EA's essential beliefs, as I see them, are:

• Some possible timelines are much better than others
• What "feels" like the best action often won't result in anything close to the best possible timeline
• In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: Your moral rule can tell you to only consider your own actions, and disregard their effects on other people's behaviour. I could consider such a person to be an effective altruist, even though they'd be a non-consequentialist. While I think it's fair to say that, after the above beliefs, consequentialism is fairly core to EA, I think the whole EA community could switch away from consequentialism without having to rebrand itself.

The critique targets effective altruists’ tendency to focus on single actions and their proximate consequences and, more specifically, to focus on simple interventions that reduce suffering in the short term.

But she also says EA has a "god’s eye moral epistemology". This seems contradictory. Even if we suppose that most EAs focus on proximate consequences, that's not a fundamental failing of the philosophy; it's a failed application of it. If many fail to accurately implement the philosophy, that doesn't imply the philosophy is bad[1]: There's a difference between a "criterion of right" and a "decision procedure". Many EAs are longtermists who essentially use entire timelines as the unit of moral analysis. This clearly is not focused on "proximate consequences". That's more the domain of non-consequentialists (e.g. "Are my actions directly harming anyone?").

The article's an incoherent mess, even ignoring the Communist nonsense at the end.

1. This is in contrast with a policy being bad because no one can implement it with the desired consequences. ↩︎

Comment by FCCC on Can my self-worth compare to my instrumental value? · 2020-10-11T17:06:27.375Z · EA · GW

It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.

Comment by FCCC on Can my self-worth compare to my instrumental value? · 2020-10-11T14:59:01.338Z · EA · GW

I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.

So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g. whether people are morally repulsed by certain actions in the current time and place) rather than normative truths.

I do think having more focus on moral value is beneficial, not just because it's moral, but because it endures. If you help a lot of people, that's something you'll value until you die. Whereas if I put a bunch of my time into playing chess, maybe I'll consider that to be a waste of time at some point in the future. There's other things, like enjoying relationships with your family, that also aren't anywhere close to the most moral thing you could be doing, but you'll probably continue to value.

You're allowed to value things that aren't about serving the world.

Comment by FCCC on Timeline Utilitarianism · 2020-10-10T16:39:56.983Z · EA · GW

Hey Bob, good post. I've had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with a different formalism.

The trolley problem gives you a choice between two timelines (t₁ and t₂). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ t₁, and “You pull the lever” ∉ t₂. Timelines contain statements that are combined as well as statements that are atomized. For example, since “You pull the lever”, “The five live”, and “The one dies” are all elements of t₁, you can string these into a larger statement that is also in t₁: “You pull the lever, and the five live, and the one dies”. Therefore, each timeline contains a very large statement that uniquely identifies it within any finite subset of T (the set of all timelines). However, timelines won’t be our unit of analysis because the statements they contain have no subjective empirical uncertainty.

This uncertainty can be incorporated by using a probability distribution of timelines, which we’ll call a forecast (F). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: F₁ guarantees t₁ (the pull-the-lever timeline) and F₂ guarantees t₂ (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, F₁ would contain “The five live with a credence of 1”. So, the trolley problem reveals that you either morally prefer F₁ (denoted F₁ ≻ F₂), prefer F₂ (denoted F₂ ≻ F₁), or you believe that both forecasts are morally equivalent (denoted F₁ ∼ F₂).

Comment by FCCC on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-19T06:53:03.994Z · EA · GW

I watched those videos you linked. I don't judge you for feeling that way.

Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals.

Another thing to keep in mind: When we train particular physical actions, we get better at repeating that action. Athletes sometimes repeat complex, trained actions before they have any time to consciously decide to act. I assume the same thing happens with our emotions: If we feel a particular way repeatedly, we're more likely to feel that way in future, maybe even when it's not warranted.

We can be motivated to do something good for the world in lots of different ways. Helping people by solving problems gives my life meaning and I enjoy doing it. No negative emotions needed.

Comment by FCCC on The case of the missing cause prioritisation research · 2020-08-23T02:55:55.761Z · EA · GW

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be.

I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:

Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...

Unless by policy, you mean "the entirety of what government does", then yes. But given that you're going to consider one area at a time, and you're "only including all the levers between which you’re considering", you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily going to be the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than think about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-27T06:36:33.055Z · EA · GW

And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".
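The 3.3-bit figure falls straight out of the formula: reducing n equally likely possibilities by a factor of k yields log₂(k) bits, regardless of where you start. A quick check, assuming a uniform distribution:

```python
import math


def bits(before, after):
    """Information gained by shrinking `before` equally likely possibilities to `after`."""
    return math.log2(before / after)


print(bits(100, 10))  # ~3.32 bits
print(bits(10, 1))    # ~3.32 bits: same proportional reduction, same information
```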

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-25T17:26:51.081Z · EA · GW

I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + x, where x is either $0, $0.1, $0.2, ..., or $0.9. (Note that $1 is analogous to 1%, and x is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of x, given a uniform distribution, is $0.45. Thus, against $1, x adds almost half the original value, i.e. $0.45/$1 (45%). But what if I instead gave you $99 + x? $0.45 is less than 1% of the value of $99.

The leftmost digit is more valuable because it corresponds to a greater place value (so the magnitude of the value difference between places is going to be dependent on the numeric base you use). I don't know information theory, so I'm not sure how to calculate the value of the first two digits compared to the third, but I don't think per-thousandths has 50% more information than per-cents.
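The distinction can be checked numerically: every decimal digit carries the same log₂(10) ≈ 3.32 bits of information, while the expected dollar impact of learning a digit scales with its place value. A sketch under the uniform-digit assumption used above:

```python
import math

# Information content is identical for every decimal digit position.
BITS_PER_DECIMAL_DIGIT = math.log2(10)


def expected_value_of_digit(place_value):
    """Expected dollar swing from learning a uniformly distributed digit at this place."""
    return 4.5 * place_value  # a uniform digit 0-9 averages 4.5


print(BITS_PER_DECIMAL_DIGIT)         # ~3.32 bits, whether leftmost or rightmost
print(expected_value_of_digit(10.0))  # 45.0: the tens digit
print(expected_value_of_digit(0.1))   # 0.45: the extra decimal place
```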

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T08:17:10.610Z · EA · GW

From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

This statement is just incorrect.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T08:07:43.521Z · EA · GW

Basically, yeah.

But I do think it's a mistake to update your credence based off someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T07:58:39.329Z · EA · GW

But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.

My definition of an invalid argument contains "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using "invalid" incorrectly here.

And I'd also say that that example meta-argument could sometimes be useful.

Yes, if someone believed that having a logical argument is a guarantee, and they've never had one of their logical arguments have a surprising flaw, it would be valid to point that out. That's fair. But (as you seem to agree with) the best way to do this is to actually point to the flaw in the specific argument they've made. And since most people who are proficient with logic already know that logic arguments can be unsound, it's not useful to reiterate that point to them.

Also, isn't your comment primarily meta-arguments of a somewhat similar nature to "people make logic mistakes so you might have too"?

It is, but as I said, "Some meta-arguments are valid". (I can describe how I delineate between valid and invalid meta-arguments if you wish.)

Describing that as pseudo-superforecasting feels unnecessarily pejorative.

Ah sorry, I didn't mean to offend. If they were superforecasters, their credence alone would update mine. But they're probably not, so I don't understand why they give their credence without a supporting argument.

Did you mean "some ideas that are probably correct and very important"?

The set of things I give 100% credence is very, very small (i.e. claims that are true even if I'm a brain in a vat). I could say "There's probably a table in front of me", which is technically more correct than saying that there definitely is, but it doesn't seem valuable to qualify every statement like that.

Why am I confident in moral uncertainty? People do update their morality over time, which means that they were wrong at some point (i.e. there is demonstrably moral uncertainty), or the definition of "correct" changes and nobody is ever wrong. I think "nobody is ever wrong" is highly unlikely, especially because you can point to logical contradictions in people's moral beliefs (not just unintuitive conclusions). At that point, it's not worth mentioning the uncertainty I have.

I definitely don't think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.

Yeah, I'm too focused on the errors. I'll concede your point: Some proportion of EAs are here because they correctly evaluated the arguments. So they're going to bump up the average, even outside of EA's central ideas. My reference classes here were all the groups that have correct central ideas, and yet are very poor reasoners outside of their domain. My experience with EAs is too limited to support my initial claim.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T08:12:06.341Z · EA · GW

It's almost irrelevant: people should still provide the supporting argument for their credence, otherwise evidence can get "double counted" (and there are "flow on" effects where the first person who updates another person's credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something. And you have arguments A, B and C supporting your 80% credence. And neither of us posts our reasoning; we just post our credences. It's a mistake for you to then say "I'll update my credence a few percent because FCCC might have other evidence." For this reason, providing supporting arguments is a net benefit, irrespective of EA's accuracy of forecasts.
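The double-counting worry can be made concrete with a toy Bayesian calculation (the prior and likelihood ratios are invented for illustration): if both of us have already updated on arguments A and B, treating your credence as independent evidence applies their likelihood ratios twice.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayesian update: multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds


def prob(odds):
    return odds / (1 + odds)


# Both of us start at even odds and have seen arguments A (LR=3) and B (LR=2).
shared = posterior_odds(1.0, [3, 2])
print(prob(shared))  # ~0.857: the correct posterior given A and B

# If I then treat your credence as *independent* evidence, A and B count twice:
double_counted = posterior_odds(1.0, [3, 2, 3, 2])
print(prob(double_counted))  # ~0.973: overconfident, from re-applying the same evidence
```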

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T07:15:56.107Z · EA · GW

The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!

Yes you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says "I haven't thought much about it", it should be an indicator to not update your own credence by very much at all.

I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them

I fully agree. My problem is that this is not the current state of affairs for the majority of Forum users, in which case, I have no reason to update my credences because an uncalibrated random person says they're 90% confident without providing any reasoning that justifies their position. All I'm asking for is for people to provide a good argument along with their credence.

I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.

I think that they should be emulated. But superforecasters have reasoning to justify their credences. They break problems down into components that they're more confident in estimating. This is good practice. Providing a credence without any supporting argument is not.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T06:57:10.039Z · EA · GW

I'm not sure how you think that's what I said. Here's what I actually said:

A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too.

I thought I was fairly clear about what my position is. Credences have internal value (you should generate your own credence). Superforecasters' credences have external value (their credence should update yours). Uncalibrated random people's credences don't have much external value (they shouldn't shift your credence much). And an argument for your credence should always be given.

I never said vague words are valuable, and in fact I think the opposite.

This is an empirical question. Again, what is the reference class for people providing opinions without having evidence? We could look at all of the unsupported credences on the forum and see how accurate they turned out to be. My guess is that they're of very little value, for all the reasons I gave in previous comments.

you are concretely making the point that it's additionally bad for them to give explicit credences!

I demonstrated a situation where a credence without evidence is harmful:

If we have different credences and the set of things I've considered is a strict subset of yours, you might update your credence because you mistakenly think I've considered something you haven't.

The only way we can avoid such a situation is either by providing a supporting argument for our credences, OR not updating our credences in light of other people's unsupported credences.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T06:16:42.832Z · EA · GW

Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.

As you should, but Greg is still correct in saying that Y should be provided.

Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)

But again, there's no point in throwing away that 10%.

Comment by FCCC on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T06:03:53.339Z · EA · GW

I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".
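This is easy to simulate (the noise model is an arbitrary assumption): take noisy-but-informative estimates of a true probability, and compare their error with and without rounding to the nearest 10%.

```python
import random

random.seed(1)

truths = [random.random() for _ in range(50_000)]
# Estimates that are better than chance: truth plus a little noise, clipped to [0, 1].
estimates = [min(1.0, max(0.0, t + random.gauss(0, 0.05))) for t in truths]
rounded = [round(e, 1) for e in estimates]  # round to the nearest 10%


def mse(xs):
    """Mean squared error against the true values."""
    return sum((x - t) ** 2 for x, t in zip(xs, truths)) / len(truths)


print(mse(estimates) < mse(rounded))  # True: rounding only adds error
```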

Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at Y odds"). I like both your solutions (e.g. "if I thought about it for an hour..."). I'd like to see an argument that shows there's an optimal method for representing the uncertainty of a credence. I wouldn't be surprised if someone has the answer and I'm just unaware of it.

I've thought about the coin's 50% probability before. Given a lack of information about the initial forces on the coin, there exists an optimal model to use. And we have reasons to believe a 50-50 model is that model (given our physics models, simulate a billion coin flips with a random distribution of initial forces). This is why I like your "If I thought about it more" model. If I thought about the coin flip more, I'd still guess 49%-51% (depending on the specific coin, of course).

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T04:15:30.854Z · EA · GW

I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.

I agree, but only if they're a reliable forecaster. A superforecaster's credence can shift my credence significantly. It's possible that their credences are based off a lot of information that shifts their own credence by 1%. In that case, it's not practical for them to provide all the evidence, and you are right.

But most people are poor forecasters (and sometimes they explicitly state they have no supporting evidence other than their intuition), so I see no reason to update my credence just because someone I don't know is confident. If the credence of a random person has any value to my own credence, it's very low.
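One crude way to picture this is linear opinion pooling, where the weight you give a forecaster stands in for how calibrated you judge them to be. The function and the numbers are mine, purely for illustration, not a claim about optimal updating.

```python
def pooled_credence(prior, forecaster_credence, weight):
    """Blend my prior with a forecaster's credence, weighted by how
    reliable I judge the forecaster to be (0 = ignore, 1 = defer)."""
    return (1 - weight) * prior + weight * forecaster_credence

# A superforecaster I trust shifts my credence significantly...
print(round(pooled_credence(0.30, 0.80, weight=0.7), 3))   # 0.65
# ...a random commenter with no stated evidence barely moves it.
print(round(pooled_credence(0.30, 0.80, weight=0.05), 3))  # 0.325
```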

This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.

That would depend on the question. Sometimes we're interested in feelings for their own sake. That's perfectly legitimate because the actual evidence we're wanting is the data about their feelings. But if someone's giving their feelings about whether there are an infinite number of primes, it doesn't update my credences at all.

I think opinions without any supporting argument worsen discourse. Imagine a group of people thoughtfully discussing evidence, then someone comes in, states their feelings without any evidence, and then leaves. That shouldn't be taken seriously. Increasing the proportion of those people only makes it worse.

Bayesians should want higher-quality evidence. Isn't self-reported data unreliable? And that's when the person actually witnessed the event. So what is the reference class for people providing opinions without having evidence? It's almost certainly even more unreliable. If someone has an argument for their credence, they should usually give that argument; if they don't have an argument, I'm not sure why they're adding to the conversation.

I'm not saying we need to provide peer-reviewed articles. I just want to see some line of reasoning demonstrating why you came to the conclusion you made, so that everyone can examine your assumptions and inferences. If we have different credences and the set of things I've considered is a strict subset of yours, you might update your credence because you mistakenly think I've considered something you haven't.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T03:43:42.089Z · EA · GW
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.

Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment "There's also a lot of pseudo-superforecasting ... without any evidence backing up those credences." I didn't say "without stating any evidence backing up those credences." This is not a guess on my part. I've seen comments where they say explicitly that the credence they're giving is a first impression, and not something well thought out. It's fine for them to have a credence, but why should anyone care what your credence is if it's just a first impression?

See Greg Lewis's recent post; I'm not sure if you disagree.

I completely agree with him. Imprecision should be stated and significant figures are a dumb way to do it. But if someone said "I haven't thought about this at all, but I'm pretty sure it's true", is that really all that much worse than providing your uninformed prior and saying you haven't really thought about it?

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T08:30:18.637Z · EA · GW
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

Sure there is: By communicating, we're trying to update one another's credences. You're not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there's no need for supporting evidence.

if only to avoid problems of ambiguous language.

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

As you say, people can legitimately have credences about anything. It's how people should think. But if you're going to post your credence, provide some evidence so that you can update other people's credences too.

Comment by FCCC on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T04:12:06.875Z · EA · GW
EA epistemology is weaker than expected.

I'd say nearly everyone's ability to determine an argument's strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as "people make logic mistakes so you might have too", rather than actually identifying the weaknesses in an argument. There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually understanding how they work. Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.), but outside of that, I'd say we're just as wrong as anyone else.

*Some meta-arguments are valid, like discussions on logical grounding of particular methodologies, e.g. "Falsification works because of the law of contraposition, which follows from the definition of logical implication".

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-08T03:29:22.680Z · EA · GW

Yeah, that's right. The problem with my toy model is that it assumes that funds can actually estimate their optimal bid, which would need to be an exact prediction of their future returns at an exact time, which is not possible. Allowing bids to reference a single, agreed-upon global index reduces the problem to a prediction of costs, which is much easier for the funds. And in the long run, returns can't be higher than the return of the global index, so it should maximize long-run returns.

However, most (?) indices are made by committees, which I don't like, so I wanted to see other people's ideas for making this workable. (But index committees are established and seem to work well, so relying on them is less risky than setting up a brand-new committee as proposed in that government report.)

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T14:51:16.872Z · EA · GW

Yeah, it's definitely flawed. I was thinking the bids could be made as a difference from an index (probably a global one). So the profit-maximizing bids for the funds would be the index return (whatever it happens to be) minus their expected costs. And then you have large underwriters of the firms, who make sure that the fund's processes are sound. What I'd like is for everyone to be in Vanguard/Blackrock, but there should be some mechanism for others to overthrow them if someone can match the index at a lower cost.

Comment by FCCC on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T14:45:47.716Z · EA · GW

Caught red handed. I'd been thinking about this idea for a while and was trying to get the maths to work last night, so I had my prison/immigration idea next to me for reference.

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?

Thanks. I'm not the best person to ask about auctions. For people looking for an introduction, this video is pretty good. If anyone's got a good textbook, I'd be interested.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-07T08:36:14.581Z · EA · GW

Ah yes, I have mentioned in other comments that regulation should keep private prisons in check too. I should have restated it here. I am in favour of checks and balances, which is why my goal for the system contains "...within the limits of the law". I agree with almost everything you say here (I'd keep some public prisons until confident that the market is mature enough to handle all cases better than the public system, but I wouldn't implement your 10-year loan).

Human rights laws. Etc.

Yep, I'm all for that. One thing that people are missing is that the goal of a system kind of... "compresses" the space of things you want to happen. That compression is lossy. You want that goal to lose as few things as possible, but you will lose some things. To fix that, you will need some regulation to make sure the system works in the important edge cases.

prisons may not have a strong incentive to care about the welfare of the prisoners whilst they are in the prison

This is incorrect. They do have a strong incentive, since the contract comes into effect immediately after the auction: If a crime happens in their prison, the prison has to pay the government. The resulting problem is that prisons have an incentive to hide these crimes. So I recommended that prisons be outfitted with cameras and microphones that are externally audited.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-07T07:12:29.844Z · EA · GW
Basically everyone was convinced by the theory of small loans to those too poor to receive finance.

I was against microfinance, but I also don't know how they justified the idea. I think empirical evidence should be used to undermine certain assumptions of models, such as "People will only take out a loan if it's in their own interests". Empirically, that's clearly not always the case (e.g. people go bankrupt from credit cards), and any model that relies on that statement being true may fail because of it. A theoretical argument with bad assumptions is a bad argument.

"Any prison system that does so [works] will look similar to mine, e.g. prisons would need to get paid when convicts pay tax."
Blanket statements like this – suggesting your idea or similar is the ONLY way prisons can work still concerns me and makes me think that you value theoretical data too highly compared to empirical data.

That's not what I said. I said "Most systems can work well in the short run", but systems that don't internalize externalities are brittle ("For a system to work well and be robust"). Internalizing externalities has a very strong evidence base (theoretical and empirical). If anyone can show me a system that internalizes externalities and doesn't look like my proposal, I will concede.

I still think it could help the case to think about how a pilot prison could be made to produce useful data.

I think we agree that you want "cheap and informative tests". Some of the data you're talking about already exists (income tax data, welfare payments, healthcare subsidies), which is cheap (because it already exists) and informative (because it's close to the data prisons would need to operate effectively).

Social impact bonds for prisons are already in place, and that system is similar to mine in some respects (though it has poor, ad hoc justification, and so the system should fail in predictable ways). You're right about me not being interested in those systems. Social impact bonds are probably the best reference class we have for my proposal. But if they failed, I wouldn't update my beliefs very much, since the theory for the impact bonds is so poor.

Hmm. Actually, you're right, you can make a small trial work.

1. Randomly select enough inmates to fill US prisons. Randomly put 50 percent of those inmates in set A, and the other inmates in set B.

2. Allow any prison to bid on the contracts of set B. The participating prisons have to bid on all of those inmates' contracts.

3. Select the prisons in such a way that those prisons are filled, and that the sum of the bids is maximized. Because all prisons are bidding, you get a thick market, and the assumptions of my argument should hold (you may have to compensate prisons who make reasonable bids).

4. Select random prisons to take on set A. Does this introduce selection bias? Yes, and that's exactly the point. In my proposal, the best prisons self-select by making higher bids (criterion 1).

5. Observe.
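To sketch step 3, a brute-force version of the bid-maximizing allocation might look like this. The prison names, bids, and capacities are invented, and the exhaustive search is only viable at toy sizes; a real implementation would need a proper combinatorial auction solver.

```python
import itertools

def best_allocation(bids, capacity):
    """Find the assignment of inmate contracts to prisons that
    maximizes the sum of bids without exceeding any prison's capacity.
    bids[prison][i] is what that prison bid for inmate i's contract."""
    prisons = list(bids)
    n_inmates = len(next(iter(bids.values())))
    best, best_total = None, float("-inf")
    for assignment in itertools.product(prisons, repeat=n_inmates):
        if any(assignment.count(p) > capacity[p] for p in prisons):
            continue  # this assignment overfills a prison
        total = sum(bids[p][i] for i, p in enumerate(assignment))
        if total > best_total:
            best, best_total = assignment, total
    return best, best_total

bids = {"A": [90, 40, 70], "B": [60, 80, 65]}  # invented bids
capacity = {"A": 2, "B": 2}
assignment, total = best_allocation(bids, capacity)
print(assignment, total)  # ('A', 'B', 'A') 240
```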

I'm interested to hear what you think.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-06T19:35:33.336Z · EA · GW
Theoretical reasons are great but actual evidence [is] more important.

Good theoretical evidence is "actual evidence". No amount of empirical evidence is up to the task of proving there are an infinite number of primes. Our theoretical argument showing there are an infinite number of primes is the strongest form of evidence that can be given.

That's not to say I think my argument is airtight, however. My argument could probably be made with more realistic assumptions (alternatively, more realistic assumptions might show my proposed system is fundamentally mistaken). My model also just describes an end state, rather than provide any help on how to build this market from scratch (implementing the system overnight would produce a market that's too thin and inefficient).

Theory can go wrong if the assumptions are false or the inferences are invalid. Both of these errors happen all the time of course, and in subtle ways too, so I agree that empirical evidence is important. But even with data, you need a model to properly interpret it. Data without a model only tells you what was measured, which usually isn't that informative. No matter what the numbers are, no one can say the UK prison system is better than that of the US without some assumptions. And no one can get data on a system that hasn't been tried before (depending on the reference classes available to you).

Consider international development. The effective altruism community has been saying for years (and backing up these claims) that in development you cannot just do things that theoretically sound like they will work (like building schools) but you need to do things that have empirical evidence of working well.

Can you show me a theoretical model of school building that would convince me that it would work when it would, in fact, fail? I don't think I would be convinced. (How can you be sure the teachers are good? How can you be sure good teachers will stay? How can you be sure students will show up?) You can't bundle all theory in the same basket (otherwise I could point to mathematics and say theory is always sound). Whether a theory is good evidence hinges on whether the assumptions are close enough to reality.

People are very very good at persuading themselves in what they believe [...]
Be wary of the risk of motivated reasoning
[But your] claim might be true and you might have good evidence for it

The process of building the argument changed what I believed the system should be. I have no dog in this fight. I claimed "If the UK system currently works well, I suspect that you have good regulators who are manually handling the shortcomings of the underlying system" for several reasons:

• It is true of most systems I've seen, including every prison system I've seen (reference class).
• For a system to work well and be robust, externalities need to be internalized. Any prison system that does so will look similar to mine, e.g. prisons would need to get paid when convicts pay tax. You would have mentioned if the UK system did so.
• The data I've seen on UK prisons shows about 50% recidivism, which I don't think would be the case under a functional system.

Don't under-value evidence, you might miss things. An underplayed strength of your case for fixing private prisons is that the solution you suggest is testable. A single pilot prison could be run, data collected, and lessons learned. To some degree this could even be done by a committed entrepreneur with minimal government support.

A pilot prison wouldn't work because it wouldn't have competitive bidding. I did mention in Aaron Gertler's comment what data should be collated prior to implementation. I'm all for looking at the relevant data.

If one [UK public prisons or UK private prisons are] clearly failing it motivates change in the other.

That doesn't seem like a good system.

The bidding process and the actualisation of losses (tied to real social interests) keep the prisons in check.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-07-04T05:42:10.603Z · EA · GW

Thanks for the kind comment.

My guess is that the US would be the best place to start (a thick "market", poor outcomes), but I'm talking about prison systems in general.

I'm not familiar with the UK system, but I haven't heard of any prison system with a solid theoretical grounding. Theory is required because we want to compare a proposed system to all other possible systems and conclude that our proposal is the best. You want theoretical reasons to believe that your system will perform well, and that good performance will endure.

Most systems can work well in the short run, but that doesn't mean they're good. For example, if I were ruled by a good king, I still wouldn't want to have a monarchy in place. If the UK system currently works well, I suspect that you have good regulators who are manually handling the shortcomings of the underlying system.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-24T02:12:08.536Z · EA · GW

I emailed Robin Hanson about my immigration idea in 2018. His post was in 2019. But to be fair, he came up with futarchy well before I started working on policy.

pay annual dividends proportional to these numbers

Doing things in proportion (rather than selling the full value) undervalues the impact of good forecasts. Since making forecasts has a cost, proportional payment (where the proportionality constant is not equal to 1) would generate inefficient outcomes: Imagine the contribution of the immigrant is $100 and it costs $80 to make the forecast; then paying forecasters anything less than 80% will cause poor outcomes.
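The arithmetic of that example, as a sketch (the function name is mine; the numbers are the ones from the example):

```python
def forecaster_profit(contribution, forecast_cost, payout_share):
    """Profit to a forecaster who is paid a fixed share of the
    immigrant's contribution and must pay the cost of forecasting."""
    return payout_share * contribution - forecast_cost

print(forecaster_profit(100, 80, payout_share=1.0))  # 20.0: forecast gets made
print(forecaster_profit(100, 80, payout_share=0.7))  # -10.0: forecast is skipped
```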

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-23T10:18:45.794Z · EA · GW
Does any country in the world create estimates of individual citizens' cost to social services? Does any country in the world have a system where companies can bid for the right to collect individuals' future tax revenue? Has anyone else (politicians, researchers, etc.) ever argued for a system resembling this one?

I'm not aware of any "net tax contribution" measurement, but I haven't done an extensive search either. I'm not aware of anyone arguing for anything close to the system I proposed. The closest (but still far away) system that I've heard of is social impact bonds, which have been implemented in Australia to some degree. In the implementations of them that I've seen, they give prisons bonuses for reaching a low level of recidivism.

There are several weaknesses in that model. Maybe one prison happens to get allocated inmates who are not likely to reoffend, in which case they get paid for no reason (bidding on inmate contracts stops this). A reoffending murderer is given the same weight as a reoffending thief, so the prison is indifferent to who they rehabilitate (valuing "equivalent compensation" of crimes stops this). And the government is not cost minimizing (again, bidding stops this). It also doesn't incentivize prisons to increase the net tax contribution of the convict (whereas mine does).

I'd have loved to hear thoughts in the post on how we might "carefully work towards" a system that works so differently from any that (AFAIK) exist in the world today. What intermediate steps get us closer to this system without creating a full transition?

I have some ideas, but no strong theoretical reasons for believing they're ideal.

You could implement a subset of the system: Just the income tax paid and welfare used. That data already exists. This data could be linked to the inmate's attributes, which would allow the prisons to have good reference classes from which they could estimate their profit-maximizing bids. The primary difference between this half-measure and the full proposal is that the prisons don't pay the government when a crime is committed (secondarily, income tax paid is different from tax incidence).

Once you have the right metrics for the full system (valuations of equivalent compensation for all crimes, and tax incidence), then if, for some reason, tax incidence data can't be generated for the past, you could keep the half-measure system in place for a few decades. But during that time, you do the measurements and store the data. When it's time to implement the full system, prisons again have good reference classes from which they can estimate their optimal bids.

If politicians are particularly skittish on the idea, they could say to the prisons "We'll pay you 80% of what we would under the current scheme, and pay you 20% of what we would under the new scheme." I wouldn't recommend this, because some viable rehabilitative measures won't be taken.

I'd have been more convinced by the post if it referred to any existing policy which mirrors any aspect of these proposals.

I think the best argument for conservativism is "Out of all possible sets of institutions, the one we're in is pretty good". But this doesn't mean "never implement radical change" (if you never do so, you'll get stuck on a local optimum). It just means that when you're implementing radical change, you have to have good evidence. I wouldn't implement this system based off the mathematical justification I've given here. But I think the problems with the argument could be solved with minimal changes to the actual system. And if one of the assumptions of the argument fails when the system is implemented, we can always change back to the existing system. But if it is successful, it has implications for other policy areas (such as immigration).

An example of radical, evidence-supported policy is the kidney-exchange reform. Rather than exchange kidneys between pairs of people, they now chain groups of people together. The new system facilitates far more exchanges between willing donors and people in need. Countries that waited for the "evidence" (by which they mean "empirical evidence") let people die because they didn't value the logical argument.

The hospital-intern matching algorithm is another example (it was also applied to student-to-school allocation).

The most prominent system that's supported on a theoretical level is free markets, which are supported by the general equilibrium model. When the assumptions of that model stray too far from reality, the system breaks down. When they're close enough, it works very well.

So the question is "Are the assumptions of the prison-policy argument close enough to reality?" I think they mostly are, and where they're not, the error is balanced by a countervailing error.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-21T09:22:17.328Z · EA · GW
I agree that seems likely, but in my mind it's not the main reason to prevent it, and treating it as an afterthought or a happy coincidence is a serious omission.

No, this consequence was one of my intentions. It was not an afterthought. Not every goal needs to be stated, they can be implied.

You measure them only by what they can do for others

...by the convict's own free will. And just because that's the only thing being measured, doesn't mean I'm disregarding everything else. Societal contribution and a person's value are different things: A person who lives separately from society has value. But I don't know how to construct a system that incorporates that value.

when they can't be used they are worthless, and need not be protected or cared for.

This is a misunderstanding of the policy. Crimes that occur within prison must be paid for, so the prisons want to protect their inmates.

there are people that you might predict are likely to die in prison

This is a good point. Maybe they should be put in a public prison.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-21T05:14:30.190Z · EA · GW

That's a good point to bring up. There are a few ends that other people assign to prisons that come to mind: rehabilitation, deterrence, punishment, and removing the criminal from the population (protecting innocents). However, some of these goals can be achieved by other systems. The death penalty is completely compatible with the system I proposed: Though you may disagree with killing criminals for other reasons, it is (at least on the face of it) a deterrent, and it doesn't need to be carried out by prisons. The law could specify ways in which the prisons must treat their inmates. For example, it could forbid prisons from providing computer access.

If the punishments are not dictated by law, they are the ad hoc decisions of the prison warden (or the decisions of the other inmates).

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-21T04:37:25.393Z · EA · GW
My instinctive emotional reaction to this post is that it worries me, because it feels a bit like "purchasing a person", or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line

No one is going to run a prison for free--there has to be some money exchanged (even in public prisons, you must pay the employees). Whether that exchange is moral depends on whether it is facilitated by a system that has good consequences. I think a worthy goal is maximizing the societal contribution of any given set of inmates without restricting their freedom after release. This goal is achieved by the system I proposed (a claim supported by my argument in the post). Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn't help maximize societal contribution. "Commodification" and "dehumanization" don't mean anything unless you can point to their concrete effects. If I've missed some avoidable concrete effect, I will concede it as a good criticism.

(indeed, parts of your analysis explicitly ignore non-monetary aspects of people's interactions with society and the state; as far as I can tell, all of it ignores the benefits to the inmate of different treatment by different prisons).

Not every desirable thing needs to be explicitly stated in the goal of the system: Good consequences can be implied. As I mentioned, inmates will probably be treated much better under my system. Another good implicit consequence of satisfying the stated goal is that prisons will pursue a rehabilitative measure if and only if it is in the interests of society (again, you wouldn't want to prevent the theft of a candy bar for a million dollars).

I account for the nonmonetary aspects of the crimes. But yes, the rest is ignored. If this ignored amount correlates with the measured factors, this is not really an issue.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-21T03:50:47.700Z · EA · GW
But perhaps this is what your remark about zero economic profit is meant to address. I didn't understand that; perhaps you can elaborate.

That's correct. The profit that most people think about is accounting profit. Accounting profit ignores opportunity costs, which are what you give up by doing what you're doing (bear with me a moment). Economic profit, on the other hand, includes these opportunity costs in the calculation. For example, let's say Tom Cruise quits acting and decides to bake cakes for a living. Even if his cake shop earns him $1M in accounting profit, he's giving up all the money he could earn acting instead. So his economic profit is actually negative.
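In code, the distinction is a single extra subtraction. The revenue, cost, and foregone-salary figures here are invented purely for illustration.

```python
def economic_profit(revenue, explicit_costs, opportunity_cost):
    """Accounting profit ignores what you gave up to be here;
    economic profit subtracts it."""
    accounting_profit = revenue - explicit_costs
    return accounting_profit - opportunity_cost

# The cake shop clears $1M on the books, but acting would have paid $20M:
print(economic_profit(revenue=3_000_000,
                      explicit_costs=2_000_000,
                      opportunity_cost=20_000_000))  # -19000000
```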

I think you could actually just fix this in the model and still reach the same conclusion (though you'd need extra assumptions to make it work). I really just wanted to introduce my idea for the prison system, rather than make an airtight argument to justify it.

...

Predicting the total present value of someone's future tax revenue minus welfare costs just seems extremely difficult in general. It will have major components that are just general macroeconomic trends or tax policy projections.

It is very difficult, but that's exactly what the financial markets do.

While you are in part rewarding people who manage to produce better outcomes, you are also rewarding people who are simply best able to spot already-existing good (or bad) outcomes, especially if you allow these things to be traded on a secondary market.

Yep. If someone is great at running prisons, you want them to do so, regardless of how good they are at predicting the future. Ideally, you would have a system that allows any good expert to thrive, even if they know little about anything outside of their expertise. But companies deal with this all the time. When they're developing a new product, they have to predict which research ventures will be fruitful and which won't be. They have to predict how well products will sell. They have to predict product breakage rates. They have to predict what advertising will work the best. All these things are hard, which is why companies fail. But they are replaced by ones that solve these problems better.

...

You say things like "whenever the family uses a government service, the government passes the cost on to the company" as if the costs of doing so are always transparent or easy (or wise) to track. I guess an easy example would be the family driving down a public road, which is in some sense "using a public service" but in a way that isn't usually priced, and arguably it would be very wasteful to do so.

Well, yeah. That's why I say to not measure those things. Only measure the big things. The reason why I mention that later in my post, rather than including it in the core argument, is because you need to "smooth things out" with simplifying assumptions to make logical arguments work.

Other examples are things like using public education, where it's understood that the cost is worth it because there's a benefit, but the benefit isn't necessarily easy to capture for the company who had to pay for the education.

You could actually use my proposal as a secondary, opt-in public education system as well.

Amount of tax paid on salary doesn't reliably reflect amount of public benefit of someone doing their job, for a variety of reasons: arguably this is some kind of economic / market failure, but it is also undeniably the reality we live in.

Sure. But I don't see why we can't fix those systems as well. (Just to clarify, ideally salaries are paid based on marginal contribution, not the total contribution of the industry--which is why we don't pay farmers an infinite amount. But I agree that not everyone is paid their marginal contribution.)

...

Once you've extended your suggestion to prisoners and immigrants, I think it's worth asking why you can't securitize anyone's future "societal contributions". One obvious drawback is that once this happens on a large enough scale, it starts distorting the incentives of the government, which is after all elected by people who are happy when taxes go down, but no longer raises (as much) additional revenue for itself when taxes go up.

Yes, that's right! But it is a solvable problem. A taxation system that financially compensates people for rule changes would mitigate this. In effect, the prisons would be paid as if the taxation system were fixed at the time the inmate contract was made.

...

In part, I think the above remark goes to the core of the philosophical legitimacy of taxation: it's worth considering how the slogan "no taxation without representation" applies to people whose taxes go to a corporation that they have no explicit control over.

I'm not sure what you're saying here. People still get to vote. The government has simply exchanged their taxation stream for its present value. Are you also saying private companies shouldn't be allowed to buy government bonds?

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-17T12:36:24.695Z · EA · GW

No, I don't think this is a problem. The prisons are competing against each other, not acting as a single, unified bloc. Why would a prison spend money on making something illegal (through lobbying) when it still has to outbid its opponents? Not only that, prisons would also have an additional liability to pay for their existing prisoners who might commit these new crimes after their release.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-17T10:00:09.667Z · EA · GW

Sorry about the confusion. I hope the new notation makes it easier. (I've removed the graphs.)

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-16T06:33:48.033Z · EA · GW

Thanks, Larks.

I [think it is] a huge mistake that reformists focus on abolishing private prisons, rather than using them.

Yeah, me too. I've told people that "I have an idea for a private prison system" and they think it's a bad idea before they've heard any details. I think the government has probably done a better job than the private sector with prisons, so it's a bit of a hard sell.

With privatisation you get what you pay for, and at the moment we pay for volume.

Correct! The performance of the private sector depends on what the system maximizes. The prison's current profit-maximizing behaviour is to minimize the prison's cost per inmate and make sure that inmates are "return customers".

might it be better to define it as the minimum amount we would have to be paid in order to release someone

No, how long someone stays in prison is in the domain of the laws, rather than the prison system. The question is "Would this prison system prevent an ideal set of laws from being implemented?" I can't see any reason why they shouldn't work together. Someone who has caused great harm, and is likely to cause more great harm, should not be allowed out. But that's for the judge to decide.

You still can!

I would, but I'm working on another post.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-16T05:24:39.514Z · EA · GW

Ah, I think we've both made the same mistake (believing recidivism rates were similar across countries). It appears recidivism has quite a large range.

"For all reported outcomes, a 2-year follow-up period was the most commonly used. The 2-year rearrest rates ranged from 26% (Singapore) to 60% (USA), two-year reconviction rates ranged from 20% (Norway) to 63% (Denmark), and two-year reimprisonment rates ranged from 14% (USA – Oregon) to 43% (Canada – Quebec, New Zealand) (see Table 3 for 2-year rates from included countries)."

In any case, my argument doesn't hinge on what the true statistics are.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-16T04:22:27.176Z · EA · GW

The graphs show what is encapsulated by what. The area to which a label corresponds is the smallest convex shape that encapsulates the label. For example, is the whole lower-left quadrant, which also encapsulates the monetary effect of crimes (which is why the monetary effect of crimes is not explicitly included in the formulas). doesn't stand for all monetary factors. It stands for every monetary factor except .

If the convict pays tax, that's a good thing for society (all else being equal). should increase. And it does, since more tax means a higher . If the convict has to use welfare, that's bad. should decrease. And it does, since you get a lower . If the convict's incarceration requires more funding, that's bad (all else being equal). should decrease. And it does, since funding is subtracted. And so on. There is part of the graph that is not included in : The nonmonetary effects that are not crimes. One of my simplifying assumptions was to ignore this section ("Let us ignore Bob’s other nonmonetary contributions for simplicity.")
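The bookkeeping in that paragraph can be sketched as follows (the function and parameter names are mine, not the post's notation):

```python
def monetary_contribution(tax_paid, welfare_received, incarceration_funding, crime_costs):
    """Monetary contribution of a convict to society: taxes paid add to it;
    welfare received, incarceration funding, and the monetary effect of
    crimes subtract from it. Nonmonetary effects other than crimes are
    ignored, per the post's simplifying assumption."""
    return tax_paid - welfare_received - incarceration_funding - crime_costs
```

Each term moves the value in the direction described above: more tax raises it, while welfare, funding, and crime costs each lower it.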

This system has an advantage over public prisons in that it provides a mechanism for choosing which research should be pursued. Should we trial inmates wearing pink uniforms? Is that worth the cost of research or not? I don't know. But there are people who are informed enough to be willing to make a bet on the matter. The people who believe strongly that they can get good outcomes will make those bets. If they're wrong, they lose money and leave the market. If they're right, they make money and gain a greater share of control.

One thing I want to note: I'm not saying "Implement the system as I've described by next month". I think the system is something to carefully work towards.

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-16T03:15:21.402Z · EA · GW

Yep, those perverse incentives that you identified are all good criticisms. If there's a theoretical model that says why a system will work, the real-world failure points of that system will be the assumptions of its model. The assumptions can be made to be true with the right regulations. My model assumes that prisons will act lawfully, which I think they will under the right punishments (since there's always a possibility of being caught).

I knew about the prison's incentive to murder high-risk inmates, but I didn't consider the others you mentioned. Maybe some activities should be illegal, such as providing inmates with lawyers, but I'd wait and see how that plays out in the real world before banning it. There's one big problem that you missed: under-reporting of crime (e.g. drug use, rape) within prison (remember, prisons have to pay the government for each crime after the auction). To prevent under-reporting, I'd consider mandating that each prison put microphones and cameras in every room. The recordings could be accessed by government auditors at any time.

I think you'd agree that the main dangers lie with high-risk inmates. To avoid that issue (at least until you have more data on how the system actually functions), you could prevent negative bids from going over a certain size (i.e. you can't bid less than dollars). The remaining inmates, whose contracts aren't bid on, would go to public prisons. The bid restrictions could be loosened as we gain a better understanding of the system and impose better regulations.

Public systems have common problems. It's hard to overthrow poorly performing incumbents: if I think I can run prisons better than the existing government, I have to overthrow the entire government in an election. The people in charge of prisons don't have the right incentives: even if they could prevent a murder for 2 million dollars, they don't have access to that capital. And sure, they could run tests to see which rehabilitation measures work best, but can they make good decisions about which rehabilitation theories to test, especially when the payoffs for the prison don't exist?

Comment by FCCC on How to Fix Private Prisons and Immigration · 2020-06-14T09:58:01.398Z · EA · GW

Oh okay, thanks for the advice. I'll see if I can get it to work.