A system for scoring political candidates. RFC (request for comments) on methodology and positions

post by kbog · 2019-02-13T10:35:46.063Z · 14 comments

Wouldn't it be great if we had a formal rigorous system for determining who to vote and lobby for?

Would you like EA discourse on politics to be smarter and closer to double-cruxing than it was in 2016?

Wouldn't it be neat and convenient if we could share simple ratings on politicians, the way that lobby groups and think tanks do (like the letter grades given by the NRA etc)?

I have created a model that will score American presidential hopefuls, though it can be adapted with more or less difficulty for other kinds of elections.

But before we actually use it at all, we need to get the foundations right. I have created a rough draft report which justifies appropriate policy desiderata on the basis of EA goals, and assigns cardinal weights for how important they are. E.g.: our policy goal on animal farming is to reduce numbers and improve conditions, and the weight is calculated on the basis of the number of farm animals in the United States. I do this across multiple issues. Here is the report. Remember, this is a rough draft and many of the positions are not well established.

The scores are aggregated to create an overall score for each politician, as can be seen in the Excel spreadsheet.
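The post doesn't spell out the aggregation formula (it lives in the spreadsheet), but the description above suggests a weighted sum over issues. A minimal sketch in Python, with all issue names, weights, and position scores invented for illustration:

```python
# Sketch of a weighted-sum candidate score, mirroring the kind of
# aggregation described above. Issue names, weights, and position
# scores are invented for illustration; the real model lives in
# the linked spreadsheet.

# Cardinal weight per issue (how much the issue matters overall).
ISSUE_WEIGHTS = {
    "immigration": 2.0,
    "animal_farming": 1.5,
    "education": 0.01,
}

def overall_score(position_scores):
    """Aggregate per-issue position scores (say, -1 to +1, where +1
    means fully aligned with the model's policy goal on that issue)
    into a single weighted total."""
    return sum(
        ISSUE_WEIGHTS[issue] * score
        for issue, score in position_scores.items()
        if issue in ISSUE_WEIGHTS
    )

candidate = {"immigration": 0.5, "animal_farming": -0.2, "education": 1.0}
# 2.0*0.5 + 1.5*(-0.2) + 0.01*1.0 = 0.71
print(overall_score(candidate))
```

A linear weighted sum like this makes it easy to see how sensitive a candidate's ranking is to any single weight, which is exactly what the RFC below is soliciting feedback on.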

Now I am looking for people to share information, sources, and arguments that can be used to improve the model, as my knowledge of specific issues is limited and my estimation of weights is very rough.

Comments on any issue are generally welcome but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01) so it won't make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention because it has a very high weight right now (2).

More refined estimates of the weights to assign to various issues are especially welcome. For some issues I have nothing to go on but my own guess, and therefore your own guesses will be useful as well (I will take some kind of average). Also, there are a few big topics which I have not yet included at all because I lack information on them: Social Security, unemployment benefits, and abortion. For the first two, I'm not well read on what politicians and economists typically say. For the last one, I need some clarity on the impact of abortion access on the size of the human population, because that will be a crucial consideration for evaluating abortion policy under standard utilitarian assumptions.
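The paragraph above only says "some kind of average" of readers' weight guesses will be taken. One plausible choice (an assumption on my part, not the author's stated method) is a geometric mean, since weight estimates for an issue can easily differ by orders of magnitude:

```python
import math

def pooled_weight(guesses):
    """Pool several people's guesses for an issue's weight using a
    geometric mean, which is less dominated by the largest guess
    than an arithmetic mean when estimates span orders of magnitude."""
    return math.prod(guesses) ** (1 / len(guesses))

# One person guesses 0.1, another guesses 10; the pooled weight is 1.0.
print(pooled_weight([0.1, 10.0]))
```

An arithmetic mean of those same two guesses would give 5.05, i.e. the high guess would almost entirely determine the result.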

I will review your comments, respond to you, and use the ideas to update the model and make it more rigorous. Hopefully we will get to the point where we can comfortably move ahead with evaluations of specific candidates.

14 comments

Comments sorted by top scores.

comment by Aidan O'Gara · 2019-02-14T02:56:53.214Z

Really cool idea! Two possibilities:

1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; they'd do just as well listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree--they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

comment by Bluefalcon · 2019-02-14T04:36:18.937Z

This is awesome and I've been wanting something like it but am too lazy to create it myself. So I'm really glad kbog did.

I vote for continuing to include weightings for e.g. candidate health. The interesting question is who is actually likely to do the most good, not who believes the best things. So to model that well you need to capture any personal factors that significantly affect their probability of carrying out their agenda.

I think AI safety and biorisk deserve some weighting here even if candidates aren't addressing them directly. You could use proxy issues that the candidates are more likely to have records on and that relevant experts have a consensus are helpful or unhelpful (e.g. actions likely to lead to an arms race with China). And then adjust for uncertainty by giving them a somewhat lower weight than you would give a direct vote on something like creating an unfriendly AI.

comment by kbog · 2019-02-14T05:17:37.327Z

It's worth a shot, although long run cooperation / arms races seems like one of the toughest topics to tackle (due to the inherent complexity of international relations). We should start by looking through x-risk reading lists to collect the policy arguments, then see if there is a robust enough base of ideas to motivate frequent judgements about current policy.

comment by kbog · 2019-02-14T03:30:08.318Z
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; they'd do just as well listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

I think it would be hard to keep things tight around traditional EA issues because then we would get attacked for ignoring some people's pet causes. They'll say that EA is ignoring this or that problem and make a stink out of it.

There are some things that we could easily exclude (like health) but then it would just be a bit less accurate while still having enough breadth to include stances on common controversial topics. The value of this system over other pundits is that it's all-encompassing in a more formal way, and of course more accurate. The weighting of issues on the basis of total welfare is very different from how other people do it.

Still, I see what you mean. I will keep this as a broad report, but when it's done I can easily cut out a separate version that narrows things down to the main EA topics. Also, I can raise the minimum weight for issue inclusion above 0.01, to keep the model simpler and more focused on big EA issues (while not really changing the outcomes).

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree--they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

Epistemic modesty justifies convergence of opinions.

If there is empirical disagreement that cannot be solved with due diligence looking into the issue, then it's irrational for people to hold all-things-considered opinions to one side or the other.

If it's not clear which policy is better than another, we can say "there is not enough evidence to make a judgement", and leave it unscored.

So yes there is a point where I sort of say "you are scientifically wrong, this is the rational position," but only in cases where there is the clear logic and validated expert opinion to back it up to the point of agreement among good people. People already do this with many issues (consider climate change for instance, where the scientific consensus is frequently treated as an objective fact by liberal institutions and outlets, despite empirical disagreement among many conservatives).

Obviously right now the opinions and arguments are somewhat rough, but they will be more complete in later versions.

comment by Milan_Griffes · 2019-02-13T17:45:09.289Z
Comments on any issue are generally welcome but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01) so it won't make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention because it has a very high weight right now (2).

I think this begs the question.

If modeler attention is distributed in proportion to the model's current weighting (such that discussion of highly weighted issues receives more attention than discussion of low-weighted issues), it'll be hard to identify mistakes in the current weighting.

comment by John_Maxwell (John_Maxwell_IV) · 2019-02-14T00:56:19.715Z

Presumably if the argument is for why the weight should be higher, then kbog will pay attention?

comment by kbog · 2019-02-14T02:11:26.037Z

Yes, and I will pay attention to everything anyway unless this thread gets super unwieldy. I am mainly suggesting that people focus on the things that will make the biggest difference.

comment by mirgee · 2019-02-15T15:44:31.222Z

I would really like to support the idea of keeping EA's focus on issues and solutions. Predicting the effects of an altruistic action is difficult in this complex world, but still easier than predicting the actions of another person, and even more so for a politician in the current system of irrational agents playing political games with incomplete, imperfect information (and a lack of accountability). We may rank candidates at least roughly according to what they say they intend to do, but that estimate carries so much error as to be hardly valuable. Supporting the intentions themselves of course makes sense in cases with hard, relatively long-term empirical evidence of enhancing general well-being, such as free trade.

Moreover, we may want to at least consider the effect of supporting politicians who express controversial opinions on issues unrelated to the values and causes of EA, especially on the basis of a highly subjective and fallible ranking system. Personally, I came to EA in part because I love how (mostly) apolitical this community is, and maybe I am not alone.

comment by kbog · 2019-02-15T16:12:43.235Z

I'm unclear, are you suggesting that we remove "qualifications" (like the candidate's experience, character, etc), or that we remove issues that are not well studied and connected to EA (like American healthcare, tax structure, etc), or both?

comment by mirgee · 2019-02-15T21:24:41.474Z

Apologies for not being clear enough. I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on not-well-studied topics, but I don't think it's advisable to give an "official EA position" on those.

In addition, my first point is questioning the idea of ranking politicians based on the views they claim or seem to hold because of how unpredictable the actual actions are regardless of what they say. I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.

comment by kbog · 2019-02-16T05:50:40.980Z
Apologies for not being clear enough, I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on "not well studied topics", but I don't think it's advisable to give "official EA position" on those.

I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.

Almost by definition, the issues that are distanced from EA will tend to get less weight. So, it's not super important to include them at all, but at the same time they will not change many of the outcomes.

The model easily allows for narrowing down to core issues, so I (or anyone else who wants to work on it) will probably start by making a narrow report, and then fill it out fully if time allows. The two versions can then be marketed differently and people can choose which one to look at.

In addition, my first point is questioning the idea of ranking politicians based on the views they claim or seem to hold because of how unpredictable the actual actions are regardless of what they say.

So it seems like you disagree with the weight I give to issues relative to qualifications, you think it should be less than 1.8. Much less?

I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.

I think of it more as making a bet than as truly trusting them. Reports/materials certainly won't hide the possible flaws and uncertainty in the analysis.

comment by mirgee · 2019-02-16T11:39:08.600Z
I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.

Thank you. If you fully clarify that this is a project of someone who identifies as an effective altruist, and that your position may or may not be shared by all 'effective altruists', then my objections are pretty much moot. I really want to reiterate how much I think EA would gain by staying away from most political issues.

Almost by definition, the issues that are distanced from EA will tend to get less weight. So, it's not super important to include them at all, but at the same time they will not change many of the outcomes.

What is the benefit of including them? Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates? At the moment, I would suggest ranking the politicians who would get shuffled equally, and letting the reader decide for themselves (or flip a coin).

Then they can be marketed differently and people can choose which one to look at.

This seems like a very bad idea. It is similar to newspapers purporting to be the source of "factual information" while selling different versions of articles based on readers' points of view. There is one objective reality, and our goal should be to get our understanding as close to it as possible. Again, I would instead suggest setting a goal of making a model which is

1.) Robust to new evidence

2.) Robust to different points of view

This will require some tradeoffs (or may not be possible at all), but only then can you get rid of the cognitive dissonance in the second paragraph of your report and confidently say "If you use the model correctly, and one politician scores better than another, then he/she is better, full stop".

comment by kbog · 2019-02-16T16:26:28.296Z
If you fully clarify that this is a project of someone who identifies as an effective altruist, and your position may or may not be shared by all 'effective altruists', than my objections are pretty much moot.

I don't see how objections about methodology would be mooted merely because the audience knows that the methodology is disputed.

What is the benefit of including them?

That they are predictors of how good or bad a political term will be.

Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates?

If they are weighted appropriately, they will only shuffle them when it is good to do so.

There is one objective reality and our goal should be to get our understanding as close to it as possible.

Then why do you want me to flip coins or leave things to the reader's judgement...?

1.) Robust to new evidence

I recently removed that statement from the document because I decided it's an inaccurate characterization.

2.) Robust to different points of view

This also contradicts the wish for a model that is objective. "Robust to different points of view" really means making no assumptions on controversial topics, and incomplete.

Generally speaking, I don't see justification for your point of view (that issues are generally not predictive of the value of a term... this contradicts how nearly everyone thinks about politics), nor do you seem to have a clear conception of an alternative methodology. You want me to include EA issues, yet at the same time you do not want issues in general to have any weight. Can you propose a complete framework?

comment by mirgee · 2019-02-16T18:21:33.885Z
I don't see how objections about methodology would be mooted merely because the audience knows that the methodology is disputed.

That's not what I'm saying at all. How is a suggestion to include a disclaimer an objection about methodology? It is not that unclear. Am I being read?

If they are weighted appropriately, they will only shuffle them when it is good to do so.

What is the methodology for determining the weights?

Then why do you want me to flip coins or leave things to the reader's judgement...?

Because leaving the decision to chance in the face of uncertainty may sometimes be a good strategy? And I suggest leaving things to the reader's judgement when there is still considerable uncertainty / insufficient evidence for taking any position. Am I being considered at all, or have you just decided to take a hostile position for some reason...?

I recently removed that statement from the document because I decided it's an inaccurate characterization.

I agree.

This also contradicts the wish for a model that is objective.

Again, nowhere have I expressed that wish.

... really means making no assumptions on controversial topics, and incomplete.

I agree.

... issues are generally not predictive of the value of a term...

That is a vague statement which I didn't make.

.. yet at the same time you do not want issues in general to have any weight...

Again, never said that. That probably refers to my first post where I was talking about general EA position, which is moot once you include a disclaimer.

However, I apologize for not taking the time to do at least some research before commenting. I am not versed in political science at all. Your model (or a future version of it) may be very well justifiable. I have some experience in game theory, which maybe biased me toward seeing the problem as more complicated than it is at first, and, even more importantly, I also have a truckload of other biases I should try to become more aware of. For example, I thought that if you take a random politician's pre-election promise on a random topic, it is likely to be left unaddressed or broken if they are elected, due to a lack of accountability and the incentive to appeal to voters (I know of some examples where that happened in the past, which of course doesn't mean it's likely in general). A quick search suggested this was probably wrong, so again, I apologize.

I will do some research and thinking when I have time and come back when I have some (hopefully) more informed ideas, and I will definitely do that in the future. However, I don't retract the objections which don't rely on the unpredictability of politicians' decisions.