Comment by kbog on Candidate Scoring System, Second Release · 2019-03-24T15:21:52.763Z · score: 3 (2 votes) · EA · GW
The "less authoritative" thing was meant to apply to the entire document, not just this one section.

In the preface I state that hedging language is minimized for the sake of readability.

Political policy in practice isn't "just another question to answer".

Neither is poverty alleviation or veganism or anything else in practice.

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-24T13:30:03.031Z · score: 2 (1 votes) · EA · GW

You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you're saying that the actual conclusions matter. The document has sufficient disclaimers as it is; the preface clearly says EAs could disagree. You don't see GiveWell writing "assuming that poverty is the #1 cause area, which EAs may disagree on" multiple times, and I don't treat politics with special reverence as if different rules should apply. I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.

the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don't.

If I care about total well-being, then of course people who say that some people's well-being doesn't count are going to be wrong. This includes the pro-lifers, who care about the future well-being of a particular fetus but not the future well-being of any potential child (or not as much, at least).

Comment by kbog on Apology · 2019-03-24T13:14:11.565Z · score: -22 (14 votes) · EA · GW

I'm saying he must have some idea of what the allegations are otherwise it wouldn't make sense for him to apologise.

Why? It makes sense for him to apologize as long as CEA demands that he apologize.

To be clear, is your view that this is, likely or at least with some non-negligible probability, not a real apology, and that he is not actually acknowledging wrongdoing?

There are no Real Apologies, it is naive to think otherwise and toxic to demand otherwise. Of course he is acknowledging wrongdoing, and he is acknowledging wrongdoing because he is being pressured to acknowledge wrongdoing. How much wrongdoing actually happened is largely unknown to us.

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-24T06:58:44.564Z · score: 3 (2 votes) · EA · GW

Thanks for giving such detailed feedback.

I am now leaning towards separating cash transfers/antipoverty programs away from taxation. When I next put major time into this (I'm not currently, actually) I plan to do that.

I'm always looking for other people's ratings, depending on the nature of the disagreement I can compromise between multiple ratings for better accuracy.

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-24T03:39:10.951Z · score: 2 (1 votes) · EA · GW

OK fine, in CSS3 it now simply says "Absolutist arguments for or against abortion disappear once we focus on well-being."

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-24T01:53:22.075Z · score: 6 (2 votes) · EA · GW
For instance, one section begins "Intrinsic moral rights do not exist" - that's certainly not what I believe and it seems inconsistent with other sections that talk about the "intrinsic moral weight" of animal populations, etc.

It's definitely consistent - animals can have interests without having rights, just like humans.

Rights can point in a bunch of different ways depending on the moral inclinations of the reader. And integrating and applying them to policy is a very murky issue. So even if I wanted to investigate that side of things, I would have little ability to provide useful judgments to EAs.

At some point, it would be nice to include full arguments about morality. But that's pretty low on my priorities, I don't expect to add it in the foreseeable future. Those arguments already exist elsewhere.

While the fact that you've "shown your work" with the Excel spreadsheet helps people evaluate the same issues with different weights, if someone is interested in areas that you've chosen to exclude it's less apparent how to proceed.

You can add a column beside the other topics, then insert a new row into the weight table (select three adjacent cells and press insert...). True, it's a little complicated - but I have to make the spreadsheet this way in order to make the sensitivity analysis work well.

Comment by kbog on Impact of US Strategic Power on Global Well-Being (quick take) · 2019-03-24T00:39:18.235Z · score: 2 (1 votes) · EA · GW

I would definitely say that suicide is more accepted among secular-rational people. Sure, it's not legal, but there are people pushing for it, and people have different attitudes about it (they think it's a tragedy rather than condemning it). Not really relevant here though, since I don't make a judgment on it (I just leave content like this in to cover all the bases and be ready in case I change my mind).

I would encourage you to expand on your point "I feel that people whose attitudes fall below common Western baselines of tolerance are less deserving of wealth and prosperity." It reads to me as something like ethnocentric or parochial, and it seems to run counter to the common EA principle that everyone is equally deserving of welfare, at least before we take into account instrumental effects. While we might want to incentivize greater tolerance, I wouldn't phrase it as that people who are less tolerant are less deserving of prosperity.

It's neither ethnocentric nor parochial. My view is to treat people the way I would want to treat them if I knew them well and knew what it was like to be them, aggregated in a basically utilitarian manner. I don't care to give a full explanation/argument since this could be a bad point of view to spread compared to pure neutrality. But it's my view.

Comment by kbog on Apology · 2019-03-23T21:27:27.568Z · score: 1 (13 votes) · EA · GW

C also has the highest base rate. And to me, it seems entirely typical that an organization with the CEA's sociocultural context would punish someone this much after C. (Speaking in descriptive terms; it doesn't imply it's desirable.)

Comment by kbog on Apology · 2019-03-23T20:15:27.670Z · score: -12 (10 votes) · EA · GW

Because it's helpful for people to listen to it on an occasion such as this.

Comment by kbog on Apology · 2019-03-23T19:14:08.397Z · score: -3 (11 votes) · EA · GW
Your interpretation stretches credulity

I stand by it.

If he is apologising for things he knows he has done wrong, then he must know the details of the accusations

He wrote "I appreciate that there were other interactions that made people uncomfortable and where details have not been shared with me." You are suggesting that he lied while being supervised by the CEA who did this whole thing? That wouldn't make any sense. CEA practically wrote this post.

If he does not know the details, why is he apologising?

Because if he doesn't then CEA and/or other actors will punish him more severely.

Comment by kbog on Apology · 2019-03-23T16:45:50.240Z · score: -1 (23 votes) · EA · GW

Which he does not know about. It's a blanket apology to whoever may have been affected. CEA can tell him to say whatever they want him to say; this post is obviously made under pressure and he does not know anything except that someone was uncomfortable when he expressed romantic interest. Aside from the one case on Facebook, he has not faced his accusers and does not know the nature of the charges and evidence against him. It's impossible to meaningfully admit to something when you don't know the details and people are pressuring/threatening you to do so.

(Not that I necessarily blame CEA - they are probably acting rationally.)

Comment by kbog on Apology · 2019-03-23T16:13:48.616Z · score: 3 (22 votes) · EA · GW
He has himself agreed to step back from the EA community more generally, and to step back from public life in general, which would be an odd move if these were minor misdemeanours

Not necessarily. CEA or the accusers are presumably compelling this with the threat of greater penalties. But in the current climate, it is possible to credibly threaten someone even if they haven't committed major misdemeanors (either by making a bigger story out of minor misdemeanors, or with unsubstantiated accusations of major misdemeanors).

He has admitted that there have been numerous cases of improper conduct.

If I were you, I would be very careful about putting words in people's mouths. He admitted that he was accused of numerous cases of improper conduct. He cannot admit to whether they are true or not because he does not know what the accusations are.

Comment by kbog on Apology · 2019-03-23T16:02:42.322Z · score: -21 (28 votes) · EA · GW

They have also banned numerous people from EA events. I believe it is standard procedure whenever they get evidence of something like this.

Edit: now that I have a comment near the top, I'm signal boosting this video.

Comment by kbog on Apology · 2019-03-23T08:28:32.237Z · score: -3 (13 votes) · EA · GW
So your response to sexual harassment allegations

No, it's my response to someone apologizing after they got punished for sexual harassment allegations.

is to say "please carry on doing good work minus the harassment"?

No, I said "whatever manner best grows and strengthens the movement." I don't know the answer to that and you don't either.

By analogy, if Mr Kaczynski published a letter apologising for being a terrorist, would your first response be "please keep doing good work for our community, without being a terrorist"?

If he published it now, in prison, after his conviction? Of course I would. (If I was an anarchist.)

Comment by kbog on Apology · 2019-03-23T08:20:07.322Z · score: -2 (11 votes) · EA · GW

Could you clarify the second part of your comment please.

Ted Kaczynski, a convicted terrorist, has written substantive essays on anarchist theory while in prison. If he can do that, then anyone can find a way to help animals and EA in such a vastly different case as this.

Comment by kbog on Apology · 2019-03-23T08:11:05.506Z · score: 8 (11 votes) · EA · GW

From the available description, these seem to be less serious than the majority of sexual harassment allegations, and in any case there are many ways to do meaningful work for animals without expressing interest in people over Facebook messenger (or whatever else might be the context here).

Comment by kbog on Apology · 2019-03-23T06:39:11.477Z · score: -52 (35 votes) · EA · GW

You've accomplished quite a bit in your career and I encourage you to maintain involvement in the EA community in whatever manner best grows and strengthens the movement.

Impact of US Strategic Power on Global Well-Being (quick take)

2019-03-23T06:19:33.900Z · score: 11 (5 votes)
Comment by kbog on Candidate Scoring System, Second Release · 2019-03-22T22:01:06.075Z · score: 5 (3 votes) · EA · GW

Well, he really is trying to get people to make $1 donations. He's a pretty wealthy guy but he needs 65,000 individual donors in order to be allowed into the debates.

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-22T21:58:41.344Z · score: 2 (1 votes) · EA · GW

I have written basic TLDRs for the presidency scores on page 74. Though, all I wrote for Beto was:

O’Rourke is inexperienced and has not supported animals as well as some other candidates.

He seemed pretty average in other ways.

I'll point to this section more clearly in the next version.

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-21T18:31:58.753Z · score: 6 (4 votes) · EA · GW

Heh, well at the time of this report he announced that he was giving $2 to charity for every donation he got, to try to qualify for the debates. So maybe it was that. (He still needs them - it would be useful to donate $1 to his campaign if you have a moment.)

Comment by kbog on Candidate Scoring System, Second Release · 2019-03-21T07:04:24.963Z · score: 4 (2 votes) · EA · GW

Odd. Perhaps your browser does not support OneDrive? I have Firefox; I have opened it in private mode (not logged in) and I can access them. Other people have also accessed them.

Issues may be caused by x32 Firefox or Kaspersky Password Manager: https://support.mozilla.org/en-US/questions/1115599

You can PM me your email address, and I will email the documents to you.

Candidate Scoring System, Second Release

2019-03-19T05:41:20.022Z · score: 30 (13 votes)
Comment by kbog on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T23:45:09.298Z · score: 4 (3 votes) · EA · GW

As with all ideas, the best way to handle it is to go over it if you have time or interest in doing so; if not, then say that you do not have time or interest in justifying your opinion. I don't see what the dilemma is. I don't think toonalfrink was saying "you should spend a lot of time debating everyone you disagree with," nor were they saying "you shouldn't have an opinion about something unless you spend a lot of time debating it;" those aren't implied by that conception of Epistemic Honor.

Growing the Mormon church is not an extreme viewpoint.

Comment by kbog on Candidate Scoring System, First Release · 2019-03-13T21:49:57.777Z · score: 2 (1 votes) · EA · GW

Yes he is in the 2nd version.

Comment by kbog on Candidate Scoring System, First Release · 2019-03-09T08:14:46.780Z · score: 4 (2 votes) · EA · GW

Thanks. I am now writing a pretty extensive section about this.

Comment by kbog on Making discussions in EA groups inclusive · 2019-03-06T03:15:05.878Z · score: 12 (9 votes) · EA · GW

Necessity/sufficiency tests are too narrow. Aid is neither necessary nor sufficient to end poverty, but we do it anyway.

Candidate Scoring System, First Release

2019-03-05T15:15:30.265Z · score: 11 (6 votes)
Comment by kbog on EA Survey 2018 Series: How welcoming is EA? · 2019-03-03T17:09:54.381Z · score: 2 (3 votes) · EA · GW

I haven't seen people act differently towards people who prioritize mental health. Wonder if the lower score has something to do with the kind of person who prioritizes mental health? People who are more mentally/emotionally sensitive and therefore feel less welcomed?

Comment by kbog on My new article on EA and the systemic change objection · 2019-02-26T17:19:39.041Z · score: 2 (1 votes) · EA · GW

Only a minority of EA's total impact comes from immediate poverty relief.

this takes on the burden of all the historical and qualitative arguments it has avoided e.g. the tricky stuff about the cultural impact of certain kinds of rhetoric, problems of power and compromise, the holistic and long term impact of the changes it seeks, the relationship between its goals and its methods etc

Sure; now we are really talking about donations to movement building rather than bed nets. But it's not prima facie obvious that these things will point against EA rather than in favor of it. So we start with a basic presumption that people who aim at making the world better will on average make the world better overall, compared to those who don't. Then, if the historical and qualitative arguments tell us otherwise about EA, we can change our opinion. We may update to think EA is worse than we thought before, or we may update to think that it's even better.

However, critics only seem to care about dimensions by which it would be worse. Picking out the one or two particular dimensions where you can make a provocative enough point to get published in a humanities journal is not a reliable way to approach these questions. It is easy to come up with a long list of positive effects, but "EA charity creates long-run norms of more EA charity" is banal, and nobody is going to write a paper making a thesis out of it. A balanced overview of different effects along multiple dimensions and plausible worldviews is the valid way to approach it.

reliant on deeply uncertain evidence and thus to some extent a matter of faith and commitment rather than certainty

You still don't get it. You think that if we stop at the first step - "our basic presumption that people who aim at making the world better will on average make the world better overall" - then it's some sort of big assumption or commitment. It's not. It's a prior. It is based on simple decision theory and thin social models which are wholly independent of whether you accept liberalism or capitalism or whatever. It doesn't mean they are telling you that you're wrong and have nothing to say; it means they are telling you that they haven't yet identified an overall reason to favor what you're saying over some countervailing possibilities.

You are welcome to talk about the importance of deeper investigation, but the idea that EAs are making some thick assumption about society here is baseless. Probably they don't have the time or background that you do to justify everything in terms of lengthy reflectivist theory. Expecting everyone else to spend years reading the same philosophy that you read is inappropriate; if you have a talent then just start applying it; don't attack people just because they don't know it already. (Or, worse, attack people for not simply assuming that you're right and all the other academics are wrong.)

Candidate scoring system for 2020 (second draft)

2019-02-26T04:14:06.804Z · score: 11 (5 votes)
Comment by kbog on Why I Don't Account for Moral Uncertainty · 2019-02-22T17:12:34.914Z · score: 6 (3 votes) · EA · GW

You shouldn't feel sorry about this. Why did you delete your account?? There is absolutely no reason to feel bad.

Comment by kbog on Rodents farmed for pet snake food · 2019-02-21T19:09:13.242Z · score: 2 (1 votes) · EA · GW

There is quite a bit of recent controversy about pitbulls, that seems like the right place to start.

Comment by kbog on You have more than one goal, and that's fine · 2019-02-21T08:14:45.607Z · score: 3 (2 votes) · EA · GW
Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

Not sure what he says (I haven't got the interest to search through a whole series of posts for the relevant ones, sorry), but my point, assuming antirealism (or subjectivism), seems to have been generally neglected by philosophy both inside and outside academia: just because the impartial good isn't everything doesn't mean that it is rational to generically promote other people's pursuits of their own respective partial goods. The whole reason humans created impartial morality in the first place is that we realized that it works better than for us to each pursue partialist goals.

So, regardless of most moral points of view, the shared standards and norms around how-much-to-sacrifice must be justified on consequentialist grounds.

I should emphasize that antirealism != agent-relative morality, I just happen to think that there is a correlation in plausibility here.

Comment by kbog on You have more than one goal, and that's fine · 2019-02-21T08:03:28.667Z · score: 14 (6 votes) · EA · GW
But, even in that case, it seems often the case that being emotionally healthy requires, among other things, you not to treat your emotional health as a necessary evil that you indulge.

Whether it typically requires it to the degree advocated by OP or Zvi is (a) probably false, on my basic perception, but (b) requires proper psychological research before drawing firm conclusions.

But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.

This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn't mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn't necessarily mean stressing out about it, and we shouldn't set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven't personally experienced confirmation of that point of view.

If I think I shouldn't have the mocha, I just... don't get the mocha. Sometimes I do get the mocha, but then I don't feel anxiety about it, I know I just acted compulsively or whatever and I then think "oh gee I screwed up" and get on with my life.

The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC with its 10% pledge, or Singer's "about a third" principle, is a first step in this direction.

Comment by kbog on You have more than one goal, and that's fine · 2019-02-20T07:18:57.741Z · score: 13 (16 votes) · EA · GW

There is a difference between cost effectiveness the methodology, and utilitarianism or other impartial philosophy.

You could just as easily use cost-effectiveness for personal daily goals, and some people do with things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well-adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it's not worthwhile or effective to have explicit cost-effectiveness evaluation of everything in our daily lives, though that shouldn't stop us from being ready and willing to use it where appropriate.

Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.

What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree, and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.

Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism, and moral uncertainty yields roughly similar results. Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc. (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal.

It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc.: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

It's important to remember that having parochial attitudes towards some things in your own life doesn't necessarily justify attempts to spread analogous attitudes among other people.

Comment by kbog on kbog did an oopsie! (new meat eater problem numbers) · 2019-02-20T06:57:44.362Z · score: 2 (1 votes) · EA · GW

Interesting. Y&G said that they checked for a curvilinear relationship and the results "do not suggest substantively different conclusions," which I understand to mean that there isn't good evidence for a Kuznets curve.

I did not know that India's average consumption was so low, perhaps their marginal increase in consumption is not much either.

Looking at Table 3. Am I reading this right: the relationship for low income countries is +0.0188kg (annually) per $1 annual income? That's 18.8kg from $1000, which is about an order of magnitude greater than the Y&G results.
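The conversion above can be checked with trivial arithmetic (figures taken from the comment; the variable names are mine, not from the paper):

```python
# Back-of-the-envelope check: annual meat consumption increase implied
# by the Table 3 slope for low income countries.
slope_kg_per_dollar = 0.0188   # +0.0188 kg of meat per $1 of annual income (assumed figure)
income_increase = 1000         # a $1000 rise in annual income

extra_meat_kg = slope_kg_per_dollar * income_increase
print(round(extra_meat_kg, 1))  # roughly 18.8 kg per year
```

If the Y&G estimate for the same income change is on the order of 2 kg, the factor of ~10 discrepancy noted above follows directly.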

Comment by kbog on My new article on EA and the systemic change objection · 2019-02-18T10:18:47.628Z · score: 2 (1 votes) · EA · GW

There is a critical omission in all of this line of scholarship - the authors never seem to stop to think about the long-run, systemic value of growing EA itself. They seem to think of it as a bare-bones redirection of small amounts of funds, without taking our potential seriously. It seems prima facie obvious that growing the EA movement has a higher value (person-for-person) than growing any other social or political movement, and the consequences of achieving an EA majority in any polity would be tremendous. As someone who identifies with EA first and other movements second (the framework which the author seems to assume), I think that EA is more philosophically correct than others, so its adherents will aim towards better goals. And in practice, EA appears to be more flexible, rational and productive than other movements. So donations and activism in support of EA movement growth are superior to efforts in favor of other things, assuming equal tractability.

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-16T16:26:28.296Z · score: 2 (1 votes) · EA · GW
If you fully clarify that this is a project of someone who identifies as an effective altruist, and your position may or may not be shared by all 'effective altruists', then my objections are pretty much moot.

I don't see how objections about methodology would be mooted merely because the audience knows that the methodology is disputed.

What is the benefit of including them?

That they are predictors of how good or bad a political term will be.

Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates?

If they are weighted appropriately, they will only shuffle them when it is good to do so.

There is one objective reality and our goal should be to get our understanding as close to it as possible.

Then why do you want me to flip coins or leave things to the reader's judgement...?

1.) Robust to new evidence

I recently removed that statement from the document because I decided it's an inaccurate characterization.

2.) Robust to different points of view

This also contradicts the wish for a model that is objective. "Robust to different points of view" really means making no assumptions on controversial topics, which leaves the model incomplete.

Generally speaking, I don't see justification for your point of view (that issues are generally not predictive of the value of a term... this contradicts how nearly everyone thinks about politics), nor do you seem to have a clear conception of an alternative methodology. You want me to include EA issues, yet at the same time you do not want issues in general to have any weight. Can you propose a complete framework?

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-16T05:50:40.980Z · score: 2 (1 votes) · EA · GW
Apologies for not being clear enough, I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on "not well studied topics", but I don't think it's advisable to give "official EA position" on those.

I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.

Almost by definition, the issues that are distanced from EA will tend to get less weight. So, it's not super important to include them at all, but at the same time they will not change many of the outcomes.

The model easily allows for narrowing down to core issues, so probably I (or anyone else who wants to work on it) will start by making a narrow report, and then fill it out fully if time allows. Then they can be marketed differently and people can choose which one to look at.

In addition, my first point is questioning the idea of ranking politicians based on the views they claim or seem to hold because of how unpredictable the actual actions are regardless of what they say.

So it seems like you disagree with the weight I give to issues relative to qualifications, you think it should be less than 1.8. Much less?

I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.

I think of it more as making a bet than as truly trusting them. Reports/materials certainly won't hide the possible flaws and uncertainty in the analysis.

Comment by kbog on kbog did an oopsie! (new meat eater problem numbers) · 2019-02-16T05:31:59.031Z · score: 3 (2 votes) · EA · GW

I'm not sure exactly; my perception is that (1) often they don't currently, but the new growth is more likely to be factory farming, and (2) traditional farming isn't clearly better. Farming in the West is probably covered by more welfare regulations than farming in poor countries.

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-15T16:12:43.235Z · score: 2 (1 votes) · EA · GW

I'm unclear, are you suggesting that we remove "qualifications" (like the candidate's experience, character, etc), or that we remove issues that are not well studied and connected to EA (like American healthcare, tax structure, etc), or both?

kbog did an oopsie! (new meat eater problem numbers)

2019-02-15T15:17:35.607Z · score: 31 (19 votes)
Comment by kbog on Effectively Addressing Climate Change · 2019-02-15T12:23:05.007Z · score: 4 (3 votes) · EA · GW

I downvoted this because I think it's valuable for the EA community to have a public, credible norm against violating people's legally recognized rights. Destroying property does this, so we wouldn't be a very trustworthy community if we endorsed such behavior.

Comment by kbog on Three Biases That Made Me Believe in AI Risk · 2019-02-14T14:01:48.129Z · score: 20 (11 votes) · EA · GW
On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.)

Well, OK. But in my last sentence, I wasn't talking about the use of information terminology to refer to probabilities. I'm saying I don't think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, but getting into anything else just seems fruitless when your initial priors are so far out there (and when you also tell people that you don't expect to be persuaded anyway).

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T05:17:37.327Z · score: 3 (2 votes) · EA · GW

It's worth a shot, although long run cooperation / arms races seems like one of the toughest topics to tackle (due to the inherent complexity of international relations). We should start by looking through x-risk reading lists to collect the policy arguments, then see if there is a robust enough base of ideas to motivate frequent judgements about current policy.

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T03:30:08.318Z · score: 3 (2 votes) · EA · GW
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; they'd be just as good listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

I think it would be hard to keep things tight around traditional EA issues because then we would get attacked for ignoring some people's pet causes. They'll say that EA is ignoring this or that problem and make a stink out of it.

There are some things that we could easily exclude (like health) but then it would just be a bit less accurate while still having enough breadth to include stances on common controversial topics. The value of this system over other pundits is that it's all-encompassing in a more formal way, and of course more accurate. The weighting of issues on the basis of total welfare is very different from how other people do it.

Still I see what you mean, I will keep this as a broad report but when it's done I can easily cut out a separate version that just narrows things down to main EA topics. Also, I can raise the minimum weight for issue inclusion above 0.01, to keep the model simpler and more focused on big EA stuff (while not really changing the outcomes).
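As a rough illustration of the weight cutoff described above, here is a minimal sketch (the issue names, weights, and threshold are all hypothetical stand-ins, not the model's actual numbers):

```python
# Hypothetical issue weights for a candidate scoring model.
# Issues whose weight falls below a minimum threshold are dropped,
# simplifying the model while barely changing the weighted totals.
issue_weights = {
    "animal welfare": 0.30,
    "foreign aid": 0.25,
    "existential risk": 0.25,
    "healthcare": 0.12,
    "candidate health": 0.005,  # below the cutoff, so excluded
}

MIN_WEIGHT = 0.01

def included_issues(weights, min_weight=MIN_WEIGHT):
    """Keep only issues weighty enough to affect the final scores."""
    return {k: w for k, w in weights.items() if w >= min_weight}

print(sorted(included_issues(issue_weights)))
# ['animal welfare', 'existential risk', 'foreign aid', 'healthcare']
```

Raising `MIN_WEIGHT` shrinks the set of scored issues without much affecting the weighted outcome, since excluded issues contribute little by construction.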

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree--they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

Epistemic modesty justifies convergence of opinions.

If there is empirical disagreement that cannot be solved with due diligence looking into the issue, then it's irrational for people to hold all-things-considered opinions to one side or the other.

If it's not clear which policy is better than another, we can say "there is not enough evidence to make a judgement", and leave it unscored.

So yes there is a point where I sort of say "you are scientifically wrong, this is the rational position," but only in cases where there is the clear logic and validated expert opinion to back it up to the point of agreement among good people. People already do this with many issues (consider climate change for instance, where the scientific consensus is frequently treated as an objective fact by liberal institutions and outlets, despite empirical disagreement among many conservatives).

Obviously right now the opinions and arguments are somewhat rough, but they will be more complete in later versions.

Comment by kbog on Three Biases That Made Me Believe in AI Risk · 2019-02-14T02:40:16.654Z · score: 24 (14 votes) · EA · GW
The present and past are the only tools we have to think about the future, so I expect the "pre-driven car" model to make more accurate predictions.

They'll be systematically biased predictions, because AGI will be much smarter than the systems we have now. And it's dubious that AI should be the only reference class here (as opposed to human brains vis-a-vis animal brains, most notably).

I have not yet found any argument in favour of AI Risk being real that remained convincing after the above translation.

If so, then you won't find any argument in favor of human risk being real after you translate "free will" to "acting on the basis of social influences and deterministic neurobiology", and then you will realize that there is nothing to worry about when it comes to terrorism, crime, greed or other problems. (Which is absurd.)

Also, I don't see how the arguments in favor of AI risk rely on language like this; are you referring to the real writing that explains the issue (e.g. papers from MIRI, or Bostrom's book) or are you just referring to simple things that people say on forums?

It seems absurd to assign AI-risk less than 0.0000000000000000000000000000001% probability because that would be a lot of zeros.

The reality is actually the reverse: people are prone to assert arbitrarily low probabilities because it's easy, but justifying a model with such a low probability is not. See: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

And, after reading this, you are likely to still underestimate the probability of AI risk, because you've anchored yourself at 0.00000000000000000000000000000000000001% and won't update sufficiently upwards.

Anchoring can pull in either direction depending on context, and it's infeasible to guess its effect in a general sense.

I'm not sure about your blog post because you are talking about "bits" which nominally means information, not probability, and it confuses me. If you really mean that there is, say, a 1 - 2^(-30) probability of extinction from some cause other than x-risk then your guesses are indescribably unrealistic. Here again, it's easy to arbitrarily assert "2^(-30)" even if you don't grasp and justify what that really means.
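For concreteness, here is a quick sanity check on what a figure like 2^(-30) means when read as a probability rather than as bits of information (a simple arithmetic illustration, not anything from the original thread):

```python
import math

# 30 "bits" of surprise corresponds to a probability of 2^(-30).
bits = 30
p = 2.0 ** -bits        # roughly one in a billion
odds = 1 / p            # the implied odds against the event

# Going the other way: a probability implies a bit count.
recovered_bits = -math.log2(p)

print(p)                # 9.313225746154785e-10
print(round(odds))      # 1073741824
print(recovered_bits)   # 30.0
```

So asserting 2^(-30) amounts to claiming roughly billion-to-one odds, which is the kind of confidence that needs a justified model behind it, not just a small-sounding number.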

Comment by kbog on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T02:11:26.037Z · score: 3 (2 votes) · EA · GW

Yes, and I will pay attention to everything anyway unless this thread gets super unwieldy. I am mainly suggesting that people focus on the things that will make the biggest difference.

Comment by kbog on Why do you reject negative utilitarianism? · 2019-02-13T14:09:13.457Z · score: 4 (2 votes) · EA · GW

When I know someone closely, I value their life and experiences, intrinsically. I don't feel as if I wish they had never been born, nor do I wish to kill them.

And it's straightforward to presume that, with people who I don't know closely, I would feel similarly about them if I knew them well.

So if I want to treat people consistently with my basic inclinations, I should not be NU towards them.

Comment by kbog on The Narrowing Circle (Gwern) · 2019-02-13T10:58:12.921Z · score: 7 (5 votes) · EA · GW

It's hard to generalize across times and cultures, but ephebophiles and hebephiles seem to be treated much more harshly these days. Often they are placed in the category of pedophiles (who also might have been more tolerated in the past, I'm not sure).

I think historical immigrants to the US had to deal with more frequent racism at the social level. Historical immigration policy might have been guided by economic need rather than moral values.

Comment by kbog on The Narrowing Circle (Gwern) · 2019-02-13T10:52:16.907Z · score: 7 (5 votes) · EA · GW

It seems like a fair assumption that prisoners are broadly treated better today (in the West) than they used to be. Sexual abuse and solitary confinement were probably more common back in the day.

A system for scoring political candidates. RFC (request for comments) on methodology and positions

2019-02-13T10:35:46.063Z · score: 24 (11 votes)
Comment by kbog on Vox's "Future Perfect" column frequently has flawed journalism · 2019-02-13T09:47:16.558Z · score: 6 (2 votes) · EA · GW

Re: #1, the overall distribution of articles on different topics is not particularly impressive. There are other outlets (Brookings, at least) which focus more on global poverty.

I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a "strange thing to focus on" if you assume, with great confidence, utilitarianism to be true.

I think that arguing from moral theories is not really the right approach here; instead we can focus on the immediate moral issue - whether it is better to help someone merely because they or their ancestors were historically mistreated, holding welfare changes equal. There is a whole debate to be had there, which has plenty of room for eclectic arguments that don't assume utilitarianism per se.

The idea that it's not better is consistent with any consequentialism which looks at aggregate welfare rather than group fairness, and some species of nonconsequentialist ethics (there is typically a lot of leeway and vagueness in how these informal ethics are interpreted and applied, and academic philosophers tend to interpret them in ways that reflect their general political and cultural alignment).

I totally agree with you that "unequal racial distribution can have important secondary effects", and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds.

Sure, but practically everything should get attention by this rationale. The real question is - how do we want to frame this stuff? What do we want to implicitly suggest to be the most important thing?

Comment by kbog on Vocational Career Guide for Effective Altruists · 2019-02-13T09:26:56.003Z · score: 3 (2 votes) · EA · GW

Go ahead and write one! Do some research/modeling and share your findings. I did, and you can too.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-02-13T09:24:24.750Z · score: 2 (1 votes) · EA · GW

Highlight your text and then select the hyperlink icon in the pop-up bar.

Vocational Career Guide for Effective Altruists

2019-01-26T11:16:20.674Z · score: 26 (19 votes)

Vox's "Future Perfect" column frequently has flawed journalism

2019-01-26T08:09:23.277Z · score: 33 (30 votes)

A spreadsheet for comparing donations in different careers

2019-01-12T07:32:51.218Z · score: 6 (1 votes)

An integrated model to evaluate the impact of animal products

2019-01-09T11:04:57.048Z · score: 33 (19 votes)

Response to a Dylan Matthews article on Vox about bipartisanship

2018-12-20T15:53:33.177Z · score: 56 (35 votes)

Quality of life of farm animals

2018-12-14T19:21:37.724Z · score: 3 (5 votes)

EA needs a cause prioritization journal

2018-09-12T22:40:52.153Z · score: 3 (13 votes)

The Ethics of Giving Part Four: Elizabeth Ashford on Justice and Effective Altruism

2018-09-05T04:10:26.243Z · score: 5 (5 votes)

The Ethics of Giving Part Three: Jeff McMahan on Whether One May Donate to an Ineffective Charity

2018-08-10T14:01:25.819Z · score: 2 (2 votes)

The Ethics of Giving part two: Christine Swanton on the Virtues of Giving

2018-08-06T11:53:49.744Z · score: 4 (4 votes)

The Ethics of Giving part one: Thomas Hill on the Kantian perspective on giving

2018-07-20T20:06:30.020Z · score: 7 (7 votes)

Nothing Wrong With AI Weapons

2017-08-28T02:52:29.953Z · score: 14 (20 votes)

Selecting investments based on covariance with the value of charities

2017-02-04T04:33:04.769Z · score: 5 (7 votes)

Taking Systemic Change Seriously

2016-10-24T23:18:58.122Z · score: 7 (11 votes)

Effective Altruism subreddit

2016-09-25T06:03:27.079Z · score: 9 (9 votes)

Finance Careers for Earning to Give

2016-03-06T05:15:02.628Z · score: 9 (11 votes)

Quantifying the Impact of Economic Growth on Meat Consumption

2015-12-22T11:30:42.615Z · score: 22 (30 votes)