On AI Weapons 2019-11-13T12:48:16.351Z · score: 36 (13 votes)
New and improved Candidate Scoring System 2019-11-12T08:49:34.392Z · score: 14 (18 votes)
Four practices where EAs ought to course-correct 2019-07-30T05:48:57.665Z · score: 52 (57 votes)
Extinguishing or preventing coal seam fires is a potential cause area 2019-07-07T18:42:22.548Z · score: 55 (28 votes)
Should we talk about altruism or talk about justice? 2019-07-03T00:20:40.213Z · score: 22 (19 votes)
Consequences of animal product consumption (combined model) 2019-06-15T14:46:19.564Z · score: 15 (15 votes)
A vision for anthropocentrism to supplant wild animal suffering 2019-06-06T00:01:43.953Z · score: 28 (14 votes)
Candidate Scoring System, Fifth Release 2019-06-05T08:10:38.845Z · score: 11 (10 votes)
Overview of Capitalism and Socialism for Effective Altruism 2019-05-16T06:12:39.522Z · score: 39 (19 votes)
Structure EA organizations as WSDNs? 2019-05-10T20:36:19.032Z · score: 8 (7 votes)
Reasons to eat meat 2019-04-21T20:37:51.671Z · score: 45 (54 votes)
Political culture at the edges of Effective Altruism 2019-04-12T06:03:45.822Z · score: 8 (22 votes)
Candidate Scoring System, Third Release 2019-04-02T06:33:55.802Z · score: 11 (8 votes)
The Political Prioritization Process 2019-04-02T00:29:43.742Z · score: 9 (3 votes)
Impact of US Strategic Power on Global Well-Being (quick take) 2019-03-23T06:19:33.900Z · score: 13 (9 votes)
Candidate Scoring System, Second Release 2019-03-19T05:41:20.022Z · score: 30 (15 votes)
Candidate Scoring System, First Release 2019-03-05T15:15:30.265Z · score: 11 (6 votes)
Candidate scoring system for 2020 (second draft) 2019-02-26T04:14:06.804Z · score: 11 (5 votes)
kbog did an oopsie! (new meat eater problem numbers) 2019-02-15T15:17:35.607Z · score: 31 (19 votes)
A system for scoring political candidates. RFC (request for comments) on methodology and positions 2019-02-13T10:35:46.063Z · score: 24 (11 votes)
Vocational Career Guide for Effective Altruists 2019-01-26T11:16:20.674Z · score: 29 (20 votes)
Vox's "Future Perfect" column frequently has flawed journalism 2019-01-26T08:09:23.277Z · score: 33 (30 votes)
A spreadsheet for comparing donations in different careers 2019-01-12T07:32:51.218Z · score: 6 (1 votes)
An integrated model to evaluate the impact of animal products 2019-01-09T11:04:57.048Z · score: 36 (20 votes)
Response to a Dylan Matthews article on Vox about bipartisanship 2018-12-20T15:53:33.177Z · score: 56 (35 votes)
Quality of life of farm animals 2018-12-14T19:21:37.724Z · score: 3 (5 votes)
EA needs a cause prioritization journal 2018-09-12T22:40:52.153Z · score: 3 (13 votes)
The Ethics of Giving Part Four: Elizabeth Ashford on Justice and Effective Altruism 2018-09-05T04:10:26.243Z · score: 6 (6 votes)
The Ethics of Giving Part Three: Jeff McMahan on Whether One May Donate to an Ineffective Charity 2018-08-10T14:01:25.819Z · score: 2 (2 votes)
The Ethics of Giving part two: Christine Swanton on the Virtues of Giving 2018-08-06T11:53:49.744Z · score: 4 (4 votes)
The Ethics of Giving part one: Thomas Hill on the Kantian perspective on giving 2018-07-20T20:06:30.020Z · score: 7 (7 votes)
Nothing Wrong With AI Weapons 2017-08-28T02:52:29.953Z · score: 17 (21 votes)
Selecting investments based on covariance with the value of charities 2017-02-04T04:33:04.769Z · score: 5 (7 votes)
Taking Systemic Change Seriously 2016-10-24T23:18:58.122Z · score: 7 (11 votes)
Effective Altruism subreddit 2016-09-25T06:03:27.079Z · score: 9 (9 votes)
Finance Careers for Earning to Give 2016-03-06T05:15:02.628Z · score: 9 (11 votes)
Quantifying the Impact of Economic Growth on Meat Consumption 2015-12-22T11:30:42.615Z · score: 22 (30 votes)


Comment by kbog on Applying EA to climate change · 2019-11-18T04:35:01.588Z · score: 12 (4 votes) · EA · GW

FYI, some prior work (which partially agrees with you):

Also, a bit of press (and research) on trees:

It's encouraging to see agreement across multiple estimates on the effectiveness of forestry. On clean energy, note that spending a few dollars to lobby the government to in turn spend a lot of money might be a lot more effective (2nd link); perhaps that explains the disparity.

I think your calculation for the cost of promoting plant-based diets is conceptually mistaken. The amount of lost meat-industry revenue is irrelevant; we want to know how much of our money will have to be spent. For that, it's relatively straightforward to look up the advertising expenditure required to reduce meat consumption (Animal Charity Evaluators has done this research).

This in turn makes me suspect that some of your cost estimates for other technologies have a similar problem, measuring 'cost' in some other economic or business sense rather than in terms of what we really care about: how much we would actually have to spend. These calculations are important to include in the analysis.

Also, the 1800x spread between silvopasture and roofs should be considered a significant overestimate due to the optimizer's curse.
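To see why the optimizer's curse inflates the apparent spread, here is a quick simulation (all numbers purely illustrative, not taken from the climate analysis): ten options with identical true value, each estimated with noise. The option with the highest estimate is systematically an overestimate.

```python
import random

random.seed(0)

# Ten hypothetical interventions, all with the SAME true cost-effectiveness,
# each measured with independent Gaussian noise. Selecting the intervention
# with the highest *estimate* systematically overstates its true value:
# the "winner" tends to be the one whose noise was most favorable.
TRUE_VALUE = 1.0
NOISE_SD = 0.5
N_OPTIONS = 10
TRIALS = 10_000

selected = []
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_OPTIONS)]
    selected.append(max(estimates))

# How much the apparent best option overstates its true value, on average.
bias = sum(selected) / TRIALS - TRUE_VALUE
print(f"average overestimate of the apparent best option: {bias:.2f}")
```

With these assumed numbers the bias comes out around 0.7-0.8 (the expected maximum of ten standard normal draws is roughly 1.54, scaled by the noise), even though every option is truly identical. The same effect stretches estimated ratios between a lineup's best and worst options.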

Comment by kbog on New and improved Candidate Scoring System · 2019-11-16T02:52:18.881Z · score: 4 (3 votes) · EA · GW

For one thing, it's not clear how to translate polls into probabilities. Let's assume for the sake of argument that when Jack is at 5% in the polls and Jill is at 4%, Jack is necessarily more likely to win the primaries than Jill. We still don't know how much more likely, or what their actual probabilities of victory are. Translating polling into expectations (which is necessary for calculating expected value) is difficult; better to let the prediction markets do that efficiently than to rely on my own guess.
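A toy simulation makes the gap between poll shares and win probabilities concrete. Everything here is an assumption for illustration — the candidate names, the poll numbers, and especially the single-Gaussian swing model:

```python
import random

random.seed(1)

# A hypothetical primary field. The swing model (one Gaussian shock per
# candidate between now and the vote) is a crude assumption; the point is
# only that poll shares do not directly give win probabilities.
polls = {"Frontrunner": 0.30, "Second": 0.25, "Third": 0.15,
         "Jack": 0.05, "Jill": 0.04}
SWING_SD = 0.08   # assumed std. dev. of polling movement before the vote
TRIALS = 100_000

wins = dict.fromkeys(polls, 0)
for _ in range(TRIALS):
    shares = {name: p + random.gauss(0, SWING_SD) for name, p in polls.items()}
    wins[max(shares, key=shares.get)] += 1

for name in polls:
    print(f"{name}: polling {polls[name]:.0%}, "
          f"wins {wins[name] / TRIALS:.1%} of simulations")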

Second, prediction markets take into account lots of information beyond the superficial polling numbers: who are voters' 2nd and 3rd choices? Which candidate has more momentum? Which candidate is doing best in Iowa/NH? Who has more money? Etc.

The reason prediction markets ever tell you something different from the polls is that they are taking into account issues like the above. As I understand it, their track record is very good.

Comment by kbog on On AI Weapons · 2019-11-13T21:21:17.478Z · score: 5 (3 votes) · EA · GW

See "crime and terrorism" section.

Comment by kbog on Candidate Scoring System, Third Release · 2019-11-12T08:49:40.679Z · score: 2 (1 votes) · EA · GW

Yes, I am killing old files and now just have a permanent link to the newest version. Sorry for the confusion. See here:

Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-11-12T08:20:25.983Z · score: 2 (1 votes) · EA · GW

But since the end of the Cold War, America has had little reason to pursue regime change in Cuba. In fact we would probably prefer to avoid a refugee crisis.

Consider how the US acted towards China after the Sino-Soviet split. We warmed relations quite a bit, pressing mildly for liberalization but not for regime change. From the Cuban perspective I wouldn't see it as an existential threat, unless I simply refused to tolerate the loss of my personal political power (which, admittedly, may be their reasoning).

Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-11-12T08:09:33.165Z · score: 2 (1 votes) · EA · GW
Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-11-12T08:01:36.553Z · score: 2 (1 votes) · EA · GW
In a lot of your analysis though, you do seem to caricature Keynesian economics as non-mainstream.

In the old version of the report, in maybe a couple of sentences, I incorrectly conflated the status of Keynesianism in general with that of Post-Keynesianism in particular. In reality, New Keynesianism is accepted whereas Post-Keynesian ideas are heterodox, as I describe in the above comment. I have already updated the language in revisions. But this error of mine didn't matter anyway, because I wasn't yet judging politicians for their stances on economic stimulus bills (although that is something to add in the future). If I had been judging politicians on Keynesian stimulus, I would have looked more carefully first.

If Post-Keynesian ideas are correct, that could change a lot of things because it would mean that lots of government spending all the time will stimulate the economy. However, I am pretty sure this is not commonly accepted.

I am glad you agree on Drive vs OneDrive.

Comment by kbog on Some personal thoughts on EA and systemic change · 2019-11-12T07:49:39.265Z · score: 3 (2 votes) · EA · GW

Shulman's not speaking only in terms of donations. You must recognize this since you quote "cost ... per marginal vote". It seems like you're taking issue with some of the basic economic concepts like efficiency and marginalism. This is something that other critics have done. However I have not seen any good defense of that point of view.

I think the EA community currently has a limited amount to say to anyone with power.

Please let this myth die. For yet another example, I have 200 pages judging policies & politicians:

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-13T06:52:41.650Z · score: 2 (1 votes) · EA · GW

Yes, policy can be changed for sure. I was just referring to actually changing minds in the community, as he said - "Probably the best starting point would be to get the AI community on board with such a thing. It seems impossible today that consensus could be built about such a thing, but the presidency is a large pulpit."

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T19:13:48.757Z · score: 2 (1 votes) · EA · GW

There are numerous minor, subtle ways that EAs reduce AI risk. Small in comparison to a research career, but large in comparison to voting. (Voting can actually be one of them.)

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T18:56:40.678Z · score: 4 (2 votes) · EA · GW
Your analysis seems to rely heavily on the judgement of r/neoliberal.

Very little, actually. The only time I actually cite the group is the one thread where people are discussing baby bonds.

It's true that in many cases, we've arrived at similar points of view, but I have real citations in those cases.

As I say in the beginning of the "evaluations of sources" sections, a group like that is more for "secondary tasks such as learning different perspectives and locating more reliable sources." The kind of thing that usually doesn't get mentioned at all in references. I'm just doing more work to make all of it explicit.

And frankly I don't spend much time there anyway. I haven't gone to their page in months.

Do you think that they are poor even by the standards of social media groups? Like, compared to r/irstudies or others?

I would have thought that actually it's the social democracies which follow very technocratic keynesian economics that produce better economic outcomes (idk, greater growth, less unemployment, more entrepreneurship, haha I have no idea how true any of this is tbh - I just presume).

It seems like most economists approve of them in times of recession (New Keynesianism), but don't believe in blasting stimulus all the time (which would be post-Keynesianism). I'm a little unsure of the details here and may be oversimplifying. Frankly, it's beyond my credentials to get into the weeds of macroeconomic theory (though if someone has such credentials, they are welcome to become involved...). I'm more concerned about the judgment of specific policy choices - it's not like any of the candidates have expressed views on macroeconomic theory, they just talk about policies.

I found research indicating that countries with more public spending have less economic growth:

Perhaps the post-Keynesian economists have some reasons to disagree with this work, but I would need to see that they have some substantive critique or counterargument.

In any case, if 90% of the field believes something and 10% disagrees, we must still go by the majority, unless somehow we discover that the minority really has all the best arguments and evidence.

Of course, just because economic growth is lower, doesn't mean that a policy is bad. Sometimes it's okay to sacrifice growth for things like redistribution and public health. But remember that (a) we are really focusing on the long run here, where growth is a bit more important, and (b) we also have to consider the current fiscal picture. Debt in the US is quite bad right now, and higher spending would worsen the matter.

Especially now, considering that the US/globe is facing a possible recession, I would think fiscal stimulus would be even more ideal.

Candidates are going to come into office in January 2021 - no one has a clue what the economy will look like at that time. Now if a candidate says "it's good to issue large economic stimulus packages in times of recession," I suppose I would give them points for that, but none have recently made such a statement as far as I know. For those politicians who were around circa 2009, I could look to see whether they endorsed the Recovery and Reinvestment Act (and, on a related note, TARP). Now that you mention it, maybe I will add that; I'll think about it and look into it.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T06:19:27.981Z · score: 2 (1 votes) · EA · GW

I meant that it's definitely more efficient to grow the EA movement than to grow Yang's constituency. That's how it seems to me, at least. It takes millions of people to nominate a candidate.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T06:11:47.886Z · score: 2 (1 votes) · EA · GW

FWIW I don't think that would be a good move. I don't feel like fully arguing it now, but main points (1) sooner AGI development could well be better despite risk, (2) such restrictions are hard to reverse for a long time after the fact, as the story of human gene editing shows, (3) AGI research is hard to define - arguably, some people are doing it already.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T04:52:57.401Z · score: 4 (2 votes) · EA · GW
create a treaty for countries to sign that ban research into AGI.

You only mean this as a possibility in the future, if there is any point where AGI is believed to be imminent, right?

Still, I think you are really overestimating the ability of the president to move the scientific community. For instance, we've had two presidents now who actively tried to counteract mainstream views on climate change, and they haven't budged climate scientists at all. Of course, AI alignment is substantially more scientifically accepted and defensible than climate skepticism. But the point still stands.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T04:35:09.433Z · score: 2 (1 votes) · EA · GW

What about simply growing the EA movement? That clearly seems like a more efficient way to address x-risk, and something where funding could be used more readily.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T04:29:38.438Z · score: 2 (1 votes) · EA · GW

If you read it, go by the 7th version as I linked in another comment here - most recent release.

I'm going to maintain a single updated link from now on, so I don't cause this confusion anymore.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T02:44:12.963Z · score: 7 (4 votes) · EA · GW

I think that's too speculative a line of thinking to use for judging candidates. Sure, being intelligent about AI alignment is a data point for good judgment more generally, but so is being intelligent about automation of the workforce, and being intelligent about healthcare, and being intelligent about immigration, and so on. Why should AI alignment in particular be a litmus test for rational judgment? We may perceive a pattern with more explicitly rational people taking AI alignment seriously as patently anti-rational people dismiss it, but that's a unique feature of some elite liberal circles like those surrounding EA and the Bay Area; in the broader public sphere there are plenty of unexceptional people who are concerned about AI risk and plenty of exceptional people who aren't.

We can tell that Yang is open to stuff written by Bostrom and Scott Alexander, which is nice, but I don't think that's a unique feature of Rational people, I think it's shared by nearly everyone who isn't afflicted by one or two particular strands of tribalism - tribalism which seems to be more common in Berkeley or in academia than in the Beltway.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T02:40:15.806Z · score: 2 (1 votes) · EA · GW

moved comment to another spot.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T01:11:36.579Z · score: 10 (7 votes) · EA · GW
I think that the most likely strongly negative outcome is that AI safety becomes attached to some standard policy tug-o-war and mostly people learn to read it as a standard debate between republicans and democrats

I don't think this is very likely (see my other comment) but also want to push back on the idea that this is "strongly negative".

Plenty of major policy progress has come from partisan efforts. Mobilizing a major political faction provides a lot of new support. This support is not limited to legislative measures, but also to small bureaucratic steps and efforts outside the government. When you have a majority, you can establish major policy; when you have a minority, you won't achieve that but still have a variety of tools at your disposal to make some progress. Even if the government doesn't play along, philanthropy can still continue doing major work (as we see with abortion and environmentalism, for instance).

A bipartisan idea is more agreeable, but also more likely to be ignored.

Holding everything equal, it seems wise to prefer being politically neutral, but it's not nearly clear enough to justify refraining from making policy pushes. Do we refrain from supporting candidates who endorse any other policy stance, out of fear that they will make it into something partisan? For instance, would you say this about Yang's stance to require second-person authorization for nuclear strikes?

It's an unusual view, and perhaps reflects people not wanting their personal environments to be sucked into political drama more than it reflects shrewd political calculation.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T00:57:41.077Z · score: 6 (3 votes) · EA · GW

By 'polarized partisan issue' do you merely mean that people have very different opinions, settle into different camps, and make rational dialogue across the gap difficult? That comes about naturally in the process of intellectual change, it has already happened with AI risk, and I'm not sure that a political push will worsen it (as the existing camps are not necessarily coequal with the political parties).

I was referring to the possibility that, for instance, Dems and the GOP take opposing party lines on the subject and fight over it. Which definitely isn't happening.

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T00:09:01.821Z · score: 4 (2 votes) · EA · GW

I don't think that making alignment a partisan issue is a likely outcome. The president's actions would be executive guidance for a few agencies. This sort of thing often reflects partisan ideology, but doesn't cause it. And Yang hasn't been pushing AI risk as a strong campaign issue, he only acknowledged it modestly. If you think that AI risk could become a partisan battle, you might want to ask yourself why automation of labor - Yang's loudest talking point - has NOT become subject to partisan division (even though some people disagree with it).

Comment by kbog on X-risk dollars -> Andrew Yang? · 2019-10-12T00:08:27.872Z · score: 14 (6 votes) · EA · GW

If you are looking at presidential candidates, why restrict your analysis to AI alignment?

If you're super focused on that issue, then it will definitely be better to spend your money on actual AI research, or on some kind of direct effort to push the government to consider the issue (if such an effort exists).

When judging among the presidential candidates, other issues matter too! And in this context, they should be weighted more by their sheer importance than by philanthropic neglectedness. So AI risk is not obviously the most important.

With some help from other people I comprehensively reviewed 2020 candidates here:

The conclusion is that yes, Yang is one of the best candidates to support - alongside Booker, Buttigieg, and Republican primary challengers. Partially due to his awareness of AI risk. But in the updates I've made for the 8th edition (and what I'm about to change now, seeing some other comments here about the lack of tractability for this issue), Buttigieg seems to move ahead to being the best Democrat by a small margin. Of course these judgments are pretty uncertain so you could argue that they are wrong if you find some flaw or omission in the report. Very early on, I decided that both Yang and Buttigieg were not good candidates, but that changed as I gathered new information about them.

But it's wrong to judge a presidential candidate merely by their point of view on any single issue, including AI alignment.

Comment by kbog on Extinguishing or preventing coal seam fires is a potential cause area · 2019-08-01T23:54:28.970Z · score: 2 (1 votes) · EA · GW

This is too ad hoc, dividing three or four cause areas into two or three categories, to be a reliable explanation.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-08-01T09:28:13.216Z · score: 6 (3 votes) · EA · GW

OK, sounds like the biggest issue is not the recognition algorithm itself (can be replicated or bought quickly) but the acquisition of databases of people's identities (takes time and maybe consent earlier on). They can definitely come together, but otherwise, consider the possibilities (a) a city only uses face recognition for narrow cases like comparing video footage to a known suspect while not being able to do face-rec for the general population, and (b) a city has profiles and the ability to identify all its citizens for some other purpose but just doesn't have the recognition algorithms (yet).

Comment by kbog on Four practices where EAs ought to course-correct · 2019-08-01T09:20:23.987Z · score: 3 (2 votes) · EA · GW

Well, I'm not trying to convince everyone that society needs a looser approach to AI. Just that this activism is dubious, unclear, plausibly harmful etc.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-08-01T09:18:02.169Z · score: 6 (4 votes) · EA · GW
  • This need not be about ruthlessness directed right at your interlocutor, but rather towards a distant or ill-specified other.
  • I think it would be uncontroversial that a better approach is not to present yourself as authoritative, but instead present a conception of general authority in EA scholarship and consensus, and demand that it be recognized, engaged with, cited and so on.
  • Ruthless content drives higher exposure and awareness in the very first place.
  • The rate at which people who are merely exposed to EA stick around seems inadequate; consider, for instance, the high school awareness project.
  • Also, there seems to be a shortage of new people who will gather other new people. When you just present the nice message, you get a wave of people who may follow EA in their own right but don't go out of their way to continue pushing it further, because it was presented to them merely as part of their worldview rather than as part of their identity. (Consider whether the occasionally popular phrase "aspiring Effective Altruist" obstructs one from having a real EA identity.) How much movement growth is being done by people who joined in the recent few years compared to the early core?
Comment by kbog on Four practices where EAs ought to course-correct · 2019-08-01T08:53:14.390Z · score: 4 (2 votes) · EA · GW

I am also thinking of how there has been more back-and-forth about the optimizer's curse, people saying it needs to be taken more seriously etc.

I don't think that the prescriptive vs descriptive nature really changes things, descriptive philosophizing about methodology is arguably not as good as just telling EAs what to do differently and why.

I grant that #3 on this list is the rarest out of the 4. The established EA groups are generally doing fine here AFAIK. There is a CSER writeup on methodology here which is perfectly good: it's about a specific domain that they know, rather than EA stuff in general.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-08-01T08:42:46.165Z · score: 3 (2 votes) · EA · GW

I've long preferred expressing EA as a moral obligation and support the main idea of that article.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-31T06:24:12.399Z · score: 5 (4 votes) · EA · GW

Here's some support for that claim which I didn't write out.

There was a hypothesis called "risk homeostasis" where people always accept the same level of risk. E.g. it doesn't matter that you give people seatbelts, because they will drive faster and faster until the probability of an accident is the same. This turned out to be wrong: people did drive faster, but not so much faster as to cancel out the safety benefits. The idea of moral hazard from victory leading to too many extra wars strikes me as very similar to this. It's a superficially attractive story that allows one to simplify the world and not have to think about complex tradeoffs as much. In both cases you are taking another agent and oversimplifying their motivations. The driver supposedly has a fixed risk constraint and beyond that wants nothing but speed; the state supposedly just wants to avoid bleeding too much, and beyond that threshold wants nothing but foreign influence. But the driver has a complex utility function, or maybe a more inconsistent set of goals, about the relative value of more safety vs less safety, more speed vs less speed; therefore, when you give her some new capacities, she isn't going to spend all of them on going faster. She'll spend some on going faster, then some on being safer.

Likewise the state does not want to spend too much money, does not want to lose its allies and influence, does not want to face internal political turmoil, etc. When you give the state more capacities, it spends some of it on increasing bad conquests, but also spends some of it on winning good wars, on saving money, on stabilizing its domestic politics, and so on. The benefits of improved weaponry for the state are fungible, as it can e.g. spend less on the military while obtaining a comparable level of security.

Security dilemmas throw a wrench into this picture, because what improves security for one state harms the security of another. However in the ultimate theoretical case I feel that this just means that improvements in weaponry have neutral impact. Then in the real world, where some US goals are more positive sum in nature, the impacts of better weapons will be better than neutral.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-31T04:23:29.389Z · score: 5 (2 votes) · EA · GW

Yes, the "slaughterbots" video produced by Stuart Russell and FLI presented a dystopian scenario about drones that could be swatted down with tennis rackets, since the idea is that they would attach to your head with an explosive.

Not like banning drones stops someone flying a drone from somewhere else.

Yes, but it means that on the rare occasion that you see a drone, you know it's up to no good and then you will readily evade or shoot it down.

And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

No... but so what? I don't travel in an armored limousine either. If someone really wants to kill me, they can.

More donations for movement growth: I would tentatively agree.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-30T23:21:28.889Z · score: 6 (6 votes) · EA · GW

Okay, very well then. But if a polity wanted to do something really bad like ethnic cleansing, they would just allow facial recognition again, and get it easily from elsewhere. If a polity is liberal and free enough to keep facial recognition banned then they will not tolerate ethnic cleansing in the first place.

It's like the Weimar Republic passing a law forbidding the use of Jewish Star armbands. Could provide a bit of beneficial inertia and norms, but not much besides that.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-30T23:10:49.278Z · score: 3 (5 votes) · EA · GW

I've recently started experimenting with that, I think it's good. And Twitter really is not as bad a website as people often think.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-30T17:59:17.945Z · score: 4 (4 votes) · EA · GW

But who is talking about banning facial recognition itself? It is already too widespread and easy to replicate.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-30T17:56:04.185Z · score: 16 (5 votes) · EA · GW

To be sure it is better than unfortified cereal (ceteris paribus), but they usually have a lot of refined grains + added sugar.

Comment by kbog on Four practices where EAs ought to course-correct · 2019-07-30T17:55:27.580Z · score: 4 (2 votes) · EA · GW

Sorry. This is it:

Comment by kbog on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-30T06:16:36.458Z · score: 2 (1 votes) · EA · GW

If we had a cap-and-trade system then presumably it could allow for that (no idea if they actually do, in the few countries where cap-and-trade is implemented).

Comment by kbog on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-30T06:14:37.666Z · score: 2 (1 votes) · EA · GW

Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same. And climate change does have some philosophical issues with model parameters like discount rates. Admittedly, they are a little more messy and applied in nature than talking about formal agent behavior.

Comment by kbog on Consequences of animal product consumption (combined model) · 2019-07-30T01:09:49.674Z · score: 2 (1 votes) · EA · GW
One reason might be because you’re only accounting for one year of the meat eater problem when I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison).

I did that because I was only looking at one year of welfare improvement. One year for one year is simpler and more robust than comparing lifetimes. If you want to look at lifetimes, you have to scale up the welfare impacts as well.
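The horizon-matching point can be shown with a toy calculation (all numbers are hypothetical, not taken from either model):

```python
# Illustrative only: if you count a lifetime of meat-eating harm against a
# single year of welfare benefit, the mismatched horizons alone can flip
# the sign of the comparison.
welfare_gain_per_year = 10.0  # hypothetical welfare gained per year of the intervention
meat_harm_per_year = 3.0      # hypothetical welfare lost to animals per person-year
years = 50                    # an assumed lifetime

# Mismatched horizons: one year of benefit vs a lifetime of harm.
mismatched = welfare_gain_per_year - meat_harm_per_year * years

# Matched horizons give the same verdict at either scale.
per_year = welfare_gain_per_year - meat_harm_per_year
lifetime = (welfare_gain_per_year - meat_harm_per_year) * years

print(mismatched, per_year, lifetime)  # -140.0 7.0 350.0
```

The mismatched version comes out negative purely because the two sides are counted over different horizons; scaling both sides to the same horizon (one year each, or a lifetime each) preserves the actual verdict.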

Comment by kbog on Effective Altruism is an Ideology, not (just) a Question · 2019-07-28T22:08:38.275Z · score: 2 (4 votes) · EA · GW

I've met a great number of people in EA who disagree with utilitarianism and many people who aren't particularly statistically minded. Of course it is not equal to the base rates of the population, but I don't really see philosophically dissecting moderate differences as productive for the goal of increasing movement growth.

If you're interested in ethnographies, sociology, case studies, etc., then consider how other movements have effectively overcome similar issues. For instance, the contemporary American progressive political movement is heavily driven by middle- and upper-class whites, and faces dissent from substantial portions of racial minorities and women. Yet it has been very effective in seizing institutions and public discourse surrounding race and gender issues. Have they accomplished this by critically interrogating themselves about their social appeal? No, they hid such doubts as they focused on hammering home their core message as strongly as possible.

If we want to assist movement growth, we need to take off our philosopher hats, and put on our marketer and politician hats. But you didn't write this essay with the framing of "how to increase the uptake of EA among non-mathematical (etc) people" (which would have been very helpful); eschewing that in favor of normative philosophy was your implicit, subjective judgment of which questions are most worth asking and answering.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-07T11:42:06.446Z · score: 2 (1 votes) · EA · GW
Why should I assume that this time is different?

The foundation of free trade is that it is mutually beneficial, since both parties agree to it.

With slavery, the slaves did not agree to be enslaved and transported. The enslavers used force and this allowed them to make other people worse off. Today, traded goods don't include forced laborers, though you could include livestock in this category and I would actually be in favor of restricting that.

With opium, the story was more complicated. Users wanted opium, but it's an addictive drug that damaged them and Chinese society in the long run. So the Chinese tried to restrict its import, but the British forcibly compelled them to lift the restrictions. Today, we don't use military force to get other countries to accept harmful goods. We do exercise some leverage by offering trade and finance deals to developing countries in exchange for changes to some of their economic policies; there is debate over this practice, with some people arguing that we shouldn't attach these strings, but the countries are still willingly taking these deals, so they are better than nothing.

I think you may find pro-free-trade people in favor of IP reform; these are rather separate issues. However, I doubt that many people of any stripe would want to remove IP rights entirely, as that would eliminate the incentive to pursue research and development.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-05T08:27:55.052Z · score: 5 (4 votes) · EA · GW
None of GiveWell's top charities focus on women or girls; given that women and girls are valued less in poor countries, from a strictly utilitarian perspective this is a miss for the EA movement

But maybe the best interventions aren't easy or efficient to target toward women only - if you give out bed nets, it's best to give them to everyone. If we extend this logic, we'll end up asking "why do none of our charities focus on ugly girls in poor countries?" and it never ends, because you can always find a sub-group of people that is in still more dire straits on average (though it becomes increasingly unlikely that targeting it will yield the best charity).

Generally speaking I don't think you can easily empirically confirm or disprove that EA is 'on the right track', either position is going to boil down to a lot of subjective assumptions. Instead I just trust that we're ultimately competent and encourage constant debate and reconsideration of specific charities and causes. That's the most productive route. If thinking about justice helps you make your argument - more power to you. No need for us to worry about how each other thinks.

You may be interested in the Founders Pledge report on Women's Empowerment.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-05T08:16:14.641Z · score: 2 (1 votes) · EA · GW

Hi - you're definitely right about Obama and most other Democrats, but they are not leftists (more like center-left American liberals), and they are not operating on this revisionary sense of justice and fairness.

I am thinking of people like:


As of today I see the "left" position or justice position is to treat asylum seekers and illegal immigrants humanely.

That's definitely a huge part of it. However, my worry is that the realities of how they push these politics could have the bad effect of increasing right-wing hostility to legal immigration and preventing policymakers from compromising on comprehensive immigration reform. Let's be clear: altruistic-minded people want to treat them humanely too; we are comparing the two framings to each other, not to what America has actually been doing.

An "altruistic" position might be to pass a quick bill, no political strings attached, giving funding to CBP to just improve the conditions at the camps and expedite processing, leaving bigger decisions for later. The "justice" position could be to fight tooth and nail to abolish CBP/ICE instead. Which is better? Eh, I have some personal sympathies, but at the end of the day I don't have the confidence to declare it.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-05T08:09:56.814Z · score: 5 (4 votes) · EA · GW

(warning: longpost)

Not asking how and why we have so much power is a blindness that I see in the EA movement. This also leads to assumptions that "Free Trade" is good.

OK, so let's talk about how and why we have so much power. I'll speak for myself.

The teachings of Jesus of Nazareth were used to establish the religion of Christianity, which was subsequently spread to Armenia via the apostles Jude and Bartholomew and later Gregory the Illuminator. This opened the door to persecution by Armenia's Zoroastrian suzerains in Sassanid Persia, but the right to practice Christianity would be won (for a time) with the Nvarsak Treaty in 484.

At a similar time Christianity became the official religion of the Roman Empire and subsequently the European tribes to the north. These tribes became the foundation of modern Europe, inheriting Roman Christian traditions but occupying a more fragmented existence in a geographically divided continent. The competitive pressures of this regional order led to advanced shipbuilding and other technologies, then expeditions to find new trade routes, which then established Western Europe as the center of global wealth and power able to conquer numerous indigenous nations (aided by diseases) and produce a comparably powerful offshoot called the United States. Throughout this time, Europe remained divided, a situation entrenched by the Catholic-Protestant split in Christianity which forced the pluralist Peace of Westphalia.

This divided Europe relied on a carefully managed balance of power, but German reunification and industrialization threatened to overturn it. European pluralism also laid the seeds for nationalism in Austria-Hungary. These pressures collided to create World War One.

By this time, Armenia was still under foreign religiously-motivated oppression, now by the Muslim Ottomans. The situation of WWI stirred Armenian aspirations towards independence, provoked fear among the Ottomans, and sapped Russia's will to intervene. The result was a genocide of the Armenians and diaspora of the survivors. Some of the survivors made it to Romania, one of the poorest countries of Europe, which was forced into the Soviet bloc as an indirect consequence of the failure of the Western Allies to satisfactorily handle Germany after the conclusion of WWI. Communist policies in Romania sustained a high level of poverty and oppression compared to America, which had profited immensely off its natural resource endowment, geographical location, and sociopolitical heritage (which in turn allowed it to successively defeat the Native Americans, Mexicans, Spanish, Germans and Japanese and then establish its preferred international political and economic order).

A combination of bribes and luck enabled a few of the Armenian Romanians to emigrate to Beirut and then on to 1970s urban America, where men could obtain high salaries in engineering and women could obtain gainful employment in administration and teaching, so that I could then be raised in a stable, upper middle class household with access to a variety of business, political and educational institutions, as the American economy continuously boomed. Also, I won out a bit on the genetic lottery.

Now it's strange to me that anyone would presume I wouldn't be interested in knowing or talking about this history, because it (like most histories) is fascinating, and of course I love to talk and read about all its brutal and inspiring truths.

But I really don't see its place in Effective Altruism, because for all its ups and downs, it doesn't tell me what to do now. I'm not going to give money to the Native Americans just as I'm not going to demand money from Turkey. I'm going to give money to Malaria Consortium or the Sentience Institute or MIRI, and I'd ask Turks to do the same, because that's what works best. So what if our situation was caused by injustice? And I don't support free trade because I think it worked with slaves or opium, I support it because I think it works now, according to the best economic evidence that we have.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-04T16:54:55.929Z · score: 7 (3 votes) · EA · GW

When you're talking about decisions made in the EA community itself, it's best to focus on concrete issues of effectiveness and not worry so much about discourse and rhetoric. We're equipped to make the right decisions regardless of these subtle things.

EAs mostly haven't started doing justice framed policy work. Justice isn't equivalent to institutions and policies per se.

Comment by kbog on Should we talk about altruism or talk about justice? · 2019-07-04T02:08:03.943Z · score: 2 (1 votes) · EA · GW

I can't really tell; x-risk as a monolithic area of study and activism is new.

Society pretty much agrees that extinction is bad, so I don't think these ethical and rhetorical ideas matter as much; you can just make good technical arguments about risks and let other people figure out the rest.

Comment by kbog on X-risks of SETI and METI? · 2019-07-03T03:15:04.114Z · score: 6 (4 votes) · EA · GW

Alexey Turchin has said something about downloading an invasive ASI:

Seems pretty implausible, but not totally out of the question.

Comment by kbog on Effective Altruism is an Ideology, not (just) a Question · 2019-07-02T09:01:27.531Z · score: 8 (5 votes) · EA · GW

If you understand economic and political history well enough to know what's really gotten you where you are today, then you already have the tools to make those judgments about a much larger class of people. Actually, I think that if you were to spell out exactly how, say, D-Day or women's rights helped you, you would be relying on a broader generalization about how they helped large classes of people.

Comment by kbog on Effective Altruism is an Ideology, not (just) a Question · 2019-07-02T04:55:22.475Z · score: 5 (3 votes) · EA · GW
It's important to listen to people outside the community in case people are self-selecting in or out based on incidental factors.

Yet anything framed as an attack on or critique of EA is itself something that causes people to self-select in or out of the community. If someone says "EAs have a statistics ideology," then people who don't like statistics won't join. It becomes an entrenched problem via founder effects - sort of a self-fulfilling prophecy.

What is helpful is to showcase people who actually work on things like ethnography. That's something that makes EA more methodologically diverse.

But stuff like this is just as apt to make anyone who isn't cool with utilitarianism / statistics / etc say they want to go elsewhere.

Comment by kbog on Effective Altruism is an Ideology, not (just) a Question · 2019-07-02T04:32:03.416Z · score: 3 (2 votes) · EA · GW

Normative commitments aren't sufficient to show that something is an ideology. See my comment. Arguably 'science-aligned' is methodological instead but it's very vague and personally I would not include it as part of the definition of EA.

Comment by kbog on Effective Altruism is an Ideology, not (just) a Question · 2019-07-02T04:25:32.702Z · score: 8 (4 votes) · EA · GW

It's better to look at impacts on the broad human population rather than just one person.