Posts

A comparison of American political parties 2020-12-09T20:45:14.844Z
American policy platform for total welfare 2020-12-03T18:07:45.068Z
EA politics mini-survey results 2020-12-01T18:41:38.603Z
Taking Self-Determination Seriously 2020-11-27T13:49:14.108Z
Please take my survey! 2020-11-27T09:00:09.942Z
Instability risks of the upcoming U.S. election and recommendations for EAs 2020-11-03T01:19:13.673Z
2020 United States/California election recommendations 2020-10-31T23:15:12.901Z
Super-exponential growth implies that accelerating growth is unimportant in the long run 2020-08-11T07:20:19.242Z
Idea: statements on behalf of the general EA community 2020-06-11T07:02:08.317Z
kbog's Shortform 2020-06-11T02:58:51.376Z
An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) 2020-03-23T23:06:18.709Z
Short guide to 'prepping' for global catastrophes 2020-03-23T22:51:23.876Z
Voting is today (Tuesday March 3) in California and other states - here are recommendations 2020-03-03T10:40:32.995Z
An Informal Review of Space Exploration 2020-01-31T13:16:00.960Z
Candidate Scoring System recommendations for the Democratic presidential primaries 2020-01-31T12:25:00.682Z
Concrete Foreign Policy Recommendations for America 2020-01-20T21:52:03.860Z
Responding to the Progressive Platform of “Foreign Policy Generation” 2020-01-19T20:24:00.971Z
A small observation about the value of having kids 2020-01-19T02:37:59.391Z
Love seems like a high priority 2020-01-19T00:41:51.617Z
Tentative Thoughts on Speech Policing 2020-01-06T19:20:36.485Z
Response to recent criticisms of EA "longtermist" thinking 2020-01-06T04:31:07.614Z
Welfare stories: How history should be written, with an example (early history of Guam) 2020-01-02T23:32:10.940Z
Tentative thoughts on which kinds of speech are harmful 2020-01-02T22:44:58.055Z
On AI Weapons 2019-11-13T12:48:16.351Z
New and improved Candidate Scoring System 2019-11-12T08:49:34.392Z
Four practices where EAs ought to course-correct 2019-07-30T05:48:57.665Z
Extinguishing or preventing coal seam fires is a potential cause area 2019-07-07T18:42:22.548Z
Should we talk about altruism or talk about justice? 2019-07-03T00:20:40.213Z
Consequences of animal product consumption (combined model) 2019-06-15T14:46:19.564Z
A vision for anthropocentrism to supplant wild animal suffering 2019-06-06T00:01:43.953Z
Candidate Scoring System, Fifth Release 2019-06-05T08:10:38.845Z
Overview of Capitalism and Socialism for Effective Altruism 2019-05-16T06:12:39.522Z
Structure EA organizations as WSDNs? 2019-05-10T20:36:19.032Z
Reasons to eat meat 2019-04-21T20:37:51.671Z
Political culture at the edges of Effective Altruism 2019-04-12T06:03:45.822Z
Candidate Scoring System, Third Release 2019-04-02T06:33:55.802Z
The Political Prioritization Process 2019-04-02T00:29:43.742Z
Impact of US Strategic Power on Global Well-Being (quick take) 2019-03-23T06:19:33.900Z
Candidate Scoring System, Second Release 2019-03-19T05:41:20.022Z
Candidate Scoring System, First Release 2019-03-05T15:15:30.265Z
Candidate scoring system for 2020 (second draft) 2019-02-26T04:14:06.804Z
kbog did an oopsie! (new meat eater problem numbers) 2019-02-15T15:17:35.607Z
A system for scoring political candidates. RFC (request for comments) on methodology and positions 2019-02-13T10:35:46.063Z
Vocational Career Guide for Effective Altruists 2019-01-26T11:16:20.674Z
Vox's "Future Perfect" column frequently has flawed journalism 2019-01-26T08:09:23.277Z
A spreadsheet for comparing donations in different careers 2019-01-12T07:32:51.218Z
An integrated model to evaluate the impact of animal products 2019-01-09T11:04:57.048Z
Response to a Dylan Matthews article on Vox about bipartisanship 2018-12-20T15:53:33.177Z
Quality of life of farm animals 2018-12-14T19:21:37.724Z
EA needs a cause prioritization journal 2018-09-12T22:40:52.153Z

Comments

Comment by kbog on Why are party politics not an EA priority? · 2021-01-04T00:40:31.578Z

I agree with you. You may appreciate my articles:

https://eapolitics.org/handbook.html

https://eapolitics.org/parties.html

Comment by kbog on Two Nice Experiments on Democracy and Altruism · 2020-12-31T07:58:59.969Z

the environmental success of democracies relative to autocracies

I want to read this, but the link doesn't work.

Comment by kbog on [Crosspost] Relativistic Colonization · 2020-12-31T05:10:13.942Z

If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.

Bussard ramjets might be viable. But I'm skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. Anyway, you seem to be talking about spacecraft that will consume planets, not Bussard ramjets.

Going from 0.99c to 0.999c requires an extraordinary amount of additional energy for very little increase in distance over time. At that point, the sideways deviations required to reach waypoints (like if you want to swing to nearby stars instead of staying in a straight line) would be more important. It would be faster to go 0.99c in a straight line than 0.999c through a series of waypoints.
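To put rough numbers on that (standard special relativity; these figures aren't from the post under discussion): kinetic energy per unit mass is (γ − 1)c², where γ = 1/√(1 − v²/c²). γ(0.99c) ≈ 7.1 and γ(0.999c) ≈ 22.4, so the step from 0.99c to 0.999c multiplies the kinetic energy by roughly (22.4 − 1)/(7.1 − 1) ≈ 3.5 while adding less than 1% to the speed.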

If we are talking about going from 0.1c to 0.2c then it makes more sense.

Comment by kbog on Are we living at the most influential time in history? · 2020-12-31T00:02:16.858Z

I think this argument implicitly assumes a moral objectivist point of view.

I'd say that most people in history have been a lot closer to the hinge of history once you recognize that the HoH depends on whose values are in question.

If you were a hunter-gatherer living in 20,000 BC then you cared about raising your family and building your weir and you lived at the hinge of history for that.

If you were a philosopher living in 400 BC then you cared about the intellectual progress of the Western world and you lived at the hinge of history for that.

If you were a theologian living in 1550 then you cared about the struggle of Catholic and Protestant doctrines and you lived at the hinge of history for that.

If you're an Effective Altruist living in 2020 then you care about global welfare and existential risk, and you live at the hinge of history for that.

If you're a gay space luxury communist living in 2100 then you care about seizing the moons of production to have their raw materials redistributed to the masses, and you live at the hinge of history for that.

This isn't a necessary relationship. We may say that, by our own lights, some of these historical hinges really were important, and maybe a future hinge will be more important. But generally speaking, the rise and fall of motivations and ideologies is correlated with the sociopolitical opportunity for them to matter. So most people throughout history have lived in hingy times.

Comment by kbog on Big List of Cause Candidates · 2020-12-30T23:31:26.153Z

Thanks for the comments. Let me clarify about the terminology. What I mean is that there are two kinds of "pulling the rope harder". As I argue here:

The appropriate mindset for political engagement is described in the book Politics Is for Power, which is summarized in this podcast. We need to move past political hobbyism and make real change. Don’t spend so much time reading and sharing things online, following the news and fomenting outrage as a pastime. Prioritize the acquisition of power over clever dunking and purity politics. See yourself as an insider and an agent of change, not an outsider. Instead of simply blaming other people and systems for problems, think first about your own ability to make productive changes in your local environment. Get to know people and build effective political organizations. Implement a long-term political vision.

A key aspect of this is that we cannot be fixated on culture wars. Complaining about the media or SJWs or video game streamers may be emotionally gratifying in the short run but it does nothing to fix the problems with our political system (and it usually doesn't fix the problems with media and SJWs and video game streamers either). It can also drain your time and emotional energy, and it can stir up needless friction with people who agree with you on political policy but disagree on subtle cultural issues. Instead, focus on political power.

To illustrate the point, the person who came up with the idea of 'pulling the rope sideways', Robin Hanson, does indeed refrain from commenting on election choices and most areas of significant public policy, but has nonetheless been quite willing to state opinions on culture war topics like political correctness in academia, sexual inequality, race reparations, and so on.

I think that most people who hear 'culture wars' think of the purity politics and dunking and controversies, but not stuff like voting or showing up to neighborhood zoning meetings.

So even if you keep the same categorization, just change the terminology so it doesn't conflate those who are focused on serious (albeit controversial) questions of policy and power with those who are culture warring. 

Comment by kbog on Big List of Cause Candidates · 2020-12-30T06:14:39.995Z

You could add this post of mine to space colonization: An Informal Review of Space Exploration.

I think the 'existential risks' category is too broad and some of the things included are dubious. Recommender systems as existential risk? Autonomous weapons? Ideological engineering? 

Finally, I think the categorization of political issues should be heavily reworked, for various reasons. This kind of categorization is much more interpretable and sensible:

  • Electoral politics
  • Domestic policy
    • Housing liberalization
    • Expanding immigration
    • Capitalism
    • ...
  • Political systems
    • Electoral reform
    • Statehood for Puerto Rico
    • ...
  • Foreign policy and international relations
    • Great power competition
    • Nuclear arms control
    • Small wars
    • Democracy promotion
    • Self-determination
    • ...

I wouldn't use the term 'culture war' here, it means something different than 'electoral politics'.

Comment by kbog on The case for delaying solar geoengineering research · 2020-12-29T10:27:32.138Z

I don't think the pernicious mitigation obstruction argument is sound. It would be equally plausible for just about any other method of addressing air pollution. For instance, if we develop better solar power, that will reduce the incentive for countries and other actors to work harder at implementing wind power, carbon capture, carbon taxes, tree planting, and geoengineering. All climate solutions substitute for each other to the extent that they are perceived as effective. But we can't reject all climate solutions for fear that they will discourage other climate solutions; that would be absurd. Clearly, this mitigation obstruction effect is generally smaller than the benefits of actually reducing emissions.

The pernicious mitigation obstruction argument could make more sense if countries only care about certain consequences of pollution. Specifically, if countries care about protecting the climate but don't care about protecting public health and crops from air pollution, then geoengineering would give them an option to mitigate one problem while comfortably doing nothing to stop the other, whereas if they have to properly decarbonize then they would end up fixing both problems. However, if anything the reverse is true. To the extent that the politics of climate change mitigation are hampered by the global coordination problem (which is dubious), and to the extent that the direct harms of air pollution are concentrated locally, countries will worry too little about the climate impacts while being more rational about direct pollution impacts. So geoengineering would mitigate the politically difficult problem (climate change) while still leaving countries with full incentives to fix the politically easy problem (direct harms of pollution), making it less of a mitigation obstruction risk than something like wind turbines.

Additionally, given the contentious side effects of geoengineering, the prospect of some actors doing it if climate change gets much worse may actually encourage other actors to do more to mitigate climate change using conventional methods. It's still the case that researching or deploying geoengineering would reduce the amount of other types of mitigation, but it would do so to a lesser degree than that caused by comparable amounts of traditional mitigation.

Another note: I think if we had a better understanding of the consequences of solar geoengineering, then the security consequences of unilateral deployment would be mitigated. Disputes become less likely when both sides can agree on the relevant facts.

Comment by kbog on American policy platform for total welfare · 2020-12-08T22:31:52.777Z

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. 

I've already done this. I have shared much of this content for over a year without having this name and website. My impression was that it didn't do great nor did it do poorly (except among EAs, who have been mostly positive). One of the problems was that some people seemed confused and suspicious because they didn't grasp who I was and what point of view I was coming from. 

I agree with this. As far as I know, none of these orgs and individuals currently use an EA branding. 

A few do. And most may not literally have "EA" in their name, but they still explicitly invoke it, and audiences are smart enough to know that they are associated with the EA movement. 

And they get far larger audiences and attention than me, so they are the dominant images in the minds of people who have political perceptions of EA. Whatever I do to invoke EA will create a more equal diversity of public political faces of the movement, not a monolithic association of the EA brand with my particular view.

 

RE: the rest of your points, I won't go point by point because you are making some general arguments which don't necessarily apply to your specific worry about the presence or absence of "EA" in the name. It would be more fruitful to first clarify exactly which types of people are going to have different perceptions on this basis. Then after that we can talk about whether the differences in perception for those particular people will be good or bad. 

You already say that you are mainly worried about "public intellectuals, policy professionals, and politicians." Any of these who reads my website in detail or understands the EA movement well will know that it relates to EA without necessarily being the only EA view. So we are imagining a political elite who knows little about EA and looks briefly at my website. A lot of the general arguments don't apply here, and to me it seems like a good idea to (a) give this person a hook to take the content seriously and (b) show this person that EA can be relevant to their own line of work.

Or maybe we are imagining someone who previously didn't know about EA at all, in which case introducing them to the idea is a good thing.

Comment by kbog on American policy platform for total welfare · 2020-12-07T23:33:04.856Z

I think there are countervailing reasons in favor of doing so publicly, described here.

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few  orgs and individuals maximizes the problems of 1) and 2) whereas having a wider variety of public EA points of view mitigates them. I'd use a different branding if I were less convinced that politically engaged audiences already perceive EA as having political aspects.

Comment by kbog on EA politics mini-survey results · 2020-12-01T19:35:04.811Z

The Civic Handbook presents a more simplified view on the issue that sticks to making the least controversial claims that nearly all EAs should be able to get on board with. My full justification for why I believe we should maintain the defense budget, written earlier this year, is here:  https://eapolitics.org/platform.html#mozTocId629955 

Comment by kbog on Taking Self-Determination Seriously · 2020-11-29T15:18:46.160Z

I will think more about Brexit (noting that the EU is a supranational organization not a nation-state) but keep in mind that under the principle of self-determination, Scotland, which now would likely prefer to leave the UK and stay in the EU, should be allowed to do so.

Comment by kbog on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-26T12:55:41.064Z

welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not many other possible lethal AWSs, and d) still produces considerable amount of cost--both in countermeasures and in psychological costs--that would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.

I'm saying there are substantial constraints on using cheap drones to attack civilians en masse, some of them are more-or-less-costly preparation measures and some of them are not. Even without defensive preparation, I just don't see these things as being so destructive.

If we imagine offensive capability development then we should also imagine defensive capability development.

What other AWSs are we talking about if not drones?

In addition to potentially being more precise, lethal AWSs will be less attributable to their source, and present less risk to use (both in physical and financial costs).

Hmm. Have there been any unclaimed drone attacks so far, and would that change with autonomy? Moreover, if such ambiguity does arise, would that not also mitigate the risk of immediate retaliation and escalation? My sense is that there are conflicting lines of reasoning going on here. How can AWSs increase the risks of dangerous escalation, but also be perceived as safe and risk-free by users?

I'm not sure how to interpret this. The lower end of the range is the lower end of the ranges given by various estimators. The mean of this range is somewhere in the middle, depending on how you weight them.

I mean, we're uncertain about the 1-7Bn figure and uncertain about the 0.5-20% figure. When you multiply them together, the low x low is implausibly low and the high x high is implausibly high, but the mean x mean would be closer to the lower end. So if the means are 4Bn and 10%, then with the hypothesized 10% increase in probability the product is 4Bn x 10% x 10% = 40M, which is closer to the lower end of your 0.5-150M range. Yes, I realize this makes little difference (assuming your 1-7Bn and 0.5-20% estimates are normal distributions). It does seem apparent to me now that the escalation-to-nuclear-warfare risk is much more important than some of these direct impacts.
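To make that concrete, here's a minimal Monte Carlo sketch; the truncated normal distributions and every parameter are illustrative assumptions, not the post's actual estimates:

```python
# Minimal sketch: mean of a product of two uncertain quantities vs. the
# naive endpoint products. Distributions are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

deaths = rng.normal(4e9, 1.5e9, n).clip(1e9, 7e9)    # assumed: deaths ~ 1-7Bn, mean ~4Bn
p_war = rng.normal(0.10, 0.05, n).clip(0.005, 0.20)  # assumed: P(war) ~ 0.5-20%, mean ~10%
increase = 0.10                                      # the hypothesized 10% relative increase

expected_lives = deaths * p_war * increase
print(f"mean: {expected_lives.mean() / 1e6:.0f}M")         # ~40M
print(f"naive low:  {1e9 * 0.005 * increase / 1e6:.1f}M")  # 0.5M
print(f"naive high: {7e9 * 0.20 * increase / 1e6:.0f}M")   # 140M
```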

The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have) I don't see how the numbers can balance at all when including large-scale wars.

I think they'd probably save lives in a large-scale war for the same reasons. You say that they wouldn't save lives in a total nuclear war, that makes sense if civilians are attacked just as severely as soldiers. But large-scale wars may not be like this. Even nuclear wars may not involve major attacks on cities (but yes I realize that the EV is greater for those that do).

This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. 

I suppose that's fine; I was thinking more about concretely telling people not to do it, before any such agreement.

You also have to be in principle willing to do something if you want to credibly threaten the other party and convince them not to do it.

Moreover, if something is ethically wrong, we should be willing to not do it even if others do it

Well there are some cases where a problematic weapon is so problematic that we should unilaterally forsake it even if we can't get an agreement. But there are also some cases where it's just problematic enough that a treaty would be a good thing, but unilaterally forsaking it would do net harm by degrading our relative military position. (Of course this depends on who the audience is, but this discourse over AWSs seems to primarily take place in the US and some other liberal democracies.)

Comment by kbog on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-19T02:35:29.730Z

You may like to see this post; I agree in theory but don't think that space programs currently are very good at accelerating long-run colonization.

https://forum.effectivealtruism.org/posts/xxcroGWRieSQjCw2N/an-informal-review-of-space-exploration 

Comment by kbog on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-18T10:20:25.233Z

Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance

OK, so this makes sense, and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI; we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not others... I'm not sure if that would be helpful or harmful in this dimension. It certainly would be helpful if we simply worked to ensure higher standards of safety and reliability.

I'm skeptical that this is a large concern. Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don't know. Maybe.

Seeking to govern deeply unpopular AWSs (which also presently lack strong interest groups pushing for them) provides the easiest possible opportunity for a “win” in coordination amongst military powers.

I don't think this is true at all. Defense companies could support AWS development, and the overriding need for national security could be a formidable force that manifests in domestic politics in a variety of ways. Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?

Compared to other areas of coordination among military powers, I guess AI weapons look like a relatively easy area right now, but that will change in proportion to their battlefield utility.

While these concerns are not foremost from the perspective of overall expected utility, for these and other reasons we believe that delegating the decision to take a human life to machine systems is a deep moral error, and doing so in the military sets a terrible precedent.

I thought your argument here was just that we need to figure out how to implement autonomous systems in ways that best respond to these moral dilemmas, not that we need to avoid them altogether. AGI/ASI will almost certainly be making such decisions eventually, right? We better figure it out.

In my other post I had detailed responses to these issues, so let me just say briefly here that the mere presence of a dilemma in how to design and implement an AWS doesn't count as a reason against doing it at all. Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.

Lethal autonomous weapons as WMDs

At this point, it's been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don't think anyone is publicly developing such drones - suggesting it's really not so easy or useful.

A mass drone swarm terror attack would be limited by a few things. First, distances. Small drones don't have much range. So if these are released from one or a few shipping containers, the vulnerable area will be limited. These $100 micro drones have a range of only around 100 meters. The longest-range consumer drones apparently go 1-8 km but cost several hundred or several thousand dollars. Of course you could do better if you optimize for range, but these slaughterbots cannot be optimized for range; they must have many other features like a military payload, autonomous computing, and so on.

Covering these distances will take time. I don't know how fast these small drones are supposed to go - is 20km/h a good guess, taking into account buildings posing obstacles to them? If so then it will take half an hour to cover a 10 kilometer radius. If these drones are going to start attacking immediately, they will make a lot of noise (from those explosive charges going off) which will alert people, and pretty soon alarm will spread on phones and social media. If they are going to loiter until the drones are dispersed, then people will see the density of drones and still be alerted. Specialized sensors or crowdsourced data might also be used to automatically detect unusual upticks in drone density and send an alert.

So if the adversary has a single dispersal point (like a shipping container) then the amount of area he can cover is fundamentally pretty limited. If he tries to use multiple dispersal points to increase area and/or shorten transit time, then logistics and timing get complicated. (Timing and proper dispersal will be especially difficult if a defensive EW threat prevents the drones from listening to operators or each other.) Either way, the attack must be in a dense urban area to maximize casualties. But few people are actually outside at any given time. Most are in a building, in a car, or on public transport, even during rush hour or lunch break. And for every person who gets killed by these drones, there will be many other people watching safely through car or building windows who can see what is going on and alert other people. So people's vulnerability will be pretty limited. If the adversary decides to bring large drones to demolish barriers then it will be a much more expensive and complex operation. Plus, people only have to wait a little while until the drones run out of energy. The event will be over in minutes, probably.

If we imagine that drone swarms are a sufficiently large threat that people prepare ahead of time, then it gets still harder to inflict casualties. Sidewalks could have light coverings (also good for shade and insulation), people could carry helmets, umbrellas, or cricket bats, but most of all people would just spend more time indoors. It's not realistic to expect this in an ordinary peacetime scenario, but people will be quite adept at doing this during military bombardment.

Also, there are options for hard countermeasures which don't use technology that is more complicated than that which is entailed by these slaughterbots. Fixtures in crowded areas could shoot anti-drone munitions (which could be less lethal against humans) or launch defensive drones to disable the attackers. 

Now, obviously this could all change as drones get better. But defensive measures including defensive drones could improve at the same time. 

I should also note that the idea of delivering a cheap deadly payload like toxins or a dirty bomb via shipping container has been around for a while, yet no one has carried it out.

Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It's just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.

The unfortunate flip-side of these differences, however, is that anti-personnel lethal AWSs are much more likely to be used. In terms of “bad actors,” along with the advantages of being safe to transport and hard to detect, the ability to selectively attack particular types of people who have been identified as worthy of killing will help assuage the moral qualms that might otherwise discourage mass killing.

I don't think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise. After all the primary considerations in going to war are matters of national interest, not morality. If there is such a moral hazard effect then it is small and outweighed by the first-order reduction in harm.

Autonomous WMDs would pose all of the same sorts of threats that other ones do,[12]

Just because drones can deploy WMDs doesn't mean they are anything special - you can also combine chem/bio/nuke weapons with tactical ballistic missiles, with hypersonics, with torpedoes, with bombers, etc.

Lethal autonomous weapons as destabilizing elements in and out of war

I stand by the point in my previous post that it is a mistake to conflate a lower threshold for conflict with a higher (severity-weighted) expectation of conflict, and military incidents will be less likely to escalate (ceteris paribus) if fewer humans are in the initial losses.

Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.

A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7Bn in the first year and wipe out a large fraction of Earth's economic activity (i.e. of order one quadrillion USD or more, a decade's worth of world GDP.) Some current estimates of the likelihood of global-power nuclear war over the next few decades range from ~0.5-20%. So just a 10% increase in this probability, due to an increase in the probability of conflict that leads to nuclear war, costs in expectation ~500K - 150m lives and ~$0.1-10Tn (not counting huge downstream life-loss and economic losses).

The mean expectations are closer to the lower ends of these ranges. 

Currently, 87,000 people die in state-based conflicts per year. If automation cuts this by 25% then in three decades it will add up to 650k lives saved. That's still outweighed if the change in probability is 10%, but for reasons described previously I think 10% is too pessimistic. 
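Checking that arithmetic (assuming the 87,000/year rate would otherwise hold steady): 87,000 × 0.25 × 30 ≈ 652,000, i.e. roughly 650k lives over three decades.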

The third is simply that this is "somebody else's problem," and low-impact relative to other issues to which effort and resources could be devoted.[21] We've argued above against all three positions: the expected utility of widespread autonomous weapons is likely to be highly negative (due to increased probability of large-scale war, if nothing else), the issue is addressable (with multiple examples of past successful arms-control agreements), currently tractable if difficult, and success would also improve the probability of positive results in even more high-stakes arenas including global AGI governance.

As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries. 

We leave out disingenuous arguments against straw men such as “But if we give up lethal autonomous weapons and allow others to develop them, we lose the war.” No one serious, to our knowledge, is advocating this – the whole point of multilateral arms control agreements is that all parties are subject to them. 

If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that's basically what you're doing.

 

I would like to again mention the Ottawa Treaty, I don't know much about it, but it seems like a rich subject to explore for lessons that can be applied to AWS regulation. 

Comment by kbog on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-22T11:19:16.031Z

I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory so we should be less inclined to help them. On top of that, he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention towards EA by association. These things can be understandable on their own, and you can rationalize each one, but when you put it all together it paints a picture of someone who basically doesn't care about EA at all. It just happens to be the case that he was big in the rationalist blogosphere and lots of EAs (including me) think he's smart in some ways and has some good ideas. He's just here for the ride; we don't owe him anything.

I'm definitely not trying to character-assassinate or 'cancel' him, I'm just saying that he only deserves as much community respect from us as any other decent academic does, we shouldn't give him the kind of special anti-cancelling loyalty that we would reserve for people who have really worked as allies for us.

Comment by kbog on EA's abstract moral epistemology · 2020-10-22T10:53:20.473Z

The idea that she and some other nonconsequentialist philosophers have is that if you care less about faraway people's preferences and welfare, and care more about stuff like moral intuitions, "critical race theory" and "Marxian social theory" (her words), then it's less abstract. But as you can see here, they're still doing complicated ivory tower philosophy that ordinary people do not pick up. So it's a rather particular definition of the term 'abstract'. 

Let's be clear: you do not have to have abstract moral epistemology to be an EA. You can ignore theoretical utilitarianism, and ignore all the abstract moral epistemology in that letter, and just commit yourself to making the world better through a basic common-sense understanding of effectiveness and the collective good, and that can be EA. If anyone's going to do philosophical gatekeeping for who can or can't count as an EA, it'll be EAs, not a philosopher who doesn't even understand the movement.

Comment by kbog on New and improved Candidate Scoring System · 2020-10-22T09:54:04.571Z

Thank you for your interest. So, I'm moving everything to my website now. Previously I had taken a stab at a few House and Senate races, but now that the primaries are over, there's really no point in that - I'm instead working on a general comparison of Republicans vs Democrats, and the conclusion will almost certainly be a straightforward recommendation to vote D for all or nearly all congressional races.

If people are curious about which races they should help with donations, I think it's generally fine to focus on key pro-Dem opportunities like this and these rather than picking the individual candidates which seem most meritorious from an EA point of view.

I am looking for help with the website and will PM you in a moment.

Comment by kbog on Is value drift net-positive, net-negative, or neither? · 2020-10-22T09:42:52.457Z

"Value drift towards the right values" = transition from our current state of affairs, to a state of affairs where more people have values which are closer/farther from ours.

Comment by kbog on Tax Havens and the case for Tax Justice · 2020-10-22T09:39:22.462Z

There's a problem with your importance metric - the importance of malaria funding should be measured not by how much it costs but by how much good it does. $1B of malaria funding is much more important than $1B of most other spending, right? If we imagine that all the raised revenue gets used for fighting malaria, then it makes sense, but of course that is not a realistic assumption.

I think that raising tax revenue for the US (and maybe some other countries) is not as important as it seems at first glance due to our flexibility with the Federal Reserve and record low interest rates, allowing us to take out a lot of debt or print money.

But within the sphere of tax policy, I weakly feel that cracking down on tax avoidance and evasion (both tax havens, and domestic stuff like increasing IRS auditing) is the best I/N/T combination, as opposed to conventional tax policy issues like changing tax rates or considering wealth taxes/VATs/etc. It would be nice to look at this more carefully, seeing which reforms seem harder/easier and comparing how much revenue they would raise. But even if this kind of tax reform is not the easiest path for raising revenue, it may still be best due to the secondary benefits for fairness and law-and-order.

Anyway, good post. Thanks for the contribution.

Comment by kbog on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-10-22T09:14:15.973Z

I'm pretty confident that accelerating exponential and never-ending growth would be competitive with reducing x-risk. That was IMO the big flaw with Bostrom's argument (until now). If that's not intuitive, let me know and I'll formalize a bit.

Comment by kbog on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-14T07:20:12.533Z

Thanks, fixed. No, that's not the post I'm thinking of.

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-08-13T03:16:49.636Z

Sent

Comment by kbog on Existential Risk and Economic Growth · 2020-08-13T03:08:24.759Z

Neat paper. One reservation I have (aside from whether x-risk depends on aggregate consumption or on tech/innovation, which has already been brought up) is the assumption of the world allocating resources optimally (if impatiently). I don't know if mere underinvestment in safety would overturn the basic takeaways here, but my worry is more that a world with competing nation-states or other actors could have competitive dynamics that really change things.

Comment by kbog on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-13T03:05:16.804Z

Thanks! Great find. I'm not sure if I trust the model tho.

Comment by kbog on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-13T03:00:50.323Z

Assume that a social transition is expected in 40 years and the post-transition society has 4x as much welfare as the pre-transition society. Also assume that society will last for 1000 more years.

Increasing the rate of economic growth by a few percent might increase our welfare pre-transition by 5% and move up the transition by 2 years.

Then the welfare gain of the economic acceleration is (0.05*35)+(3*2) ≈ 8: roughly 35 remaining years of pre-transition welfare boosted by 5%, plus 2 extra years at post-transition welfare 4 instead of 1.

Future welfare without the acceleration is 40+(4*1000)=4040, so a gain of 8 is like reducing existential risk by 0.2%.

Obviously the numbers are almost arbitrary but you should see the concepts at play.

Then if you think about a longer run future then the tradeoff becomes very different, with existential risk being far more important.

If society lasts for 1 million more years then the equivalent is 0.0002% X-risk.
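A minimal sketch of this toy model in code (every number is just one of the illustrative assumptions above):

```python
# Toy model from above: illustrative numbers only, not real estimates.
pre_welfare = 1              # welfare per year before the transition
post_welfare = 4             # welfare per year after the transition
years_to_transition = 40

# Acceleration: ~5% more welfare over the remaining pre-transition years,
# plus the transition arriving 2 years sooner.
gain = 0.05 * 35 + (post_welfare - pre_welfare) * 2   # ~= 8

for horizon in (1_000, 1_000_000):  # years society lasts after the transition
    total = years_to_transition * pre_welfare + horizon * post_welfare
    print(f"{horizon:>9,} yr horizon: gain worth {gain / total:.4%} of future welfare")
# -> ~0.2% (like 0.2% x-risk) for 1,000 years; ~0.0002% for 1,000,000 years
```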

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-08-11T06:06:54.239Z

Hm. (Sorry for the delay; I wasn't checking the EA Forum.) The link seems to work and I haven't moved it.

Do you not have a Microsoft account? Maybe if you're not logged in, you won't be able to use OneDrive. I can email a copy to you if you wish.

Comment by kbog on As a small donor, should I donate with an ecosystem approach or focus on one organization? · 2020-06-11T08:00:42.715Z

Hmm, I don't think you can read the tea leaves of Open Phil's donations like that. They will donate to fill funding gaps; a large donation doesn't mean that ADDITIONAL money will be more or less valuable to that organization. And how recently they donated might be due to how recently they were discovered, or some other unimportant consideration. (But if an org hasn't received Open Phil money in many years, perhaps they are not effective or funding-constrained anymore.)

Out of all the Open Phil grantees, just try to pick the recent one that seems most important or most neglected.

For criminal justice, I think this is straightforward. These causes are getting a lot of attention from liberals and Black Lives Matter, especially given the current surge in interest. So a charity which is a little less appealing to these people will probably be more neglected these days. At a glance, the American Conservative Union's Center for Criminal Justice Reform seems like one that will be more neglected - liberals and BLM won't want to donate to a conservative foundation. I'm not saying this is necessarily the right choice, but it's an example of how I would think about the matter. Yes, it is very hard to fully estimate the cost-effectiveness of an organization, but if you have a good suspicion that other donors are biased in a certain way, you can go in the opposite direction to find the more neglected charities.

If you have no idea which charities might be best, you can always just pick at random, or split your donation, or donate to whichever one you like best for small reasons (e.g. you personally appreciate their research or something like that).

Comment by kbog on kbog's Shortform · 2020-06-11T07:45:47.601Z

Shouldn't we collect a sort of encyclopedia or manual of organizational best practices, to help EA organizations? A combination of research, and things we have learned?

Comment by kbog on As a small donor, should I donate with an ecosystem approach or focus on one organization? · 2020-06-11T07:16:03.560Z

It's pretty straightforward: donate to wherever your money can do the most good at the moment. If this month it's Org A then you donate to Org A, and if next month it's Org B then you should switch. Cost-effectiveness rankings can change. This is not about ecosystems in particular. Sometimes we gain new information about charity effectiveness, sometimes a charity fills its funding needs and no longer needs more money.

Glancing at that Open Phil page, it looks like they are saying that they don't only look at how much good an organization is directly doing, but they also look at how effective they are when considering the more general needs of their sector of the nonprofit industry.

I don't know if it's common that Open Phil or anyone correctly identifies an ecosystem consideration that substantially changes the cost-effectiveness of a particular charity, but if you have identified such a consideration, of course you shouldn't simply omit it from your analysis. If it means the charity does more or less good, of course you should pay attention to it.

Comment by kbog on Will protests lead to thousands of coronavirus deaths? · 2020-06-11T03:00:00.657Z

Here's my cost-benefit analysis. (I also posted it to my shortform, but I don't see a way to link directly to a shortform post.)

https://www.getguesstimate.com/models/16272

Comment by kbog on kbog's Shortform · 2020-06-11T02:58:51.687Z

I just noticed this post and the ensuing discussion. I want to share a model I recently made which seeks to answer the question: are these protests beneficial or harmful?

https://www.getguesstimate.com/models/16272

In summary:

  • The expected deaths caused by COVID spread outnumber the expected lives saved from reducing police brutality by a factor of 16.
  • If we adjust for QALYs (COVID mainly kills older folks), the COVID mortality is still worse than the reduction in police killings, though only by a factor of 5 (see the sketch after this list).
  • When I estimate a general positive impact of these protests upon America's political system - specifically, that they'll increase Democratic voteshare this November - it seems that the protests are neutral as far as American citizens are concerned, but (more importantly of course) positive when we include foreigners and animals.
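To make the QALY adjustment concrete, here's a minimal sketch; every number is a placeholder chosen only to reproduce the stated ratios, not a value from the actual Guesstimate model:

```python
# Placeholder numbers only -- chosen to reproduce the stated 16x and ~5x
# ratios, not taken from the actual Guesstimate model linked above.
covid_deaths = 1600        # hypothetical expected deaths from protest-driven COVID spread
police_lives_saved = 100   # hypothetical expected lives saved via reduced police brutality
print(covid_deaths / police_lives_saved)   # 16.0: raw death-count ratio

# COVID deaths skew old, police-violence deaths skew young (assumed QALY weights):
qalys_per_covid_death = 8
qalys_per_police_death = 25
qaly_ratio = (covid_deaths * qalys_per_covid_death) / (police_lives_saved * qalys_per_police_death)
print(round(qaly_ratio, 1))                # 5.1: still net-negative, but by ~5x rather than 16x
```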

I want to say upfront that this doesn't mean I endorse the protests; I still feel a bit negative about them due to the prisoner's dilemma at play (as in Larks' highly upvoted comment in the other thread - I also came up with the same point).

Comment by kbog on EA and tackling racism · 2020-06-11T01:44:36.371Z

Besides, I think someone should deeply think about how EAs should react to the possibility of social changes – when we are more likely to reach a tipping point leading to a very impactful event (or, in a more pessimistic tone, where it can escalate into catastrophe).

In my head I am playing with the idea of a network/organization that could loosely, informally represent the general EA community and make some kind of public statement, like an open letter or petition, on our general behalf. It would be newsworthy and send a strong signal to policymakers, organizations etc.

Of course it would have to be carried out to high epistemic standards and with caution that we don't go making political statements willy nilly or against the views of significant numbers of EAs. But it could be very valuable if used responsibly.

Comment by kbog on Climate Change Is Neglected By EA · 2020-06-11T01:34:28.794Z

(C) Social cost of carbon is usually computed from an IAM, a practice which has been described as such:
"IAMs can be misleading – and are inappropriate – as guides for policy, and yet they have been used by the government to estimate the social cost of carbon (SCC) and evaluate tax and abatement policies." [Pindyck, 2017, The Use and Misuse of Models for Climate Policy]

You can also use economists' subjective estimates ( https://policyintegrity.org/files/publications/ExpertConsensusReport.pdf ) or model cross-validation ( https://www.rff.org/publications/working-papers/the-gdp-temperature-relationship-implications-for-climate-change-damages/ ), and the results are not dissimilar to the IAMs by Nordhaus and Howard & Sterner. (It's 2-10% of GWP for about three degrees of warming regardless.)

In any case I think that picking a threshold (based on what exactly??) and doing whatever it takes to get there will have more problems than IAMs do.

I see that you use GWWC's estimate of tonnes of CO2 per life saved. I critiqued GWWC's approach in this previous post.

Nice, that looks like a good noteworthy post. I will look at it in more detail (would take a while). Until then I'm revising from 258,000 tons down to 40,000 (geometric mean of their estimate and your 15,620 but biased a little towards you).
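For reference, the unweighted geometric mean of the two estimates is √(258,000 × 15,620) ≈ 63,500 tonnes per life, so rounding down to 40,000 is what encodes the extra weight on your lower figure.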

"40% of Earth’s population lives in the tropics, with 50% projected by 2050 (State of the Tropics 2014) so we estimate 6 billion people affected (climate impacts will last for multiple generations)." - The world population is expected to be ~10 billion by 2050, so 50% would be 5 billion. How are you accounting for multiple generations?

I figured many people will be wealthy and industrialized enough to generally avoid serious direct impacts, so it wasn't an estimate of how many people will live in warming tropical conditions. But looking at it now, I think that's the wrong way to estimate it because of the ambiguity that you raise. I'm switching to all people potentially affected (12 billion), with a lower average QALY loss.

"We discount this to 2 billion to account for the meat eater problem" - What is the meat eater problem?

Described in "short-run, robust welfare" section of "issue weight metrics," it's the fact that increases in wealth for middle-income consumers may be net neutral or harmful in the short run because they increase their meat consumption.

"If each of them suffers -1 QALY over their lifetime from climate change on average" - why did you choose -1 QALY?

Subjective guess. Do you think it is too high or too low? Severely too high, severely too low?

Why did you choose to multiply 550 by ~3.9?

Arbitrary guess based on the quoted factors. Do you feel that is too low or too high?

I agree that this is a plausible possibility, but not one which I'd like to have to rely on.

I'm not saying to rely on it. I'm saying your estimates of climate damages cannot rely on geoengineering not happening. The chance that we see "full" geoengineering by 2100 (restoring the globe to optimal or preindustrial temperature levels) is, hmm, 25%? Higher probability for less ambitious measures.

If we were in the 1980s, it would be improper to write a model which assumed that cheap renewable energy would never be developed.


Based on these changes I've increased the weight of air pollution from 15.2 to 16. (It's not much because most of the weight comes from the long run damage, not the short run robust impacts. I've increased short run impact from 2.15 million QALYs to 3 million.)

I already did that: "Review of Climate Cost-Effectiveness Analyses". I would love to get your feedback on that post.

Yes I will look into that and update things accordingly.

Comment by kbog on Climate Change Is Neglected By EA · 2020-05-27T10:45:51.232Z

I find this whole genre of post tedious and not very useful. If you think climate change is a good cause area, just write an actual cause prioritization analysis directly comparing it to other cause areas, and show how it's better! If that's beyond your reach, you can take an existing one and tweak it. This reads like academic turf warring, a demand that your cause area should get more prestige, instead of a serious attempt to help us decide which cause areas are actually most important.

1) There is a lack of evidence for the more severe impacts of climate change, rather than evidence that the impacts will not be severe.

OK, but I don't know if anyone here was previously assuming that the impacts will definitely not be severe. The EA community has long recognized the risks of more severe impact. So this doesn't seem like a point that challenges what we currently believe.

One of the central ideas in effective altruism is that some interventions are orders of magnitude more effective than others. There remain huge uncertainties and unknowns which make any attempt to compute the cost effectiveness of climate change extremely challenging. However, the estimates which have been completed so far don’t make a compelling case that mitigating climate change is actually order(s) of magnitude less effective compared to global health interventions, with many of the remaining uncertainties making it very plausible that climate change interventions are indeed much more effective.

I haven't read those previous posts you've written, but the burden of argument is on showing that a cause is effective, not proving that it's ineffective. We have many causes to choose from, and the Optimizer's Curse means we must focus on ones where we have pretty reliable arguments. Merely speculating "what if climate change is worse than the best evidence suggests???" does nothing to show that we've neglected it. It just shows that further cause prioritization analysis could be warranted.

The EA importance, tractability, neglectedness (ITN) framework discounts climate change because it is not deemed to be neglected (e.g. scoring 2/12 on 80K Hours). I have previously disagreed with this position because it ignores whether the current level of action on climate change is anywhere close to what is actually required to solve the problem (it’s not).

This criticism doesn't make sense to me. The mere fact that a problem will be unsolved doesn't mean it's more important for us to work on it. What matters is how much we can actually accomplish by trying to solve it.

The 80K Hours problem profile makes no mention of the concept of a carbon budget - the amount of carbon which we can emit before we are committed to a particular level of warming.

That's fine. Marginal/social cost of carbon is the superior way to think about the problem.

4) EA often ignores or downplays the impact of mainstream climate change, focusing on the tail risk instead

I've seen EAs talk about 'mainstream' costs many times. GWWC's early analysis on climate change did this in detail. In any case, my estimate of the long-term economic costs of climate change (detailed writeup in Candidate Scoring System: http://bit.ly/ea-css ) aggregates over the various scenarios.

5) EA appears to dismiss climate change because it is not an x-risk

This phrasing suggests to me that you didn't read, or perhaps don't care, what is actually in many of the links that you're citing. We do not believe that climate change is irrelevant because it's not an x-risk. We do, however, believe that the arguments in favor of mitigating x-risks do not apply to climate change. So that provides one reason to prioritize x-risks over climate change. This is clearly a correct conclusion and you haven't provided arguments to the contrary.

6) EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk

If you think that people will like EA more when they see us addressing on climate change, why don't you highlight all the examples of EAs actually addressing climate change (there are many examples) instead of writing (yet another, we've had many) post making the accusation that we neglect it?

7) EA tries to quantify problems using simple models, leading to undervaluing of action on climate change

Other problems have complex, far-reaching negative consequences too, so it's not obvious that simplistic modeling leads to an under-prioritization of climate change. It is very easy to think of analogous secondary effects for things like poverty.

In any case, estimating the damages of climate change upon the human economy has already been addressed by multiple economic meta-analyses. Estimating the short- and medium-term deaths has been done by GWWC. Estimating the impacts on wildlife is generally sidelined because we have no idea if they are net positive or net negative for wild animal welfare.

Global health interventions have a climate footprint, which I’ve never seen accounted for in EA cost effectiveness calculations.

I briefly addressed it in Candidate Scoring System, and determined that it was very small. If you look at CO2 emissions per person and compare it to the social cost of carbon, you can see that it's not much for a person in the United States, let alone for people in (much-lower-emissions) developing countries.
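As a rough illustration (ballpark figures of mine, not from CSS): US per-capita emissions are on the order of 15 tonnes of CO2 per year, which at a social cost of carbon around $50/tonne comes to roughly $750 per person per year; per-capita emissions in low-income countries are typically an order of magnitude lower, so the footprint attached to a health intervention's beneficiaries is small relative to the intervention's direct benefits.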

Climate change is a problem which is getting worse with time and is expected to persist for centuries. Limiting warming to a certain level gets harder with every year that action is not taken. Many of the causes compared by EA don’t have the same property. For example, if we fail to treat malaria for another ten years, that won’t commit humanity to live with malaria for centuries to come. However, within less than a decade, limiting warming to 1.5C will become impossible.

Climate change being expected to persist for centuries is conditional upon the absence of major geoengineering. But we could quite plausibly see that in the later 21st century or anytime in the 22nd century.

Failing to limit warming to a certain level is a poor way of defining the problem. If we can't stay under 1.5C, we might stay under 2.0C, which is not that much worse. The right way to frame the problem is to estimate how much accumulated damage will be caused by some additional GHGs hanging around the atmosphere for, probably, a century or more. That is indeed a long-term cost.
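In code, that framing is just a discounted sum over the years the marginal tonne remains airborne (all parameters are placeholders, not estimates):

```python
# Sketch of the framing above: the cost of a marginal tonne of CO2 is the
# discounted sum of the damage it does in each year it remains in the
# atmosphere. All parameters are illustrative placeholders.

annual_marginal_damage = 1.0  # $/tonne/year while the tonne is airborne (made up)
residence_years = 100         # rough atmospheric persistence horizon
discount_rate = 0.03

total = sum(annual_marginal_damage / (1 + discount_rate) ** t
            for t in range(residence_years))
print(round(total, 1))  # ~32.5: damages a century out contribute little at 3%
```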

But other cause areas also have major long-run impacts. There are plenty of arguments and evidence for the long-run benefits of poverty relief, health improvements, and economic growth.

10) Case study: Climate is visibly absent or downplayed within some key EA publications and initiatives

Pick another cause area that's currently highlighted, compare it to climate change, and show how climate change is a more effective cause area.

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-03-31T02:32:13.662Z · EA · GW

1. But WHY do you believe that the costs outweigh the benefits? Again - the paper looking at Ethiopia estimated that the benefits of lower prices outweighed the costs on average. This seems intuitively sensible, too - if we sell subsidized low-priced goods, it should increase recipients' wealth, at least in the short run.

2. It could be - and there are also many other ways to address vulnerability to spikes in global commodity prices, as described in the last paper I linked. Of course, none of these solutions is perfect and simple; otherwise the problems would not exist anymore. I think we should look at the likely consequences within current regimes rather than assuming that countries/societies will get much better at responding to problems.

3. But you see how it's a tradeoff, right? People can specialize in farming or they can specialize in other trades, not both. There can be different people doing different jobs, but every person who becomes a farmer is neglecting the possibility of specializing in something else. If a country has an industrial policy it will have to make a tough choice of what industries it wants to specialize in.

I am adding these considerations to Candidate Scoring System, which is more of an encyclopedia covering all kinds of policy issues, but for the Civic Handbook I think I will leave the matter out, as it does not have the kind of clear argumentative support necessary to build an Effective Altruist consensus.

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-03-28T22:22:53.136Z · EA · GW

Regarding food aid, you showed a couple of papers discussing negative impacts from 'food dumping' - subsidized agricultural exports from wealthy countries to poor ones - a topic that you have studied in detail.

https://sci-hub.tw/10.1017/s1742170519000097

https://digitalcommons.law.seattleu.edu/cgi/viewcontent.cgi?article=1089&context=sjel

I did not read all of the text, but they mainly say: the foreign impact is that it displaces farmers. We send cheap exports, which are in fact cheaper than what a free market would produce, for a combination of reasons but mainly because of our agricultural subsidies. This puts farmers in the aid-receiving country out of work because they cannot compete.

My immediate objection is, why believe that the costs to farmers outweigh the benefits to consumers? If food is lower-priced then that should help many people. I found this paper arguing that the consumer benefits outweighed the hit to farming, on average, for households at all income levels in Ethiopia. It was not cited by either of the papers listed above.

The 1st article also says that dependence on food imports creates vulnerability to price spikes, citing this paper. But local food sources are volatile too, no? Local weather patterns, political instability, plant diseases, etc. can create local price spikes. I imagine this local volatility would be worse than volatility in global commodity prices. Now, you can have imports step up to cover local price spikes, but you can also have local production step up to cover global price spikes. The former may be easier, but overall I just don't see good reason to believe that dumping increases price volatility.
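A toy simulation of why pooled global prices should be less volatile than any one local market, assuming independent local shocks (which overstates the effect, but shows the mechanism):

```python
import random
import statistics

# Toy simulation: a global price pools shocks across many producing regions,
# so it should be less volatile than any one local market. Assumes independent
# local shocks, which overstates the effect but illustrates it.

random.seed(0)
n_regions, n_years = 50, 1000
local_prices = [[100 + random.gauss(0, 20) for _ in range(n_years)]
                for _ in range(n_regions)]
global_prices = [statistics.mean(year_prices)
                 for year_prices in zip(*local_prices)]

print(statistics.stdev(local_prices[0]))  # ~20: one region's volatility
print(statistics.stdev(global_prices))    # ~20/sqrt(50) ~ 2.8: pooled volatility
```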

There is then the long-run question of whether a country should develop its agricultural sector vs other sectors. The 1st paper touches on this. I will have to think/read more on this, or maybe you can better answer it.

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-03-27T21:06:11.392Z · EA · GW

1. OK, I am putting key sentences in bold now. Not entirely sure I like it, though.

2. Thanks, I replied in comments within the document.

Comment by kbog on An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) · 2020-03-27T20:54:22.081Z · EA · GW

Done, though I still haven't settled on a properly catchy name. "Effective Altruist civic handbook" was just meant as a placeholder.

Comment by kbog on Why not give 90%? · 2020-03-24T14:47:59.840Z · EA · GW

You can receive answers to these claims by making a dedicated thread rather than hijacking the current one.

Comment by kbog on Why not give 90%? · 2020-03-24T02:40:31.471Z · EA · GW

To respond to the on-topic part of your post (I also downvoted because it's mostly off-topic): I don't see how you can shrug off the benefits of donating more than 10% as if 10% is good enough, while also saying that we must interview people and read whole swathes of additional papers in the hope that some of it might improve cause prioritization. If you really want Effective Altruists to capture the benefits of reading non-Western scientific literature, then clearly you don't think we can shrug our shoulders and say we're good enough, so you should recognize that donating more money is another way we can similarly do better. The two are actually fungible: you can donate money to movement growth via advertisements targeted at foreign countries, or to cause prioritization efforts that hire researchers to survey, review, and summarize the literatures you think are valuable.

Comment by kbog on Why not give 90%? · 2020-03-23T21:06:09.110Z · EA · GW

You're assuming that the probability of giving up each year is conditionally independent. In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Moreover, I rather doubt that the probability of turning selfish and giving up on Effective Altruism is anywhere near 50% in a given year. If it were that high, I think we'd have more evidence of it, despite the usual difficulty of hearing back from people who are no longer interested.
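A toy model of the difference (the 50%/10% hazard rates are arbitrary illustrations, not estimates):

```python
import random

# Toy model of the point above: if surviving a year of high giving makes you
# less likely to quit the next year (habit formation), expected total giving
# is higher than an independent coin flip each year would suggest.
# The 50%/10% hazard numbers are arbitrary illustrations.

random.seed(0)

def expected_giving_years(initial_quit_p, later_quit_p, horizon=40, trials=100_000):
    total = 0
    for _ in range(trials):
        quit_p = initial_quit_p
        for year in range(horizon):
            if random.random() < quit_p:
                break
            total += 1
            quit_p = later_quit_p  # hazard drops after the first year survived
    return total / trials

print(expected_giving_years(0.5, 0.5))  # independent: ~1 year of giving on average
print(expected_giving_years(0.5, 0.1))  # habit-forming: ~4.9 years on average
```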

Also, this doesn't break your point, but I think percentages are the wrong way to think about this. In reality, donations should be much more dependent upon local cost of living than upon your personal salary. If COL is $40k and you make $50k then donate up to $10k. If COL is $40k and you make $200k then donate up to $160k.

People whose jobs are higher-impact/higher-salary (the two are correlated, due to donation potential if nothing else) are likely to face higher costs of living and are also likely to obtain greater benefits from personal spending (averting a 1% chance of burnout is much more important if your job is high-impact; saving an hour out of your week is much more important if your hourly wage is higher; etc.). So the appropriate amount of personal spending does scale somewhat with income. However, this effect is weak enough that I think it usually makes more sense to think in terms of thresholds rather than percentages.
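For concreteness, here is the threshold rule from above next to a fixed-percentage rule, using the example figures from the text:

```python
# The threshold rule described above, versus a fixed-percentage rule.
# Cost-of-living and salary figures are the examples from the text.

def threshold_donation(income, cost_of_living):
    # Donate everything above your local cost of living (an upper bound).
    return max(0, income - cost_of_living)

def percentage_donation(income, pct=0.10):
    return income * pct

for income in (50_000, 200_000):
    print(income, threshold_donation(income, 40_000), percentage_donation(income))
# 50000  -> threshold 10000 vs 10% rule 5000
# 200000 -> threshold 160000 vs 10% rule 20000
```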

Comment by kbog on Voting is today (Tuesday March 3) in California and other states - here are recommendations · 2020-03-23T20:46:05.816Z · EA · GW

New candidates have never served in Congress and therefore do not have legislative track records on animal welfare, and it's such a minor issue to most voters that candidates almost never express their views on it while running for office.

Comment by kbog on 80,000 Hours: Anonymous contributors on flaws of the EA community · 2020-03-04T05:04:58.942Z · EA · GW

80k ought to frame this as "room for improvement" or something along those lines instead of "flaws." This is part of being media savvy.

Comment by kbog on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-03-03T09:07:15.526Z · EA · GW

(Sorry for late reply)

First, did you see the truthfulness part? I rated candidates by their average truthfulness to the public, according to PolitiFact. That's not identical to what you're asking about, but it may be of interest.

Biden does relatively poorly. Sanders does well, though he seems to have a more specific and serious pattern of presenting misleading narratives about the economy (I haven't factored this in; maybe I should). Warren does well, though I did dock some points for some specifically significant lies. Bloomberg seems to be doing quite well, though he has less of a track record, so it's harder to be sure.
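For what it's worth, the scoring is conceptually as simple as this sketch (the numeric mapping and the sample counts are hypothetical, not the actual CSS inputs):

```python
# Minimal sketch of the truthfulness scoring described above. The rating
# scale mirrors PolitiFact's categories; the mapping to numbers and the
# sample of statements below are hypothetical, not the actual CSS inputs.

RATING_SCORES = {
    "True": 1.0, "Mostly True": 0.75, "Half True": 0.5,
    "Mostly False": 0.25, "False": 0.0, "Pants on Fire": -0.25,
}

def truthfulness(rated_statements):
    # Average score over a candidate's fact-checked statements.
    return sum(RATING_SCORES[r] for r in rated_statements) / len(rated_statements)

example = ["True", "Half True", "Mostly True", "False", "Mostly True"]
print(truthfulness(example))  # 0.6 on this hypothetical sample
```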

OTOH, it seems like you're primarily concerned about epistemics within an administration - that there might be some kinds of political correctness standards. I've docked points from Trump because there have been a number of cases of this under his watch. Among Democrats, I feel there would be more risk of it with Sanders, because of how many of his appointments/staff are likely to come from the progressive left. Even though he's perceived as a rather unifying figurehead as far as the culture wars are concerned, he would likely fare worse from your angle. But I feel this is too speculative to include. I can't think of any issues where the 'redpill' story, if true, would be very important for the federal government to know about. And there will not be a lot of difference between candidates here.

EA forum user Bluefalcon has pointed out that Warren's plan to end political appointments to the Foreign Service may actually increase groupthink, because the standard recruitment pipeline puts everyone through very similar training and doctrine. Hence, I've recently given slightly fewer points to Warren's foreign policy than I used to.

Comment by kbog on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-02T15:33:11.145Z · EA · GW

I previously included wild animal suffering in the long-run weight of animal welfare. Having looked at some of these links and reconsidered, I think I was over-weighting animal welfare's impact on wild animal suffering.

One objection here is that improving socioeconomic conditions can also broadly improve people's values. Generally speaking, increasing wealth and security promotes self-expression values, which correspond decently well to having a wide moral circle. So there's less general reason to single out moral issues like animal welfare as being a comparatively higher priority.

However, improving socioeconomic conditions also brings forward the date at which technological s-risks will present themselves. So in some cases we are looking for differential moral progress, which tells me to increase the weight of animal welfare for the long run. (It's overall slightly higher now than before.)

Another objection: a lot of what we perceive as pure moral concern vs apathy in governance could really be understood as a different tradeoff of freedom versus government control. It's straightforward in the case of animal farming or climate change that the people who believe in a powerful regulatory state are doing good whereas the small-government libertarians are doing harm. But I'm not sure that this will apply generally in the future.

Emerging tech is treated as an x-risk here, so s-risks from tech should be considered separately. In terms of determining weights and priorities I would sooner lump s-risks into growth and progress than into x-risks.

I don't see climate change policy as promoting better moral values. Sure, better moral values can imply better climate change policy, but that doesn't mean there's a link the other way. One of the reasons animal welfare uniquely matters here is that we think there is a specific phenomenon where people care less about animals in order to justify their meat consumption.

At the moment I can't think of other specific changes to make but I will keep it in mind and maybe hit upon something else.

Comment by kbog on Responding to the Progressive Platform of “Foreign Policy Generation” · 2020-02-02T13:28:51.675Z · EA · GW
Migration and Development: Dissecting the Anatomy of the Mobility Transition
The Hypothesis of the Mobility Transition by Wilbur Zelinsky (1971)

Mexico's GDP per capita and Gini coefficient have been about constant for the past decade. I can't find evidence on changes in college educational attainment. So it's not apparent that they are pushing forward along this transition. Moreover, Mexico constitutes only ~half of illegal immigration, and many Latin American countries are poorer (in fact, they are still behind the ~$6k transition peak).

All the data+papers presented before and in this post.

None of them asked Mexican people how content they are to stay rather than emigrate.

The obvious: the number of kids being born in Mexico peaked in 1994 at 2.9 million and has fallen to 2.16 million births in 2018. If emigration rates remain the same, we can expect a lower number of Mexicans trying to emigrate.

Mexico's population is still growing. So if the emigration rate per 1,000 people remains constant, the number of annual emigrants will grow year over year, just at a lower rate than it would if fertility were higher.
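A quick numeric illustration (the rate and population figures are rough placeholders, not official statistics):

```python
# Quick illustration of the point above: with a constant emigration rate per
# 1,000 people, total emigrants track population, not births. The rate and
# population figures below are rough placeholders, not official statistics.

rate_per_1000 = 5.0
population_now = 125_000_000           # Mexico is roughly in this range
population_next_decade = 132_000_000   # still growing despite falling births

print(rate_per_1000 / 1000 * population_now)           # 625,000 emigrants/yr
print(rate_per_1000 / 1000 * population_next_decade)   # 660,000 emigrants/yr
```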

When fertility rates fall, the pull of the home country is greater for emigres as parents age, and parents are less enthusiastic about kids emigrating in the first place.

Please provide a source. It may be the case that people with aging families to support desire to emigrate in order to send remittances.

Mexican emigration has gone down, similar to Ireland, Japan, the UK, etc.

It is still a vastly different country.

Around 5% of those wishing to move to the US actually moved.

And many more tried to move but were apprehended at the border, or chose not to move because they were afraid of being apprehended at the border or otherwise policed.

The number of Mexicans attempting to cross the border illegally has crashed from a high of 1.615 million in 2000 to 152,257 in 2018

You're confusing apprehensions with crossing attempts and neglecting to mention the increase in apprehensions of non-Mexican migrants.

However, neither FPGen nor the Democrats are advocating open borders; I doubt that even under the least restrictive proposals US net immigration will exceed 1 million on average over the next 20 years.

Whether or not a country has open borders is not a question of the quantity of immigrants who enter the country.

I just ignore them.

Fine, but don't then tell me I'm wrong when I'm not.

Second of all, the American right wing is correct in perceiving that America fails to reliably control the southern border or police the undocumented migrant population.

I look for universal definitions, open borders means that anyone can come and live in USA

That's probably what would happen here: assuming you make it to the border, CBP will not have the power to detain you, ICE will not exist, you will be "legally protected," you will not have a criminal record, and you will have a "pathway to citizenship."

Comment by kbog on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-02T12:18:16.872Z · EA · GW

These seem like small impacts on the national level. My comment on this dimension of wealth taxation is simply:

"Wealth taxes would also encourage more rapid spending on luxury consumption, political contributions, and philanthropy. It’s not clear if this is generally good or bad. Of course the tax would also reduce the amount of money that is ultimately available for the rich to use on these things, although the cap on political contributions means that it probably wouldn’t make much difference there."

Comment by kbog on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-01T07:53:10.573Z · EA · GW

Good find, adding this too.

Comment by kbog on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-01T07:49:29.666Z · EA · GW

Good point. Increasing the weight by 40% until I or someone else does a better calculation.