Posts

A framework for self-determination 2021-07-21T11:33:44.812Z
Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal 2021-04-29T03:20:45.697Z
Review: "Why It's OK To Ignore Politics" by Christopher Freiman 2021-02-23T05:38:39.977Z
A comparison of American political parties 2020-12-09T20:45:14.844Z
American policy platform for total welfare 2020-12-03T18:07:45.068Z
EA politics mini-survey results 2020-12-01T18:41:38.603Z
Taking Self-Determination Seriously 2020-11-27T13:49:14.108Z
Please take my survey! 2020-11-27T09:00:09.942Z
Instability risks of the upcoming U.S. election and recommendations for EAs 2020-11-03T01:19:13.673Z
2020 United States/California election recommendations 2020-10-31T23:15:12.901Z
Super-exponential growth implies that accelerating growth is unimportant in the long run 2020-08-11T07:20:19.242Z
Idea: statements on behalf of the general EA community 2020-06-11T07:02:08.317Z
kbog's Shortform 2020-06-11T02:58:51.376Z
An Effective Altruist "Civic Handbook" for the USA (draft, calling for comments and assistance) 2020-03-23T23:06:18.709Z
Voting is today (Tuesday March 3) in California and other states - here are recommendations 2020-03-03T10:40:32.995Z
An Informal Review of Space Exploration 2020-01-31T13:16:00.960Z
Candidate Scoring System recommendations for the Democratic presidential primaries 2020-01-31T12:25:00.682Z
Concrete Foreign Policy Recommendations for America 2020-01-20T21:52:03.860Z
Responding to the Progressive Platform of “Foreign Policy Generation” 2020-01-19T20:24:00.971Z
A small observation about the value of having kids 2020-01-19T02:37:59.391Z
Love seems like a high priority 2020-01-19T00:41:51.617Z
Tentative Thoughts on Speech Policing 2020-01-06T19:20:36.485Z
Response to recent criticisms of EA "longtermist" thinking 2020-01-06T04:31:07.614Z
Welfare stories: How history should be written, with an example (early history of Guam) 2020-01-02T23:32:10.940Z
Tentative thoughts on which kinds of speech are harmful 2020-01-02T22:44:58.055Z
On AI Weapons 2019-11-13T12:48:16.351Z
New and improved Candidate Scoring System 2019-11-12T08:49:34.392Z
Four practices where EAs ought to course-correct 2019-07-30T05:48:57.665Z
Extinguishing or preventing coal seam fires is a potential cause area 2019-07-07T18:42:22.548Z
Should we talk about altruism or talk about justice? 2019-07-03T00:20:40.213Z
Consequences of animal product consumption (combined model) 2019-06-15T14:46:19.564Z
A vision for anthropocentrism to supplant wild animal suffering 2019-06-06T00:01:43.953Z
Candidate Scoring System, Fifth Release 2019-06-05T08:10:38.845Z
Overview of Capitalism and Socialism for Effective Altruism 2019-05-16T06:12:39.522Z
Structure EA organizations as WSDNs? 2019-05-10T20:36:19.032Z
Reasons to eat meat 2019-04-21T20:37:51.671Z
Political culture at the edges of Effective Altruism 2019-04-12T06:03:45.822Z
Candidate Scoring System, Third Release 2019-04-02T06:33:55.802Z
The Political Prioritization Process 2019-04-02T00:29:43.742Z
Impact of US Strategic Power on Global Well-Being (quick take) 2019-03-23T06:19:33.900Z
Candidate Scoring System, Second Release 2019-03-19T05:41:20.022Z
Candidate Scoring System, First Release 2019-03-05T15:15:30.265Z
Candidate scoring system for 2020 (second draft) 2019-02-26T04:14:06.804Z
kbog did an oopsie! (new meat eater problem numbers) 2019-02-15T15:17:35.607Z
A system for scoring political candidates. RFC (request for comments) on methodology and positions 2019-02-13T10:35:46.063Z
Vocational Career Guide for Effective Altruists 2019-01-26T11:16:20.674Z
Vox's "Future Perfect" column frequently has flawed journalism 2019-01-26T08:09:23.277Z
A spreadsheet for comparing donations in different careers 2019-01-12T07:32:51.218Z
An integrated model to evaluate the impact of animal products 2019-01-09T11:04:57.048Z
Response to a Dylan Matthews article on Vox about bipartisanship 2018-12-20T15:53:33.177Z

Comments

Comment by kbog on Against longtermism · 2022-08-11T13:53:21.987Z · EA · GW

Here is the report (at first I'd been unable to find it)

If they are indeed net positive it does seem useful to establish consensus that that is so!

In this section of my policy platform, I have compiled sources with all the major arguments I could find regarding nuclear power, specifically under the heading "Fission power should be supported although it is expensive and not necessary":

https://happinesspolitics.org/platform.html#cleanenergy

I think with this compilation of pros/cons, and a background understanding that fossil fuel use is harmful, it is easy to see that nuclear is at least better than using fossil fuels.

Comment by kbog on Against longtermism · 2022-08-11T12:04:24.497Z · EA · GW

Some comments on "the road to hell is paved with good intentions"

This podcast is kind of relevant: Tom Moynihan on why prior generations missed some of the biggest priorities of all - 80,000 Hours (80000hours.org)

So people in the Middle Ages believed that the best thing was to save more souls, but I don't think that exactly failed. That is, if a man's goal was to have more people believe in Christianity, and he sincerely joined the Crusades or colonial missionary expeditions, he probably did help achieve that goal.

Likewise, for people in the 1700s, 1800s and early 1900s, when the dominant paradigm shifted to one of human progress, I think people could reliably find ways to improve long-term progress. New science and technology, liberal politics, etc all would have been straightforward and effective methods to get humanity further on the track of rising population, improved quality of life, and scientific advancement.

Point is, I think people have always tended to be significantly more right than wrong about how to change the world. It's not too too hard to understand how one person's actions might contribute to an overriding global goal. The problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant/repetitive/decaying and just a prelude to the afterlife. The second paradigm was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both these paradigms, and instead we have a view of precarity - that an incredibly good future is in sight but only if we proceed with caution, wisdom, good institutions and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.

I don't mean to particularly agree or disagree with your original post, I just think this is a helpful clarification of the point.

Comment by kbog on Against longtermism · 2022-08-11T11:43:13.285Z · EA · GW

I recall that the Founder's Pledge report on climate change some years ago discussed nuclear proliferation from nuclear energy, and it seemed like nuclear power plants could equally promote proliferation or work against it (the latter by using up the supply of nuclear fuel). Considering how many lives have been taken by fossil fuels, I feel it's clear that nuclear energy has been net good. That said, I have a hard time believing that a longtermist in the 1960s would have opposed nuclear power plants.

Not that I disagree with the general idea that if you imagine longtermists in the past, they could have come up with a lot of neutral or even harmful ideas.

Comment by kbog on An Informal Review of Space Exploration · 2022-08-11T11:12:51.679Z · EA · GW

Yes, that's a good point. Since writing this post I've become a bit more negative about space colonization in general for humanity, but for the reason you bring up, I remain slightly positive about space colonization by certain countries including the USA.

Comment by kbog on Persistent Democracy: request for early feedback · 2022-08-11T09:56:56.116Z · EA · GW

I think we would agree that the fact that a reform won't fix everything is not a reason not to do it. I suppose you're simply saying that better voting methods will only cause a mild improvement in governance, not a major one. But I would argue that the characteristics of political institutions are a major explanation for why things like horrendous human rights violations sometimes do or don't happen.

Comment by kbog on Persistent Democracy: request for early feedback · 2022-08-11T09:51:43.871Z · EA · GW
Is the writing intuitive? Are any concepts difficult to grasp?

The presentation of your website looks basically alright, but your format is to start with the problem, go through a bit of a story, explain concepts of voting methods, and then wind up with the solution at the end. That works in some contexts, but in a more academic or technical setting, what I find easier to work with is to state the thesis upfront and then unpack it with details lower down. The blogging/rhetorical style is understandable for the front page of the website, but when I click the link to the chapter Persistent Democracy, there at least I expect to immediately jump into a snappy description of what you are proposing. Something like:

"In Persistent Democracy, scheduled elections will be replaced by ______ where voters can _____. This will solve the problems of ______, _______, and ______, by ______ and _____. " Hope you get the idea.

As it stands, I'm a little unsure what you're proposing because on one hand you say it will enable direct democracy but on the other hand you talk about how politicians such as mayors might be elected with your system.

Does the concept seem robust and useful enough that it's worth experimenting with? Are there any serious problems I haven't addressed? Does the book make a compelling and persuasive argument?

I've only looked at a few parts of your website so far, so don't take this as an attack, but a description of my current views so that you can know the challenge of what you'd have to do to overcome my skepticism. I support representative democracy over direct democracy. I think that in some cases, bureaucratic experts should have a bit more power. I think America currently has on average too much public scrutiny of government projects. I am wary of political systems which too heavily reward participatory effort, because it can give too much power to a vocal and well-resourced minority, as we see with NIMBY groups opposing upzoning. I think that people who are very informed and engaged in politics are not necessarily better voters because their knowledge and passion usually comes alongside heightened bias and radicalism. I think that Congress works better when the public doesn't pay close attention to it. I think that recall elections, such as those I observe in my state of California, are a bad system. And at least some EAs who are plugged into politics largely agree with these points (indeed, several of the above citations come from EAs). Putting it all together, this is a less populist point of view which makes me skeptical about your approach.

You also should probably address the question of whether your voting system would be secure, since it seems to require electronic voting. I gather that many informed people are very skeptical about the security of electronic voting. I see that you say something about the need to make safe software, but that aspiration won't be enough to convince election experts that such software actually will exist. And by the way, if you can describe the way to make provably correct, hacker-proof voting software, that alone is very impressive and should be presented somewhere else as a big idea in its own right, not just included as a component of this voting reform project. But, cards on the table, I am by default very skeptical of any claim that someone can make such foolproof software. In general, if your idea requires multiple breakthroughs or reforms at the same time, everything gets that much harder. Maybe there's a 30% chance of persuading people that your voting idea is theoretically desirable, and a 30% chance of the right software being available; then the chance of success is only 9%. If you can sketch out governance reforms which don't simultaneously require a breakthrough in software development, they will be more plausible.
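To make the compounding explicit, here is a minimal sketch using the illustrative numbers from the comment above (treating the two events as independent is an assumption):

```python
# Illustrative only: if success requires two independent breakthroughs,
# the joint probability is the product of the individual probabilities.
p_persuasion = 0.30  # assumed chance the voting idea is accepted as desirable
p_software = 0.30    # assumed chance provably secure voting software exists

p_success = p_persuasion * p_software
print(f"Chance of overall success: {p_success:.0%}")  # prints "Chance of overall success: 9%"
```

The same logic generalizes: each additional independent prerequisite multiplies the overall probability down further.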

Comment by kbog on Military Service as an Option to Build Career Capital · 2022-08-11T08:56:36.787Z · EA · GW

For a while I considered a career path of being a Civil Affairs officer in the Army National Guard alongside graduate economics school and a career in development economics. It seemed like it would have fairly good synergy, so you might look into doing that. However, for a junior to try to join the military in the hopes of getting funding for grad school... that is an unusual path. As a junior you may or may not be too late for ROTC. You might want to talk directly and immediately to an ROTC department and an Officer Strength Manager.

Comment by kbog on Military Service as an Option to Build Career Capital · 2022-08-11T08:47:33.494Z · EA · GW

Nice post. I largely agree (as someone who was in the US Army National Guard for a few years).

To push back a little, I didn't personally experience what you describe about internalizing a foreign ethic. And while military service does help one understand big national security issues, I think the advantage is generally rather slim and overrated - it's perhaps comparable to how volunteering for the Peace Corps doesn't necessarily provide good knowledge of development economics.

I certainly got more exposure to diversity (social, ethnic, income, age) through the military than through other venues (as someone who grew up in Glendale and went to a private university). And the military along with its typical demographics are kind of underrepresented within EA, so for diversity's sake we should look to be more connected with the military.

I don't think you fully described one of the best things people can take from the military, which is the spirit of service to a cause. I am confused when I see people shy away from EA commitments because it seems too sacrificial or because they don't feel social harmony with most of the EA community, when from my perspective it is absolutely expected that an ordinary person can step up to the plate and tolerate such risks and costs when lives are on the line. To me, sticking with the EA community through thick and thin is just obviously the right thing to do. Of course, it's not clear if joining the military will causally improve one's mindset in this manner when it comes to EA.

Also: more EAs should be aware of the option of warrant officer careers. It's a closer cultural fit with more technical focus.

Comment by kbog on Critique of OpenPhil's macroeconomic policy advocacy · 2022-03-27T14:31:01.514Z · EA · GW

I don't quite understand what your view is in your section on macro advocacy and in particular what you think is the relevance of that Weyl quote.

To be clear, I think this episode really shouldn't be taken as a lesson against technocracy. The technocrats were on the right side of this one - sure, the Fed was too loose in '21, but if it had been controlled by politicians it probably would have been even worse. The size of the stimulus was also a textbook expression of populism.

Of course you could also argue that Fed tightness prior to 2021 was a failure of the technocrats. Still, I was rather perplexed to see numerous EAs and adjacent folks at the time call to erode Fed independence just because of one (admittedly persistent) mistake that was already being corrected. There is a lot to say in defense of central bank independence and it shouldn't be jettisoned so lightly.

Regardless, Open Phil grantmaking is not technocracy because Open Phil is not a government. Open Phil is a component of civil society. It's important to uphold this distinction because a common tactic of bad faith critics like Weyl is to hyperbolize private actions and judge them by the standards of government actions. Technocracy is a form of government not the mere belief that one has the merits to try to lobby the government to act differently.

Comment by kbog on Why I'm concerned about Giving Green · 2021-07-07T22:35:52.153Z · EA · GW

As Giving Green is still recommending donations to TSM in spite of what seems to be the majority opinion here, I'd like to highlight a recent letter to the White House cosigned by TSM (among dozens of other groups). The letter argues that the United States should be less "antagonistic" towards China in order to focus on cooperating on climate change.

In reality, the United States and China have already agreed to cooperate on climate change. So TSM et al are not proposing any obvious change in US-China climate policy. Apparently they want us to be more generally friendly toward China in other domains, so that the already-agreed-to climate cooperation can run more smoothly.

The first problem with this is that it's not clear that US-China cooperation on climate change can achieve much anyway. The idea that America should cooperate with China on climate change is a trite line that gets repeated constantly as a superficial aspiration but to me seems rather deficient in policy substance. Exactly how this cooperation on climate change is supposed to work is generally a mystery if you try to think beyond vague outlines.

This letter states that the US and China can cooperate because they have 'complementary strengths', but this isn't even really true. The letter says "For example, the U.S. is the world leader in clean technology research and controls immense financial resources; China is the world leader in industrial capacity across a number of clean energy industries and is a major source of infrastructure financing across the Global South" but this is almost the same two strengths stated in slightly different ways. Clearly, both are financiers. China does have serious clean tech research and the US does have serious clean tech industry; maybe there is a comparative advantage in American research and Chinese manufacturing, but in practice you cannot separate green research and green manufacturing very easily (most of the recent green tech progress is innovations and scale arising from manufacturing), and there aren't severe trade barriers stopping American green technology ideas and Chinese manufacturing products from crossing the Pacific anyway.

No doubt there is room for some reforms of trade, travel and immigration to improve green technology transfer between the US and China. But in broad strokes, both the US and China can provide both financing and clean technology; this is classic economic competition. It is at least as likely that competition between the United States and China will lead both sides to put more effort into financing clean infrastructure and exporting clean technology. After all, China's motivation for infrastructure projects has been partly geopolitical, and there have been many calls in the US for financing similar infrastructure projects around the world in order to compete with China.

I am not alone in suggesting this. Numerous foreign policy experts have cut through the trite assumption echoed by that letter and shown how climate progress fits equally or better into a framework of competition with China.

Competition With China Can Save the Planet | Foreign Affairs

Why the United States should compete with China on global clean energy finance (brookings.edu)

Want to Compete with China? Deliver on Climate Security for the Indo-Pacific - Just Security

Productive Competition: A Framework for U.S.-China Engagement on Climate Change | Center for Strategic and International Studies (csis.org)

The second problem with TSM et al's idea that America should generally be more friendly with China is that it (obviously) has implications beyond climate policy. It is yet another example of TSM attempting to influence broader political issues besides environmental policy, an activity which can be either good or bad but definitely adds to the complexity and undermines the robustness of Giving Green's recommendation.

While TSM does not say so explicitly, the apparent subtext is that the United States should exercise little or no serious policy response to China's infliction of mass suffering through concentration camps in Xinjiang and its treaty-violating destruction of political rights in Hong Kong. Their only statement on human rights is that the United States should work together with China to support international best practices on human rights... this is a bizarre thing to say considering that China is one of the biggest current violators of international best practices on human rights. It can only suggest that either the letter signatories are ignorant of severe systematic human rights violations in China or they believe that we should turn a blind eye to them in order to focus on cooperating on other issues (almost certainly the latter).

The letter also has the subtext that the United States should exert less effort in deterring a Chinese invasion of Taiwan and should be more reluctant to defend Taiwan in the event that China does invade that island nation, that the United States should tolerate China's allegedly unfair trade practices (I admit that I agree that the US should be more tolerant here, but some climate donors may disagree), and that the United States should tolerate China's efforts to change international institutions and international law. (I delve into the complexity and probable harmfulness of China's international political aspirations in this essay.)

In my estimation this letter is probably net harmful, and I would like to see anyone affiliated with EA exercise extreme caution before recommending donations to an organization which seems to implicitly discourage reasonable efforts to curb ongoing massive human rights violations.

Edit: here's another notable story. TSM cancelled an event in which they were planning to study protest tactics from Hong Kong, because sympathizers with the Chinese Communist Party were offended by the implication of legitimizing Hong Kong protestors. The mere fact that TSM is not holding events to study Hong Kong protest tactics is of course not a problem in itself, but backtracking and capitulating like this suggests that TSM suffers from moral rot and/or excessive influence by unsavory authoritarians.

Edit2: see Matt Yglesias' recent article suggesting that TSM is probably doing more harm than good. For completeness, here is a reply, which seems completely unconvincing, except the link to this article is something noteworthy to think about.

Comment by kbog on On AI Weapons · 2021-06-19T20:48:41.997Z · EA · GW

Sorry, I worded that poorly - my point was the lack of comprehensive weighing of pros and cons, as opposed to analyzing just 1 or 2 particular problems (e.g. swarm terrorism risk).

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T13:46:36.200Z · EA · GW

Hm, certainly the vaccine rollout was in hindsight the second most important thing after success or failure at initial lockdown and containment.

It does seem to have been neglected by preparation efforts and EA funding before the pandemic, but that's understandable considering how much of a surprise this mRNA stuff was.

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T13:43:57.203Z · EA · GW
Prevention definitely helps. (It is a semantic question if you want to count prevention as a type of preparation or not)

I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not about just semantics, but precision on which efforts did well or poorly.

The idea that preparation (henceforth excluding prevention) helps is conventional wisdom and I think I would want to see good evidence against this to stop believing in this.

Conventional wisdom is worth little when it is the product of armchair speculation rather than experience. If people live through half a dozen pandemics and still have that conventional wisdom then we can have a different conversation.

On pandemics specifically, the quick containment of SARS seems to be a success story (although I have not looked at how much preparation played a role, it does seem to be a part of the story)

Wouldn't preparation seem to be a part of the story of COVID-19 outcomes given a similarly superficial level of inquiry?

I agree but I think that designing systems that can make good decisions in a risk scenario is a form of preparation

Forget semantics. Did EA funding efforts and recipients design systems that made good decisions about COVID-19? Did anyone who talked about "pandemic preparation" pre-2020 use the term to encompass the design of systems like that?

A confusing factor that might make it hard to tell if preparation helped is that, based on the UK experience (eg discussed here) it appears that having bad plans in place may actually be worse than no plans.

Well you can't just define preparation as "good plans", that's a no-true-Scotsman argument. If you have some way of ensuring that your preparation will be good preparation then it's a different story.

Evidence from COVID does suggest to me that specific preparation does help. Notably, countries (East Asia, Australasia) that had SARS and prepared for future SARS-type outbreaks managed COVID better.

That isn't necessarily due to physical preparation, it could easily be intangible changes in the culture and political system, granting that there is in fact a causal connection as opposed to East Asia and Australasia just being better at this stuff.

iirc there was a study which found that American cities that lived through the Spanish Flu (1919) suffered fewer deaths early in the COVID-19 outbreak. I cannot find the study now, but if it's really true then that would be hard to explain through preparation.

Does that seem like a good summary, and does it sufficiently explain your findings?

I'm not sure exactly what anti-fragile means, but that doesn't sound right. Decision systems in the US/UK, for instance, didn't fall apart; they were just apathetic and unresponsive to good ideas, just as they are for mundane problems that aren't big crises. In other words, they calmly kept operating the way they always do.

I don't have reason to believe that there is a positive interaction between good leadership and good preparation. Maybe good preparation and good leadership act more as substitutes for each other rather than complements.

I'm not sure it is useful to say 'prevention helps', since we cannot wish away viruses; we can only take measures to attempt to prevent viruses from emerging, and while those measures may be cost-effective, that is a different conversation to which I have nothing to contribute.

I would summarize my view by saying that smart actions by government and civil society in the moment make the most difference, and if plans and preparation are to be helpful they will have to be done in careful ways to avoid the failures documented during COVID-19.

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T00:08:51.987Z · EA · GW

I moved my comment to an answer after learning that the index was directly funded by an Open Phil grant. You'd do better to repost your reply to me there. Sorry about the confusion.

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T00:01:43.499Z · EA · GW

The Global Health Security Index looks like a misfire. This isn't directly about performance during the pandemic, but the Nuclear Threat Initiative, funded by Open Phil for this purpose (h/t HowieL for pointing this out) and collaborating with the Johns Hopkins Center for Health Security, made the 2019 Global Health Security Index, which seems invalidated by COVID-19 outcomes and may have encouraged actors to take the wrong moves. This ThinkGlobalHealth article describes how its ratings did not predict good performance against the virus. The article relies on official death counts rather than excess mortality, but I made that correction and reached similar results.

Looking through the index, there are some indicators which don't make sense, like praising countries for avoiding travel restrictions (which is perverse), praising them for having stricter ethical regulations on surveillance and clinical trials (which may be ethically justified but is more likely to make it harder to fight a pandemic), and praising them for gender equality (a noble sentiment but not directly relevant to pandemics).

Even cutting some of those dubious measures out, I found the index was not predictive of excess mortality. In general it appears that effective pandemic response is not about preparation and this may have been systematically overlooked by EA efforts and funding recipients in the realm of biorisk.

Some people have also criticized the index for rating China moderately highly on prevention of pathogen release, considering that COVID-19 came from China; but considering that COVID-19 is just one data point of virus emergence or lab leak, and that China is a very large country, I don't think this is right.

Comment by kbog on Which non-EA-funded organisations did well on Covid? · 2021-06-08T17:21:42.700Z · EA · GW

EAs have voted in various elections in the United States. This study adjusted for various factors and found that Republican Party power at the state level was associated with modestly higher amounts of death from COVID-19. Since the majority of EA voters have picked the Democratic Party, this can be taken as something of a vindication. Of course, there are many other issues for deciding your vote besides pandemics, and that study might be wrong. It's not even peer reviewed.

The difference might be entirely explained by politically motivated differences in social distancing behavior between Democratic and Republican citizens, although if that's the case it could still somewhat vindicate opposition to the Republican Party.

Also, the study was done before the vaccine rollout; it will be interesting to see a similar analysis from a later date.

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T16:59:54.027Z · EA · GW

I discuss the GHS index at greater length in my answer.

Comment by kbog on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T14:26:42.039Z · EA · GW

Edit: I've reposted this comment as an answer, and am self-downvoting this.

Comment by kbog on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T05:33:47.612Z · EA · GW

OK, sorry for misunderstanding.

I make an argument here that marginal long run growth is dramatically less important than marginal x-risk. I'm not fully confident in it. But the crux could be what I highlight - whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future.

I would still ceteris paribus pick more growth rather than less, and from what I've seen of Progress Studies researchers, I trust them to know how to do that well.

It's important to compare with long-term political and social change too. Arguably a higher priority than either effort, but also something that can be indirectly served by economic progress. One thing the progress studies discourse has persuaded me of is that there is some social and political malaise that arises when society stops growing. Healthy politics may require fast nonstop growth (though that is a worrying thing if true).

Comment by kbog on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T21:13:46.452Z · EA · GW

"EA/XR" is a rather confusing term. Which do you want to talk about, EA or x-risk studies?

It is a mistake to consider EA and progress studies as equivalent or mutually exclusive. Progress studies is strictly an academic discipline. EA involves building a movement and making sacrifices for the sake of others. And progress studies can be a part of that, like x-risk.

Some people in EA who focus on x-risk may have differences of opinion with those in the field of progress studies.

Comment by kbog on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-05-22T16:16:12.139Z · EA · GW

I don't think I really buy your conceptual logic: the mitigation obstruction argument is about the degree to which particular solutions will be over- or underestimated relative to their actual value, not about how absolutely good, cheap, or fast they are. Seen through that lens, it's not clear (at least to me) what to make of distinctions between big and small actions, or easy and hard actions.

Geoengineering is cheap, but Halstead argues that it's not as much of a bargain as earlier estimates suggested.

Comment by kbog on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-30T02:58:14.400Z · EA · GW
I fear that we need to do Geoengineering right away or we will be locked into never undoing the warming. Problem is a few countries like russia massively benefit from warming and once they see that warming and then take advantage of the newly opened land they will see any attempt to artificially lower temps as an attack they will respond to with force and they have enough fossil fuels to maintain the warm temps even if everyone else stops carbon emissions (which they can easily scuttle).

Deleted my previous comment. I have a few lingering doubts; I don't think the international system will totally fail, but some problems along these lines seem plausible to me.

Comment by kbog on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-30T02:50:54.869Z · EA · GW
I fear that we need to do Geoengineering right away or we will be locked into never undoing the warming. Problem is a few countries like russia massively benefit from warming and once they see that warming and then take advantage of the newly opened land they will see any attempt to artificially lower temps as an attack they will respond to with force and they have enough fossil fuels to maintain the warm temps even if everyone else stops carbon emissions (which they can easily scuttle).

The problem with this theory is, if they would benefit from higher temperatures and are willing to sacrifice the global environment for that purpose, why haven't they realized that now and already started?

No doubt it would make the most sense for them to pretend to be environmentalist while making superficial progress, keeping up appearances for as long as possible. But I think by now we should be able to tell which countries really are decarbonizing better than others.

Comment by kbog on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-30T02:44:33.773Z · EA · GW

I'm not sure if immediacy of the problem really would lead to a better response: maybe it would lead to a shift from prevention to adaptation, from innovation to degrowth, and from international cooperation to ecofascism. Immediacy could clarify who will be the minority of winners from global warming, whereas distance makes it easier to say that we are all in this together.

At the very least, geoengineering does make the future more complicated, in that on top of the traditional combination of atmospheric uncertainties and emission uncertainties, we have to add uncertainty about how the geoengineering regime will proceed. And most humans don't do a great job of responding to uncertain problems like this.

But I don't think we understand these psychological and political dynamics very well. This all reminds me of public health researchers, pre-COVID, theorizing about the consequences of restricting international travel during a pandemic.

I'll think a bit more on this.

Comment by kbog on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-30T01:59:08.455Z · EA · GW

Hm, I suppose I don't have reason to be confident here. But as I understand it:

Stratospheric aerosol injection removes a certain wattage of solar radiation per square meter.

The additional greenhouse effect from human emissions constitutes only a tiny part of our overall energy balance, shifting us from 289 K to 291 K for instance. SAI, by contrast, acts on nearly the entire energy input from the Sun (excepting the small share absorbed above the stratosphere). So maybe SAI could be slightly more effective in terms of watts per square meter or CO2 tonnes offset under a high-emissions scenario, but it will be a very small difference.
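A rough back-of-envelope illustrates the scale mismatch. The round numbers below are my own assumed standard values, not figures from the comment:

```python
# Assumed canonical round numbers (for illustration only):
SOLAR_INPUT = 340.0   # W/m^2, globally averaged top-of-atmosphere sunlight
F_2XCO2 = 3.7         # W/m^2, radiative forcing from a doubling of CO2

# Fraction of total sunlight SAI would need to reflect to offset a CO2 doubling
fraction = F_2XCO2 / SOLAR_INPUT
print(f"Sunlight to reflect: {fraction:.1%}")  # about 1.1%
```

Because the required offset is only about 1% of the total solar input, the marginal effectiveness of added aerosol barely changes between low- and high-emissions scenarios.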

Would like to see an expert chime in here.

Comment by kbog on On AI Weapons · 2021-04-20T09:25:43.494Z · EA · GW

Hi Tommaso,

If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical on whether someone would be hold accountable for crimes committed by LAWs. 

But the issue here is whether responsibility and accountability are handled worse with LAWs than with conventional killing. You need a reason to be more skeptical about crimes committed by LAWs than about crimes committed without them. That there is so little accountability for crimes committed without LAWs even suggests that we have nothing to lose.

What evidence do we have that international lawmaking follows suit when a lethal technology is developed as the writer assumes it will happen?

I don't think I make such an assumption? Please remind me (it's been a while since I wrote the essay); you may be thinking of the part where I assume that countries will figure out safety and accountability for their own purposes. They will figure out how to hold people accountable for bad robot weapons just as they hold people accountable for bad equipment and bad human soldiers, without reference to international laws.

However, in order for the comparison to make more sense I would argue that the different examples should be weighted according to the number of victims. 

I would agree if we had a greater sample of large wars, otherwise the figure gets dominated by the Iran-Iraq War, which is doubly worrying because of the wide range of estimates for that conflict. You could exclude it and do a weighted average of the other wars. Either way, seems like civilians are still just a significant minority of victims on average. 

Intuitively to me, the case for LAWs increasing the chance of overseas conflicts such as the Iraq invasion is a very relevant one, because of the magnitude of civilian deaths.

Yes, this would be similar to what I say about the 1991 Gulf War - the conventional war was relatively small but had large indirect costs mostly at civilians. Then, "One issue with this line of reasoning is that it must also be applied to alternative practices besides warfare..." For Iraq in particular, while the 2003 invasion certainly did destabilize it, I also think it's a mistake to think that things would have been decent otherwise (imagine Iraq turning out like Syria in the Arab Spring; Saddam had already committed democide once, he could have done it again if Iraqis acted on their grievances with his regime).

From what the text says I do not see why the conclusion is that banning LAWs would have a neutral effect on the likelihood of overseas wars, given that the texts admits that it is an actual concern.

My 'conclusion' paragraph states it accurately, with the clarification of 'conventional conflicts' versus 'overseas counterinsurgency and counterterrorism'.

I think the considerations about counterinsurgencies operations being positive for the population is at the very least biased towards favoring Western intervention. 

Well, the critic of AI weapons needs to show that such interventions are negative for the population. My position in this essay was that it's unclear whether they are good or bad. Yes, I didn't give comprehensive arguments in this essay. But since then I've written about these wars in my policy platform where you can see me seriously argue my views, and there I take a more positive stance (my views have shifted a bit in the last year or so). 

The considerations about China and the world order in this section seem simplistic and rely on many assumptions. 

Once more, I've got you covered! See my more recent essay here about the pros and cons (predominantly cons) of Chinese international power. (Yes, it's high time that I rewrote and updated this article.)

Comment by kbog on Why EA groups should not use “Effective Altruism” in their name. · 2021-03-07T23:37:55.015Z · EA · GW

But the answers to a survey like that wouldn't be easy to interpret. We should show the same message under different organization names to group A and group B, and see which group is then more likely to endorse the EA movement or commit to taking a concrete altruistic action.

Comment by kbog on Objectives of longtermist policy making · 2021-03-04T00:49:47.045Z · EA · GW

No, I agree on 2! I'm just saying that even from a longtermist perspective, it may not be as important and tractable as improving institutions in orthogonal ways.

Comment by kbog on Objectives of longtermist policy making · 2021-02-21T03:51:58.089Z · EA · GW

I think it's really not clear that reforming institutions to be more longtermist has an outsized long run impact compared to many other axes of institutional reform.

We know what constitutes good outcomes in the short run, so if we can design institutions to produce better short run outcomes, that will be beneficial in the long run insofar as those institutions endure into the long run. Institutional changes are inherently long-run.

Comment by kbog on A love letter to civilian OSINT, and possibilities as a tool in EA · 2021-02-21T03:50:00.394Z · EA · GW

I saw OSINT results frequently during the Second Karabakh War (October 2020). The OSINT evidence of war crimes from that conflict has been adequately recognized and you can find info on that elsewhere. Beyond that, it seems to me that certain things would have gone better if certain locals had been more aware of what OSINT was revealing about the military status of the conflict, as a substitute for government claims and as a supplement to local RUMINT (rumor intelligence). False or uncertain perceptions about the state of a war can be deadly. But there is a language barrier and an online/offline barrier, so it is hard to get that intelligence seen and believed by the people who need it.

Beyond that, OSINT might be used to actually influence the military course of conflicts if you can make a serious judgment call of which side deserves help, although this partisan effort wouldn't really fit the spirit of "civilian" OSINT. Presumably the US and Russia already know the location of each other's missile silos, but if you look for stuff that is less important, or something which is part of a conflict between minor groups who lack good intelligence services, then you might produce useful intelligence. For a paramount example of dual use risks, during this war someone geolocated Armenia's Iskander missile base and shared it on Twitter, and it seems unlikely to me that anyone in Azerbaijan had found it already. I certainly don't think it was responsible of him, and Azerbaijan did not strike the base anyway, but it suggests that there is a real potential to influence conflicts. You also might feed that intelligence to the preferred party secretly rather than openly, though that definitely violates the spirit of civilian OSINT.

Regardless, OSINT may indeed shine when it is rushed in the context of an active military conflict where time is of the essence, errors notwithstanding. Everyone likes to make fun of Reddit for the Boston Bomber incident, but to me it seems like the exception that tests the rule. While there were a few OSINT conclusions during the war which struck me as dubious, never did I see evidence that someone's geolocation later turned out to be wrong.

Also, I don't know if structure and (formal) training are important. Again, you can pick on those Redditors, but lots of other independent open source geeks have been producing reliable results. Imposing a structure takes away some of the advantages of OSINT. That's not to say that groups like Bellingcat don't also do good work, of course.

To me, OSINT seems like a crowded field due to the number of people who do it as a hobby. So I doubt that the marginal person makes much difference. But since I haven't seriously tried to do it, I'm not sure. 

Comment by kbog on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-21T02:04:44.103Z · EA · GW

There is a lot of guesswork involved here. How much would it cost for someone, like the CEA, to run a survey to find out how popular perception differs depending on these kinds of names? It would be useful to many of us who are considering branding for EA projects. 

Comment by kbog on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2021-02-14T05:44:26.499Z · EA · GW

Updates to this: 

Nordhaus paper argues that we don't appear to be approaching a singularity. Haven't read it. Would like to see someone find the crux of the differences with Roodman.

Blog 'Outside View' with some counterarguments to my view:

Thus, the challenge of building long term historical GDP data means we should be quite skeptical about turning around and using that data to predict future growth trends. All we're really doing is extrapolating the backwards estimates of some economists forwards. The error bars will be very large.

Well, Roodman tests for this in his paper, see 5.2, and finds that systematic moderate overestimation or underestimation only changes the expected explosion date by +/- 4 years.

I guess things could change more if  the older values are systematically misestimated differently from more recent values? If very old estimates are all underestimates but recent estimates are not, then that could delay the projection further. Also, maybe he should test for more extreme magnitudes of misestimation. But based on the minor extent to which his other tests changed the results, I doubt this one would make much difference either.

But if it's possible, or even intuitive, that specific institutions fundamentally changed how economic growth occurred in the past, then it may be a mistake to model global productivity as a continuous system dating back thousands of years. In fact, if you took a look at population growth, a data set that is also long-lived and grows at a high rate, the growth rate fundamentally changed over time. Given the magnitude of systemic economic changes of the past few centuries, modeling the global economy as continuous from 10,000 BCE to now may not give us good predictions. The outside view becomes less useful at this distance.

Fair, but at the same time, this undercuts the argument that we should prioritize economic growth as something that will yield social dividends indefinitely into the future. If our society has fundamentally transformed so that marginal economic growth in 1000 BC makes little difference to our lives, then it seems likely that marginal economic growth today will make little difference to our descendants in 2500 AD.

It's possible that we've undergone discontinuous shifts in the past but will not in the future. Just seems unlikely.

Comment by kbog on Objectives of longtermist policy making · 2021-02-14T05:15:27.605Z · EA · GW

I'm skeptical of this framework because in reality part 2 seems optional - we don't need to reshape the political system to be more longtermist in order to make progress. For instance, those Open Phil recommendations like land use reform can be promoted thru conventional forms of lobbying and coalition building.

In fact, a vibrant and policy-engaged EA community that focuses on understandable short and medium term problems can itself become a fairly effective long-run institution, thus reducing the needs in part 1.

Additionally, while substantively defining a good society for the future may be difficult, we also have the option of defining it procedurally. The simplest example is that we can promote things like democracy or other mechanisms which tend to produce good outcomes. Or we can increase levels of compassion and rationality so that the architects of future societies will act better. This is sort of what you describe in part 2, but I'd emphasize that we can make political institutions which are generically better rather than specifically making them more longtermist.

This is not to say that anything in this post is a bad idea, just that there are more options for meeting longtermist goals.

Comment by kbog on A brief explanation of the Myanmar coup · 2021-02-03T22:39:03.691Z · EA · GW

This may help address your question about South Africa: Lecture 12: Business and Democratic Reform: A Case Study of South Africa - YouTube

Comment by kbog on Should patient investors try to correlate portfolio holdings with potential cause areas? · 2021-02-01T07:48:10.997Z · EA · GW

Old discussion about this: Selecting investments based on covariance with the value of charities - EA Forum (effectivealtruism.org)

Comment by kbog on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-31T04:37:47.800Z · EA · GW

These lectures on historical analysis of the New Testament are neat and might be of interest to you. They give good context for understanding the contemporaneous interpretation of scripture.

Comment by kbog on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-30T05:14:55.609Z · EA · GW

The issue with these suggested collapse-prevention interventions is that they generally have much more pressing impacts elsewhere. For instance, approval voting is of course great, but its impacts on other political issues (both ordinary political problems and other tail scenarios like dictatorship) are much more significant. More generally, stuff that makes America politically healthier reduces the probability that it will collapse, and the converse is almost always true. So not only is the collapse possibility relatively unimportant, it's mostly unnecessary baggage to carry in your cognitive model.

As for movement infrastructure, a similar logic probably applies as EA organizations have many other priorities with these things.

Comment by kbog on Why I'm concerned about Giving Green · 2021-01-30T04:47:29.505Z · EA · GW

There are more problems with The Sunrise Movement (TSM) which don't seem to have been raised yet in this discussion.

  • I think they have an underappreciated propensity to actively oppose progress in environmental policy. Others have brought up their opposition to a carbon tax in Washington, as well as their hostility to nuclear power, but here one Sunrise local group is opposing cap-and-trade in Oregon, and here Sunrise is opposing carbon capture on fossil fuel emissions. Also, the same environmentalist-NIMBY problem we have seen with nuclear power is likely to repeat with geothermal energy: certain kinds of geothermal power are a bit controversial because they use technology which is similar to fracking, and as geothermal technology and industry mature this will likely become a bigger battleground where Sunrise may work for the wrong side. I also have reservations about how Sunrise-type activists react to natural gas and waste-to-energy technologies, two things which are legitimately controversial but still might be net positive. I can't find a source for whether Sunrise has actually opposed waste-to-energy but it seems probable (others like them have). They also gave Biden an F for his climate plan; personally, I thought Biden deserved 2.2 points on air pollution on a -3 to +3 scale. Giving an F to someone with a pretty good environmental plan is a big red flag.
  • Second, TSM is not very focused on climate change; they perform activism and lobbying for a wider range of political issues. Insofar as TSM spends time and energy on other stuff besides climate change, this probably reduces their effectiveness on climate issues relative to more focused groups. Some of those specific political activities are discussed below.
  • Third, TSM's non-climate-change impacts are plausibly harmful.
    • Housing policy - TSM has engaged in NIMBY opposition to upzoning, and here is Sunrise Honolulu commenting that all housing investment should be banned. I've heard that they have a bigger pattern of this. Such behavior is certainly bad for both economic and environmental reasons; see my writeup on residential zoning. At the same time, they have promoted new housing in other contexts; it's not clear if the good outweighs the bad.
    • Police reform - TSM has promoted Defund the Police. As I describe here, defunding police departments is a bad policy idea, in fact hiring more police officers is probably a good idea. That said, Sunrise has also promoted Black Lives Matter and perhaps some more reasonable forms of police reform, and this is more likely to be a good thing.
    • Deliberate electoral politics - TSM has endorsed political campaigns with farther-reaching impacts beyond climate policy, generally because they are a progressive left-wing group who wants to achieve a variety of progressive left-wing political goals. Some notable ones which stick out to me are:
      • They supported an unsuccessful primary campaign against Sen. Dianne Feinstein, which was probably good because Feinstein is a pretty bad senator, tho defeating her probably would have achieved nothing good for climate policy. In fact, Feinstein has sponsored a carbon tax bill.
      • They supported a successful primary campaign against Rep. Eliot Engel, who had been a strong congressional proponent of effective foreign aid programs including PEPFAR. Removing Engel has no discernible impact on the climate. He has since been replaced in his position as the chair of the Foreign Affairs Committee with Rep. Gregory Meeks who has no such record on foreign aid, altho hopefully he will become more active with his new position.
      • They supported Sen. Ed Markey against a primary challenge. Again this had no discernible impact on the climate, nor on most other policy issues frankly. I am happy that Markey won, but it is not a big deal.
      • They supported Bernie Sanders in his 2020 presidential primary campaign. On the merits, Sanders was pretty comparable to other Democratic candidates including Biden. But in terms of electability, he was inferior (see this essay where I use his campaign as a case study of electability). So this was a bad decision.
    • Inadvertent electoral politics - as other commentators have touched upon, some of Sunrise's advocacy can inadvertently harm the Democratic Party.  This is especially a consequence of calls to defund the police. As I argue here, the Democratic Party is generally superior to the Republican Party, so preventing the Democratic Party from winning elections constitutes harm.
    • Deprioritization of other issues - if TSM's mechanism of change is to make Democratic politicians expend more political capital on climate change, that implies that the politicians will expend less political capital on other issues. It's one thing to say that we need more action on climate change, but quite another to say that Democratic politicians should focus on climate policy before or instead of  other things like healthcare, immigration and tax policy. I do lean towards saying that air pollution should indeed get more priority on the margin, but the downside for other issues still chips away at the expected value. Additionally, insofar as TSM pressures Democratic politicians to place more priority on other issues like criminal justice and public housing, that similarly detracts from alternative priorities, and here I'd be still less optimistic about the impact.

Certainly there is a difference between everything that TSM does, and the marginal impact of GG's recommendation for their education fund. And certainly it is possible that the good parts of TSM's environmental activism outweigh these downsides. And you might disagree with me on some of these political issues. But we must see strong arguments along these lines before prioritizing TSM for donations.  And while I haven't taken a close or systematic look at TSM's activities, given all the red flags I tentatively expect that the Sunrise Movement does more harm than good.

Other commenters here have framed this stuff as a tension between the left and conservatives/moderates, but there are plenty of Democrats who criticize TSM too. Here's Matt Yglesias saying "The problem with funding Sunrise is not that there is an objective scarcity of funds and other people need the money more, it’s that Sunrise is bad and should get $0." And such views about TSM are pretty common at least on left-leaning Twitter. Recommending TSM without having awareness and counterarguments to these criticisms does not imply a need to listen more to conservatives or moderates (tho I don't necessarily oppose the idea of listening more to conservatives or moderates), it suggests a more general need to keep closer tabs on the current political discourse. The synthesis of "EA should generally strive to be apolitical" and "some good causes are inherently political" should not be for us to naively support interventions because of the way that they attack one political problem while we ignore the risky impacts of those interventions on other parts of the political system.

Finally, I am less confident about this point, but I suspect that GG is being too credulous about TSM achieving change. Just because they demand that Democratic politicians do something, and the Democratic politicians do that something, with TSM claiming that they were responsible for making the Democratic politicians do that something, doesn't mean TSM actually was responsible for making the politicians change. If a Democratic politician does major climate stuff in office after being criticized by TSM during their election campaign for something symbolic like not bringing up the Green New Deal, that's only very weak evidence that TSM actually changed the politician's behavior; it is better evidence for the claim that Democratic politicians are generally both serious on climate policy and savvy at election messaging and TSM was just making unfounded criticisms all along.

Here it is worth distinguishing two theories of how the Democratic Party works. Some people (like TSM and others on the progressive left) think the elites of the Democratic Party are centrist corporatists who don't really want to implement leftist policies but will do it if their base pressures them hard enough. Other people think that Democratic Party elites are actually very ideologically liberal and would intrinsically like to implement ambitious reforms on the environment and other issues, but are stymied by right-wing and centrist political forces. AFAICT the second theory is much more accurate, and David Shor (the leftist data whiz) seems to agree. 

I hope this does not come across too negative,  since I am glad Giving Green exists and I just think this recommendation is a mistake.

Comment by kbog on Why are party politics not an EA priority? · 2021-01-04T00:40:31.578Z · EA · GW

I agree with you. You may appreciate my articles:

https://eapolitics.org/handbook.html

https://eapolitics.org/parties.html

Comment by kbog on Two Nice Experiments on Democracy and Altruism · 2020-12-31T07:58:59.969Z · EA · GW

the environmental success of democracies relative to autocracies

I want to read this but the link doesn't work

Comment by kbog on [Crosspost] Relativistic Colonization · 2020-12-31T05:10:13.942Z · EA · GW

If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.

Bussard ramjets might be viable. But I'm skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. Anyway, you seem to be talking about spacecraft that will consume planets, not Bussard ramjets.

Going from 0.99c to 0.999c requires an extraordinary amount of additional energy for very little increase in distance over time. At that point, the sideways deviations required to reach waypoints (like if you want to swing to nearby stars instead of staying in a straight line) would be more important. It would be faster to go 0.99c in a straight line than 0.999c through a series of waypoints.

If we are talking about going from 0.1c to 0.2c then it makes more sense.
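To put rough numbers on the 0.99c vs. 0.999c comparison above, here is a quick sketch using the standard Lorentz factor (my own illustration, not from the original post):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Relativistic kinetic energy per unit rest mass is (gamma - 1) * c^2,
# so compare the (gamma - 1) values at the two speeds:
for beta in (0.99, 0.999):
    print(f"v = {beta}c: KE ~ {lorentz_gamma(beta) - 1:.1f} * m * c^2")
# v = 0.99c:  KE ~ 6.1 * m * c^2
# v = 0.999c: KE ~ 21.4 * m * c^2
```

A 0.9% increase in speed costs roughly 3.5x the kinetic energy, which is why sideways deviations to reach waypoints matter more than squeezing out extra top speed.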

Comment by kbog on Are we living at the most influential time in history? · 2020-12-31T00:02:16.858Z · EA · GW

I think this argument implicitly assumes a moral objectivist point of view.

I'd say that most people in history have been a lot closer to the hinge of history when you recognize that the HoH depends on someone's values.

If you were a hunter-gatherer living in 20,000 BC then you cared about raising your family and building your weir and you lived at the hinge of history for that.

If you were a philosopher living in 400 BC then you cared about the intellectual progress of the Western world and you lived at the hinge of history for that.

If you were a theologian living in 1550 then you cared about the struggle of Catholic and Protestant doctrines and you lived at the hinge of history for that.

If you're an Effective Altruist living in 2020 then you care about global welfare and existential risk, and you live at the hinge of history for that.

If you're a gay space luxury communist living in 2100 then you care about seizing the moons of production to have their raw materials redistributed to the masses, and you live at the hinge of history for that.

This isn't a necessary relationship. We may say that some of these historical hinges actually were really important in our minds, and maybe a future hinge will be more important. But generally speaking, the rise and fall of motivations and ideologies is correlated with the sociopolitical opportunity for them to matter. So most people throughout history have lived in hingy times. 

Comment by kbog on Big List of Cause Candidates · 2020-12-30T23:31:26.153Z · EA · GW

Thanks for the comments. Let me clarify about the terminology. What I mean is that there are two kinds of "pulling the rope harder". As I argue here:

The appropriate mindset for political engagement is described in the book Politics Is for Power, which is summarized in this podcast. We need to move past political hobbyism and make real change. Don’t spend so much time reading and sharing things online, following the news and fomenting outrage as a pastime. Prioritize the acquisition of power over clever dunking and purity politics. See yourself as an insider and an agent of change, not an outsider. Instead of simply blaming other people and systems for problems, think first about your own ability to make productive changes in your local environment. Get to know people and build effective political organizations. Implement a long-term political vision.

A key aspect of this is that we cannot be fixated on culture wars. Complaining about the media or SJWs or video game streamers may be emotionally gratifying in the short run but it does nothing to fix the problems with our political system (and it usually doesn't fix the problems with media and SJWs and video game streamers either). It can also drain your time and emotional energy, and it can stir up needless friction with people who agree with you on political policy but disagree on subtle cultural issues. Instead, focus on political power.

To illustrate the point, the person who came up with the idea of 'pulling the rope sideways', Robin Hanson, does indeed refrain from commenting on election choices and most areas of significant public policy, but has nonetheless been quite willing to state opinions on culture war topics like political correctness in academia, sexual inequality, race reparations, and so on.

I think that most people who hear 'culture wars' think of the purity politics and dunking and controversies, but not stuff like voting or showing up to neighborhood zoning meetings.

So even if you keep the same categorization, just change the terminology so it doesn't conflate those who are focused on serious (albeit controversial) questions of policy and power with those who are culture warring. 

Comment by kbog on Big List of Cause Candidates · 2020-12-30T06:14:39.995Z · EA · GW

You could add this post of mine to space colonization: An Informal Review of Space Exploration - EA Forum (effectivealtruism.org).

I think the 'existential risks' category is too broad and some of the things included are dubious. Recommender systems as existential risk? Autonomous weapons? Ideological engineering? 

Finally, I think the categorization of political issues should be heavily reworked, for various reasons. This kind of categorization is much more interpretable and sensible:

  • Electoral politics
  • Domestic policy
    • Housing liberalization
    • Expanding immigration
    • Capitalism
    • ...
  • Political systems
    • Electoral reform
    • Statehood for Puerto Rico
    • ...
  • Foreign policy and international relations
    • Great power competition
    • Nuclear arms control
    • Small wars
    • Democracy promotion
    • Self-determination
    • ...

I wouldn't use the term 'culture war' here, it means something different than 'electoral politics'.

Comment by kbog on The case for delaying solar geoengineering research · 2020-12-29T10:27:32.138Z · EA · GW

I don't think the pernicious mitigation obstruction argument is sound. It would apply equally well to just about any other method of addressing air pollution. For instance, if we develop better solar power, that will reduce the incentive for countries and other actors to work harder at implementing wind power, carbon capture, carbon taxes, tree planting, and geoengineering. All climate solutions substitute for each other to the extent that they are perceived as effective. But we can't reject all climate solutions for fear that they will discourage other climate solutions; that would be absurd. Clearly, this mitigation obstruction effect is generally smaller than the benefits of actually reducing emissions.

The pernicious mitigation obstruction argument could make more sense if countries only care about certain consequences of pollution. Specifically, if countries care about protecting the climate but don't care about protecting public health and crops from air pollution, then geoengineering would give them an option to mitigate one problem while comfortably doing nothing to stop the other, whereas if they have to properly decarbonize then they would end up fixing both problems. However, if anything the reverse is true. To the extent that the politics of climate change mitigation are hampered by the global coordination problem (which is dubious), and to the extent that the direct harms of air pollution are concentrated locally, countries will worry too little about the climate impacts while being more rational about direct pollution impacts. So geoengineering would mitigate the politically difficult problem (climate change) while still leaving countries with full incentives to fix the politically easy problem (direct harms of pollution), making it less of a mitigation obstruction risk than something like wind turbines.

Additionally, given the contentious side effects of geoengineering, the prospect of some actors doing it if climate change gets much worse may actually encourage other actors to do more to mitigate climate change using conventional methods. It's still the case that researching or deploying geoengineering would reduce the amount of other types of mitigation, but it would do so to a lesser degree than that caused by comparable amounts of traditional mitigation.

Another note: I think if we had a better understanding of the consequences of solar geoengineering, then the security consequences of unilateral deployment would be mitigated. Disputes become less likely when both sides can agree on the relevant facts.

Comment by kbog on American policy platform for total welfare · 2020-12-08T22:31:52.777Z · EA · GW

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. 

I've already done this. I have shared much of this content for over a year without having this name and website. My impression was that it did neither great nor poorly (except among EAs, who have been mostly positive). One of the problems was that some people seemed confused and suspicious because they didn't grasp who I was and what point of view I was coming from. 

I agree with this. As far as I know, none of these orgs and individuals currently use an EA branding. 

A few do. And most may not literally have "EA" in their name, but they still explicitly invoke it, and audiences are smart enough to know that they are associated with the EA movement. 

And they get far larger audiences and attention than me, so they are the dominant images in the minds of people who have political perceptions of EA. Whatever I do to invoke EA will create a more equal diversity of public political faces of the movement, not a monolithic association of the EA brand with my particular view.

 

RE: the rest of your points, I won't go point by point because you are making some general arguments which don't necessarily apply to your specific worry about the presence or absence of "EA" in the name. It would be more fruitful to first clarify exactly which types of people are going to have different perceptions on this basis. Then after that we can talk about whether the differences in perception for those particular people will be good or bad. 

You already say that you are mainly worried about "public intellectuals, policy professionals, and politicians." Any of these who reads my website in detail or understands the EA movement well will know that it relates to EA without necessarily being the only EA view. So we are imagining a political elite who knows little about EA and looks briefly at my website. A lot of the general arguments don't apply here, and to me it seems like a good idea to (a) give this person a hook to take the content seriously and (b) show this person that EA can be relevant to their own line of work.

Or maybe we are imagining someone who previously didn't know about EA at all, in which case introducing them to the idea is a good thing.

Comment by kbog on American policy platform for total welfare · 2020-12-07T23:33:04.856Z · EA · GW

I think there are countervailing reasons in favor of doing so publicly, described here

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few  orgs and individuals maximizes the problems of 1) and 2) whereas having a wider variety of public EA points of view mitigates them. I'd use a different branding if I were less convinced that politically engaged audiences already perceive EA as having political aspects.

Comment by kbog on EA politics mini-survey results · 2020-12-01T19:35:04.811Z · EA · GW

The Civic Handbook presents a more simplified view on the issue that sticks to making the least controversial claims that nearly all EAs should be able to get on board with. My full justification for why I believe we should maintain the defense budget, written earlier this year, is here:  https://eapolitics.org/platform.html#mozTocId629955 

Comment by kbog on Taking Self-Determination Seriously · 2020-11-29T15:18:46.160Z · EA · GW

I will think more about Brexit (noting that the EU is a supranational organization not a nation-state) but keep in mind that under the principle of self-determination, Scotland, which now would likely prefer to leave the UK and stay in the EU, should be allowed to do so.

Comment by kbog on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-26T12:55:41.064Z · EA · GW

I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) assume willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not many other possible lethal AWSs, and d) still produce a considerable amount of cost--both in countermeasures and in psychological costs--which would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.

I'm saying there are substantial constraints on using cheap drones to attack civilians en masse, some of them are more-or-less-costly preparation measures and some of them are not. Even without defensive preparation, I just don't see these things as being so destructive.

If we imagine offensive capability development then we should also imagine defensive capability development.

What other AWSs are we talking about if not drones?

In addition to potentially being more precise, lethal AWSs will be less attributable to their source, and present less risk to use (both in physical and financial costs).

Hmm. Have there been any unclaimed drone attacks so far, and would that change with autonomy? Moreover, if such ambiguity does arise, would that not also mitigate the risk of immediate retaliation and escalation? My sense is that there are conflicting lines of reasoning going on here. How can AWSs increase the risks of dangerous escalation, but also be perceived as safe and risk-free by users?

I'm not sure how to interpret this. The lower ends of the ranges are the lower ends of the ranges given by various estimators. The mean of this range is somewhere in the middle, depending on how you weight them.

I mean, we're uncertain about the 1-7Bn figure and uncertain about the 0.5-20% figure. When you multiply them together, the low x low is implausibly low and the high x high is implausibly high. But the mean x mean would be closer to the lower end. So if the means are 4Bn and 10% then the product is 40M, which is closer to the lower end of your 0.5-150M range. Yes, I realize this makes little difference (assuming your 1-7Bn and 0.5-20% estimates are normal distributions). It does seem apparent to me now that the escalation-to-nuclear-warfare risk is much more important than some of these direct impacts.
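The general point about multiplying uncertain ranges can be checked with a quick Monte Carlo sketch. The numbers below are purely illustrative (uniform distributions on made-up ranges, not the actual estimates discussed above); the point is just that the mean of the product sits well below the midpoint of the naive [low x low, high x high] interval:

```python
import random

random.seed(0)

# Illustrative ranges (hypothetical, not the estimates from this thread):
# X uniform on [1, 7], Y uniform on [0.005, 0.20]
N = 100_000
products = [random.uniform(1, 7) * random.uniform(0.005, 0.20) for _ in range(N)]
mean_product = sum(products) / N

# Naive bounds from multiplying endpoints together
low, high = 1 * 0.005, 7 * 0.20
midpoint = (low + high) / 2

# For independent inputs, E[XY] = E[X] * E[Y] = 4 * 0.1025 = 0.41,
# well below the midpoint (~0.70) of the naive product range.
print(round(mean_product, 2), round(midpoint, 2))
```

So even without assuming anything about the shapes of the input distributions beyond independence, mean x mean is the right central estimate, and it lands toward the lower end of the endpoint-product range.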

The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have) I don't see how the numbers can balance at all when including large-scale wars.

I think they'd probably save lives in a large-scale war for the same reasons. You say that they wouldn't save lives in a total nuclear war, that makes sense if civilians are attacked just as severely as soldiers. But large-scale wars may not be like this. Even nuclear wars may not involve major attacks on cities (but yes I realize that the EV is greater for those that do).

This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. 

I suppose that's fine; I was thinking more about concretely telling people not to do it, before any such agreement. 

You also have to be in principle willing to do something if you want to credibly threaten the other party and convince them not to do it.

Moreover, if something is ethically wrong, we should be willing to not do it even if others do it

Well there are some cases where a problematic weapon is so problematic that we should unilaterally forsake it even if we can't get an agreement. But there are also some cases where it's just problematic enough that a treaty would be a good thing, but unilaterally forsaking it would do net harm by degrading our relative military position. (Of course this depends on who the audience is, but this discourse over AWSs seems to primarily take place in the US and some other liberal democracies.)