An Effective Altruist Course for Capitalism

2019-05-24T00:34:21.858Z · score: 6 (1 votes)

An Effective Altruist Plan for Socialism

2019-05-24T00:30:19.653Z · score: 8 (2 votes)
Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-18T23:01:38.311Z · score: 9 (6 votes) · EA · GW

I'd disagree: the EA movement should push economic change if such change is in fact valuable. It just happens that in most cases there isn't good enough reason to substantiate that cause area. Of course, even if it is a good cause area, the idea that short-term charity is therefore bad/neutral is just nonsensical.

Comment by kbog on Will we eventually be able to colonize other stars? Notes from a preliminary review · 2019-05-18T08:35:19.890Z · score: 4 (2 votes) · EA · GW

[I know this post is extremely old but] check this out for some more skeptical points. https://www.antipope.org/charlie/blog-static/2018/07/canned-monkeys-dont-ship-well-.html

Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T23:02:25.281Z · score: 4 (2 votes) · EA · GW

Should be interesting, looking forward to it.

Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T22:55:48.089Z · score: 1 (3 votes) · EA · GW

Thanks for the comments.

That is to say, I don't see any reason to consider it unusually trustworthy with respect to things like which candidates are best for specific cause areas such as climate change, education, abortion, etc., let alone broad ideological evaluations of "capitalism vs. socialism" as general philosophies.

The main issue is that the policy positions selected here are not (& should not be) always the same as what other pundits/think tanks want. A good example is immigration: even though we're pro-immigration, we care a lot more about expanding legal immigration and, compared to other immigration advocates, aren't trying to prevent border enforcement. But if other evaluations are valid and reliable, then we do defer to them. We just haven't found many of them yet. Suggestions are always welcome.

This is not meant to be discouraging - creating voting guides is a crowded field, being the "most trustworthy" isn't necessarily easy, though I do wonder if it might be better going forward to place a greater focus on evaluating the less crowded areas.

Well, we have to put all the issues in the model; otherwise the final rating isn't very meaningful. It's much less helpful to get a report that says "Cory Booker is the best candidate for animal welfare, but you might want to look up what other people have to say about his tax proposals, and idk if he's the best candidate overall."

If we can fill the various issues up with reliable external sources, then we do that, and it's quick and easy. If we can't, then we have to put more focus into examining and writing on our own. So in a sense we are putting more focus on topics that others have neglected to examine.

I think an EA framework evaluating mainstream politics should include interventions (e.g. plans of organizing, activism),

The furthest we've gone so far is to pick specific candidates to support or oppose in the run-up to the primaries, and to recommend some $1 donations to help them qualify for debates. More detailed guidance would be a good thing to include, I agree. Personally I don't know what to write though (again, suggestions/contributions welcome).

a sense of the "tractability" when possible...

If you're referring to the judgments on political issues, that's implicit in the "weight" sections. We look at how much good or bad could be done by government actions, given a certain amount of decision-making power. But it's not framed in a manner that makes a lot of sense for people who are specifically trying to influence a single political issue; it's for selecting politicians.

If you're referring to the judgments on candidates, I think no one has a good idea of how to evaluate the tractability of making them win. Whatever inclinations we do have (like, "don't worry much about Delaney because he will presumably lose no matter what") get put near the end where it currently says "To be added: final conclusions and recommendations for activism." Then they factor into the conclusions selected for the Summary for Voters and Activists.

Comment by kbog on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T22:02:42.193Z · score: 9 (4 votes) · EA · GW
It wasn't terrible. It made some good points.

Beg to differ.

Outsourcing causes (some) domestic workers to lose wages and bargaining power, but wages and bargaining power grow in the developing world, so it's not a global race to the bottom. Also, the benefits to consumers and companies make it Kaldor-Hicks efficient even within the domestic country.

Trade also inhibits investment in labor-saving technology

Such investment is just overspending if it only happens when companies are compelled to overspend on labor.

But this wage increase doesn’t necessarily make these workers happier.

It generally does, and it's evident from their behavior. "Necessarily" is a naughty word in social science.

Medieval peasants didn’t build trade unions, and neither did the rural peasants of today’s developing states.

There are obvious reasons for this besides the ridiculous idea that they're content with their status. Medieval peasants fought actual rebellions for better treatment.

We might point out that given the reality of climate change, the choice is suicidal—it’s not possible for everyone to live like Americans

Funny how he equivocates between "getting out of extreme poverty" and "living like Americans".

Yet at the same time, we are socialists and that means we’re meant to care about American workers.

Everyone cares about American workers. If he means to care more, that's nationalism, which is exactly as harmful as, and no better justified than, saying that we're meant to care about white people, men, etc. in priority over others.

Rich states should demand, as a condition of trade agreements, adjustments in wages, taxes, and regulations to reduce or eliminate disparities in the treatment of rich workers and poor workers.

There's a lot of Western hubris going on here. Developing countries have many challenges and they are not economically or institutionally equipped to skip ahead in modernizing their regulations and welfare. Demanding such political concessions in exchange for economic reciprocation is a textbook example of the kind of neocolonialism that people like Robinson like to bloviate about.

To be sure, sometimes these kinds of demands are OK - they should just be applied cautiously and sparingly. "Kagame should really listen to his central bank and install a minimum wage" is OK. "African governments are all corrupt and need the gentle hand of enlightened Western socialists to tell them to reform" is not OK. There's a fine line between the two.

Anyway:

one possibility that comes to mind is that there be worker cooperatives in both the developed and developing worlds, or some similar way for workers in countries of vastly different economic strata to still benefit from trade agreements. Did you come across anything in your research that went over that consideration?

I didn't see anything like that, though I didn't read deeply. Implementing socialism in both rich and poor countries would not fix the problem. As far as I can tell, this is a fundamental barrier to any currently conceivable socialist plan: when capital is held publicly, transferring it to another polity means losing it. Outsourcing would have the same status as foreign aid: a political favor that will happen only to a small degree. There just aren't the right incentives. And I'm not making this up - this is literally what socialists want; they consider it an upside of their plans that they will keep production at home.

If socialism were implemented only in poor countries, then it would be less of a problem. But obviously it's quite hubristic for Westerners to try to push such changes in a foreign nation. Moreover, if we're talking about socialism in a poor state, we must face additional worries about whether it will be implemented well.

Comment by kbog on The case for delaying solar geoengineering research · 2019-05-16T07:12:07.222Z · score: 2 (1 votes) · EA · GW

Countries are already willing to bear the local costs of reducing CO2 emissions in service of the global goal of reduced CO2 emissions. Russia already signed the Paris agreement. Poor countries that will lose statistical lives from energy regulations have signed the Paris agreement. So I think you are overstating the governance obstacles a little bit. Maybe this is addressed in those papers which discuss compensatory schemes, idk.

The geopolitical balance of power is not constant. In the coming decades we will most likely see increasing geopolitical status for India, Brazil, and possibly Africa. These countries have the strongest interests in preventing climate change, and their rise will make it easier to push a global movement for geoengineering.

Overview of Capitalism and Socialism for Effective Altruism

2019-05-16T06:12:39.522Z · score: 44 (15 votes)
Comment by kbog on Structure EA organizations as WSDNs? · 2019-05-16T05:22:03.269Z · score: 2 (1 votes) · EA · GW

The 1st paper says that the studies generally do a good job of ruling out reverse causality through econometric techniques.

Comment by kbog on Structure EA organizations as WSDNs? · 2019-05-13T16:36:31.154Z · score: 2 (1 votes) · EA · GW

Don't know for management. For employee ownership some of the studies in https://www.nber.org/books/krus08-1 unpack the causal stories of benefits.

Comment by kbog on Structure EA organizations as WSDNs? · 2019-05-13T16:29:08.551Z · score: 2 (1 votes) · EA · GW

I feel like you could easily say the reverse and argue that hierarchies are more important when workers are disinterested in contributing. Having genuinely motivated workers would make it more feasible to have worker management and capture its benefits.

Structure EA organizations as WSDNs?

2019-05-10T20:36:19.032Z · score: 8 (7 votes)
Comment by kbog on Is EA ignoring significant possibilities for impact? · 2019-05-10T20:05:09.156Z · score: 12 (7 votes) · EA · GW

Every time I see something like this I wonder if it's going to criticize the emphasis on Givewell charities and allege that EA needs to pay less attention to hard evidence, or criticize the emphasis on x-risks and long term trajectories and allege that EA needs to pay more attention to hard evidence. Half the time it's one and half the time it's the other.

I think it's time everyone realized that EAs are already covering all the methodological bases and we should really spend our time on actual evaluations of actual programs.

Comment by kbog on Why we should be less productive. · 2019-05-10T19:58:04.539Z · score: 9 (4 votes) · EA · GW

This idea is sort of sensible when looking at most people, who work for themselves or for relatively ineffective causes. In that case, the reduction in pay and productivity might be compensated by leisure time. Though people still actively prefer to work long hours in our economy and that needs to be explained.

However, we're Effective Altruists, not Most People. Our impacts are generally higher, whereas the value of our leisure is the same. Therefore, reducing our productivity is a very bad idea.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T07:44:34.338Z · score: 2 (1 votes) · EA · GW

Nah, I'm just saying that a curse applies to every method, so it doesn't tell us to use a particular method. I'm excluding arguments from the issue, not bringing them in. So if we were previously thinking that weird causes are good and common sense/model pluralism aren't useful, then we should just stick to our guns. But if we were previously thinking that common sense/model pluralism are generally more accurate anyway, then we should stick with them.

Comment by kbog on Why animal charities are much more effective than human ones · 2019-05-09T05:29:15.976Z · score: 3 (2 votes) · EA · GW

You can interpret "much more effective" as a claim about the expected value of a charity given current information. Personally, that's what I think when I see such statements.

Comment by kbog on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-09T00:54:13.683Z · score: 15 (7 votes) · EA · GW

I think we should be narrower about which concrete changes we discuss. You've mentioned "integration", "embracing and working around"... what does that really mean? Are you suggesting that we spend less money on the effective causes and more money on the mainstream causes? That would be less effective (obviously), and I don't see how it's supported by your arguments here.

If you are referring to career choice (we might go into careers to shift funding around) I don't know if the large amount of funding on ineffective causes really changes the issue. If I can choose between managing $1M of global health spending or $1M of domestic health spending, there's no more debate to be had.

If you just mean that EAs should provide helpful statements and guidance to other efforts... this can be valuable, and we do it sometimes. First, we can provide explicit guidance, which gives people better answers but faces the problems of (i) learning about a whole new set of issues and (ii) navigating reputational risks. Some examples of this could be the Founders Pledge report on climate change and the Candidate Scoring System. As you can see in both cases, it takes a substantial amount of effort to make respectable progress here.

However, we can also think about empowering other people to apply an EA toolkit within their own lanes. The Future Perfect media column in Vox is mostly an example of this, as they are looking at American politics with a mildly more EA point of view than is typical. I can also imagine articles along the lines of "how EA inspired me to think about X" where X is an ineffective cause area. I'm a big fan of spreading the latter kind of message.

Note: I think your argument is easy enough to communicate by merely pointing out the different quantities of funding in different sectors, and trying to model and graph everything in the beginning is unnecessary complexity.

Comment by kbog on Diversifying money on different charities · 2019-05-09T00:32:17.472Z · score: 6 (4 votes) · EA · GW

Yes, charities are typically presumed to have diminishing marginal utility from money, so at some point you should stop funding one and start funding the next. However, in practice charities have large budgets that don't respond much to individual donors, so the right answer can stay the same from your first dollar to your last. Therefore I don't think most of us have to worry about this. What kind of donations are you thinking of? If it's in the low five figures or less, then I would not think about it. Larger budgets can be a different story. It also depends on the size of the charity, of course.
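To make this concrete, here's a toy sketch (all numbers hypothetical, not from any actual charity evaluation) of why a small donor usually shouldn't split:

```python
# Toy model: utility of funding charity i is effectiveness_i * log(1 + budget).
# Marginal utility per dollar falls as the budget grows, but a five-figure
# donation barely moves a multi-million-dollar budget, so the ranking of
# charities by marginal utility doesn't flip mid-donation.

def marginal_utility(effectiveness, budget):
    # derivative of effectiveness * log(1 + budget) with respect to budget
    return effectiveness / (1 + budget)

budget_a = budget_b = 10_000_000   # existing funding (hypothetical)
eff_a, eff_b = 3.0, 2.0            # effectiveness multipliers (hypothetical)

for given_to_a in (0, 5_000):      # before and after a $5k donation to A
    mu_a = marginal_utility(eff_a, budget_a + given_to_a)
    mu_b = marginal_utility(eff_b, budget_b)
    print(f"${given_to_a:>5} to A -> MU_A = {mu_a:.3e}, MU_B = {mu_b:.3e}")

# MU_A stays above MU_B over the entire donation, so it all goes to charity A.
```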

Comment by kbog on Is EA unscalable central planning? · 2019-05-08T03:55:23.476Z · score: 10 (3 votes) · EA · GW

EA has always acknowledged that the specific choice of activities, charities, etc is contingent upon social and scientific realities. So it's implicitly clear that our activities can change as we grow.

Comment by kbog on Is EA unscalable central planning? · 2019-05-08T03:53:53.413Z · score: 4 (2 votes) · EA · GW

While it is an aspect of EA to lobby government to consider x- and s-risks, it is not (as far as I can tell) the primary focus, nor is it what most people seem to spend their time doing. In other ideologies it might be reasonable to say we are doing what we can whilst carrying on, but since we are about finding the most effective way to do things, if this were the most important thing to do, we should all do it. We should found or convince a political party and either campaign or pay others to. Why don’t we?

Well in America, perhaps the main reason is that the implicit two-party system makes this intractable. Even in multiparty countries, I suspect EA is not yet large enough to get any power. Eventually, however, this could be a nice thing to add to the EA ecosystem.

If people choose work based on 80k hours advice rather than their own desires/market incentives, this makes a break from market allocation and backtracks towards central planning which has always been worse in the past. Why is it better here?
  • EA is acting on the margin with a small number of actors, not the whole economy.
  • EAs are individually deciding which jobs will have the highest social impact, they are not being commanded.
  • When comparing different jobs in a free market, the disparities in social impact are greater than the disparities in contributions to economic growth. Therefore it is easier to improve upon the market's social impact than it is to improve upon the market's economic growth.
Likewise the 80k hours advice isn’t scalable. If 10% of the workforce was reading 80k hours, it would make much more sense to change government than to tell each individual which job they ought to be doing.

The same thing can be said about career advice that is given by other websites. If 10% of the workforce read Mergers and Inquisitions and tried to become an investment banker, then trying to become an investment banker wouldn't make sense anymore. But it would be silly to complain about Mergers and Inquisitions that way.

I can’t help but think it seems as if much of EA is a stopgap right now to demonstrate our legitimacy so that we can convince others to join and eventually move to our real purpose - wholesale legislative change. If that’s the case we should be honest about it.

EA doesn't have a "real purpose", it just does whatever works best. Legislative change is something we may add to our toolkit; that doesn't mean we would abandon other things like charitable donations.

Comment by kbog on Should Effective Altruism be at war with North Korea? · 2019-05-06T06:18:11.498Z · score: 2 (1 votes) · EA · GW

It's different because they have the right approach on how to compromise. They work on compromises that are grounded in political interests rather than moral values, and they work on compromises that solve the task at hand rather than setting the record straight on everything. And while they have failures, the reasons for those failures are structural (problems of commitment, honesty, political constraints, uncertainty) so you cannot avoid them just by changing up the ideologies.

Comment by kbog on Should Effective Altruism be at war with North Korea? · 2019-05-06T06:07:55.498Z · score: 2 (1 votes) · EA · GW

It wasn't obvious that GiveWell should exist until people noticed a systematic flaw (lack of serious impact analysis) that warranted a new approach. In this case, we would need to identify a systematic flaw in the way that regular diplomacy and deterrence efforts are approaching things. Professionals do regard North Korea as a threat, but not in a naive "oh, they're just evil and crazy aggressors" sort of sense; they already know that deterrence is a mutual problem. I can see why one might be cynical about US government efforts, but there are more players besides the US government.

The Logan Act doesn't present an obstacle to aid efforts. You're not intervening in a dispute with the US government, you're just supporting the foreign country's local programs.

EAs have a perfectly good working understanding of the microeconomic impacts of aid. At least, Givewell etc do. Regarding macroeconomic and institutional effects, OK not as much, but I still feel more confident there than I do when it comes to international relations and strategic policy. We have lots of economists, very few international relations people. And I think EAs show more overconfidence when they talk about nuclear security and foreign policy.

Comment by kbog on Should Effective Altruism be at war with North Korea? · 2019-05-06T05:47:28.385Z · score: 2 (1 votes) · EA · GW

It is perceived, that doesn't mean the perception is beneficial. It's better if people perceive EA as having weaker philosophical claims, like maximizing welfare in the context of charity, as opposed to taking on the full utilitarian theory and all it says about trolleys and torturing terrorists and so on. Quantitative optimization should be perceived as a contextual tool that comes bottom-up to answer practical questions, not tied to a whole moral theory. That's really how cost-benefit analysis has already been used.

Comment by kbog on [Link] Totalitarian ethical systems · 2019-05-06T05:40:02.437Z · score: 2 (1 votes) · EA · GW

It's conceptually sensible, but not practically sensible given the level of effort that EAs typically put into cause prioritization. Actually measuring Total Utils would require a lot more work.

Comment by kbog on [Link] Totalitarian ethical systems · 2019-05-06T05:36:31.596Z · score: 2 (1 votes) · EA · GW

OK, the issue here is you are assuming that metrics have to be the same in moral philosophy and in cause prioritization. But there's just no need for that. Cause prioritization metrics need to have validity with respect to moral philosophy, but that doesn't mean they need to be identical.

Comment by kbog on [Link] Totalitarian ethical systems · 2019-05-06T05:30:07.309Z · score: 2 (1 votes) · EA · GW

Still sounds like their metric was just economic utility from production, which does not encompass many other policy goals (like security, criminal justice, etc.).

Comment by kbog on Should Effective Altruism be at war with North Korea? · 2019-05-05T23:04:06.588Z · score: 8 (4 votes) · EA · GW

There are many problems here:

  • There is not a clear distinction between preparations for offense and preparations for defense. The absence of this distinction is precisely what gives rise to threats and instability in cases like North Korea. The ambiguity is due to structural problems with limited information and the nature of military forces, not ideologies in the current milieu.
  • The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
  • Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
  • The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.
  • The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories; this is something that rarely makes a big dent in popular culture, let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it's not clear if any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
  • The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don't vanish just because you rephrase it in the language of utilitarianism and AGI.
  • Academia has influence on policymakers when it can help them achieve their goals, that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
  • The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.
  • Your perception that the EA community profits from being perceived as utilitarian is the opposite of the reality; utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.
  • Talking about people or countries as rational agents with utility functions does not mean we have to pretend that they act on the basis of moral theories like utilitarianism.
Comment by kbog on Is value drift net-positive, net-negative, or neither? · 2019-05-05T18:18:04.617Z · score: 2 (3 votes) · EA · GW

Value drift towards the right values (i.e. Effective Altruism) is good, value drift away from them is bad. Value drift among EAs is likely to be a bad thing due to regression to the mean. We can imagine better values within EA, but there's no reason to expect value drift to go in the right direction. Of course we can identify better values and promote them among EAs, but that seems notably distinct from value drift.

On the other hand, we can imagine people with bad values who should regress to the mean, and would encourage value drift there.

Comment by kbog on [Link] Totalitarian ethical systems · 2019-05-05T18:06:00.375Z · score: 4 (4 votes) · EA · GW

I think it's odd how we have spent so much time burying old chestnuts like "I don't want to be an EA because I'm a socialist" or "I don't want to be an EA because I don't want to earn to give" and yet now we have people saying they are abandoning the community because of some amateur personal theory they've come up with on how they can do cause prioritization better than other people.

The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like Givewell, ACE, and amateur stuff that gets posted around here. And that's odd because I'm pretty sure I've seen this guy around the discourse for a while.

Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric

This is incorrect anyway. First, even total central planners don't really need a totalizing metric; actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Second, as long as your actions impact everything, a totalizing metric might be useful. There are non-totalitarian agents whose actions impact everything. In practice though it's just not really worth the effort to quantify so many things.

so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god

LOL, yes, if we agree and disagree with him in just the right combination of ways to give him an easy counterpunch. Wow, he really got us there!

Comment by kbog on Candidate Scoring System, Third Release · 2019-04-30T23:42:42.635Z · score: 2 (1 votes) · EA · GW

I didn't want to keep posting each revision here because it felt like filling up the forum. I did CSS4, and it has Clinton, because it seemed vaguely possible that she might enter this race. I should have mentioned it here, though.

https://1drv.ms/b/s!At2KcPiXB5rkvRQycEqvwFPVYKHa

https://1drv.ms/x/s!At2KcPiXB5rkvRJGwOIYZ6dJeSEx

Not including Obama; if I evaluate people who aren't potential candidates then I think I'd like to do a lot of them at a time, perhaps as a separate project.

Comment by kbog on Reasons to eat meat · 2019-04-30T04:06:17.841Z · score: 12 (4 votes) · EA · GW

Very few people in EA are actively pushy/bothersome about anything. Strong claims about the moral importance and social viability of vegetarian/vegan diets are the target here, not judging people harshly.

Comment by kbog on Reasons to eat meat · 2019-04-25T21:15:22.829Z · score: 7 (3 votes) · EA · GW

Thanks, yes I consider it a bit of a canard.

Comment by kbog on Reasons to eat meat · 2019-04-25T20:57:56.120Z · score: 14 (7 votes) · EA · GW
In terms of satire, I'm not sure that satirising the choice to not eat animal products is the funniest topic.

Right, it's not supposed to be funny. I hope that reading this post makes one feel a sense of revulsion at covering up moral obligations with so many levels of rationalization. The point is that we should feel equally strongly about donations to charity.

Comment by kbog on Reasons to eat meat · 2019-04-25T20:55:41.514Z · score: 3 (2 votes) · EA · GW

I've seen a few people say it, not on this forum but in other EA associated spaces. "I only give to charity because I desire to do so."

Comment by kbog on Reasons to eat meat · 2019-04-25T20:49:30.045Z · score: 3 (2 votes) · EA · GW

You can say similar stuff about donations. E.g., there's no good evidence that donating lots of money makes you burn out of donating in the future, or makes you eat more meat or leave EA or things like that. It's all unproven.

Comment by kbog on Reasons to eat meat · 2019-04-25T00:58:09.624Z · score: 2 (1 votes) · EA · GW

Yes you are right about that.

Comment by kbog on Reasons to eat meat · 2019-04-25T00:57:03.241Z · score: 5 (3 votes) · EA · GW

See https://forum.effectivealtruism.org/posts/bhGReNjGCoJjRCXo9/an-integrated-model-to-evaluate-the-impact-of-animal and the sources/welfare estimates therein.

Basically, yes, their lives seem positive, but between small increases in demand for other kinds of meat (cross-price elasticity) and the long-run economic costs of climate change, I consider it bad.

Reasons to eat meat

2019-04-21T20:37:51.671Z · score: 44 (53 votes)
Comment by kbog on Political culture at the edges of Effective Altruism · 2019-04-21T04:45:10.178Z · score: 5 (3 votes) · EA · GW
I'm skeptical that friction between EA and actors who misunderstand so much has consequences bad enough to worry about, since I don't expect the criticism would be taken seriously enough by anyone else to have much of an impact at all.

Assuming that one cares about their definition of "disability rights" - i.e., disabled people have a right to lots of healthcare and social services, and any de-emphasis for the sake of helping more able people is a violation - their criticism and understanding of EA are correct. In the public eye it's definitely catchy; this sort of suspicion of utilitarian cost-benefit analysis runs deep. Some weeks ago the opinion journalist Dylan Matthews mentioned that he wanted to write an article about it, and I expect that he would give a very kind platform to the detractors.

Depending on what one considers an x-risk, popular support for right-wing politicians who pursue counterproductive climate change or other anti-environmental policies, or who tend to be more hawkish, jingoistic, and nationalistic in ways that will increase the chances of great-power conflict, negatively impacts x-risk reduction efforts. It's not clear that this has a direct impact on any EA work focused on x-risks, though, which is the kind of impact you meant to assess.

Right, for that broad sort of thing, I would direct people to my Candidate Scoring System: https://1drv.ms/b/s!At2KcPiXB5rkvRQycEqvwFPVYKHa

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T04:18:23.349Z · score: 4 (2 votes) · EA · GW
It doesn’t actually tell you that your posterior distributions will tend to better predict values you will later measure in the real world (e.g. by checking if they fall in your 95% credence intervals), because there need not be any connection between your models or priors and the real world.

This is an issue of the models and priors. If your models and priors are not right... then you should update over your priors and use better models. Of course they can still be wrong... but that's true of all beliefs, all reasoning, etc.

you will tend to find the posterior EV of your chosen coin to be greater than 1/2, but since the coins are actually fair, your estimate will be too high more than half of the time on average.

If you assume from the outside (unbeknownst to the agent) that they are all fair, then you're not showing a problem with the agent's reasoning, you're just using relevant information which they lack.

you could have a uniform prior on the true future long-run average frequency of heads for the unbiased coins

My prior would not be uniform; it would be a point mass at 0.5! What else could "unbiased coins" mean? This solves the problem, because then a coin with a few heads flipped and zero tails will still have a posterior of p = 0.5.
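For what it's worth, a quick simulation (my own construction, not from the original post) shows the difference the prior makes:

```python
# All coins are actually fair. Under a uniform Beta(1,1) prior, selecting the
# coin with the highest posterior mean yields estimates biased above 0.5 --
# the optimizer's curse. Under a point-mass prior at 0.5 (the literal meaning
# of "unbiased coin"), every posterior mean is exactly 0.5 and the bias is gone.
import random

random.seed(0)
n_trials, n_coins, n_flips = 2000, 10, 20

picked = []
for _ in range(n_trials):
    posterior_means = []
    for _ in range(n_coins):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        posterior_means.append((heads + 1) / (n_flips + 2))  # Beta(1,1) update
    picked.append(max(posterior_means))

print(sum(picked) / n_trials)  # roughly 0.65: overestimates the true 0.5
print(0.5)                     # point-mass prior: no data can move it, no bias
```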

If you build enough models each trial, you might find the models you select are actually overfitting to the validation set (memorizing it), sometimes to the point that the models with highest validation accuracy will tend to have worse test accuracy than models with validation accuracy in a lower interval.

In this case we have a prior expectation that simpler models are more likely to be effective.

Do we have a prior expectation that one kind of charity is better? Well if so, just factor that in, business as usual. I don't see the problem exactly.
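The validation-overfitting effect described in the quote is easy to reproduce, by the way. Here's a minimal sketch (my construction) where every "model" is pure noise:

```python
# 200 candidate "models" that are really just coin flips (true accuracy 0.5).
# The one with the best validation accuracy looks good purely by luck, and
# its accuracy on an independent test set regresses back toward 0.5.
import random

random.seed(1)
n_models, n_val, n_test = 200, 100, 100
val_labels = [random.randint(0, 1) for _ in range(n_val)]
test_labels = [random.randint(0, 1) for _ in range(n_test)]

best_val_acc, chosen_test_acc = -1.0, None
for _ in range(n_models):
    val_acc = sum(random.randint(0, 1) == y for y in val_labels) / n_val
    test_acc = sum(random.randint(0, 1) == y for y in test_labels) / n_test
    if val_acc > best_val_acc:
        best_val_acc, chosen_test_acc = val_acc, test_acc

print(best_val_acc, chosen_test_acc)  # e.g. ~0.63 vs ~0.5: the gap is the curse
```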

3. Due to the related satisficer’s curse, when doing multiple hypothesis tests, you should adjust your p-values upward or your p-value cutoffs (false positive rate, significance level threshold) downward in specific ways to better predict replicability.
4. The satisficer’s curse also guarantees that empirical study publication based on p-value cutoffs will cause published studies to replicate less often than their p-values alone would suggest.

Bayesian EV estimation doesn't do hypothesis testing with p-value cutoffs. This is the same problem popping up in a different framework; yes, it will require a different solution in that context, but they are separate.

Now, if you treat your priors as posteriors that are conditional on a sample of random observations and arguments you’ve been exposed to or thought of yourself, you’d similarly find a bias towards interventions with “lucky” observations and arguments. For the intervention you do select compared to an intervention chosen at random, you’re more likely to have been convinced by poor arguments that support it and less likely to have seen good arguments against it, regardless of the intervention’s actual merits, and this bias increases the more interventions you consider. The solution supported by Proposition 2 doesn’t correct for the number of interventions under consideration.

The proposed solution applies here too, just do (simplistic, informal) posterior EV correction for your (simplistic, informal) estimates.

Of course that's not going to be very reliable. But that's the whole point of using such simplistic, informal thinking. All kinds of rigor get sacrificed when charities are dismissed for sloppy reasons. If you think your informally-excluded charities might actually turn out to be optimal then you shouldn't be informally excluding them in the first place.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T03:23:18.331Z · score: 4 (2 votes) · EA · GW

Venture capitalists frequently fund things that they're extremely uncertain about. It's my impression that Bayesian calculations rarely play into these situations. Instead, smart VCs think hard and critically and come to conclusions based on processes that they probably don't fully understand themselves.

I interned for a VC, albeit a small and unknown one. Sure, they don't do Bayesian calculations, if you want to be really precise. But they make extensive use of quantitative estimates all the same. If anything, they are cruder than what EAs do. As far as I know, they don't bother correcting for the optimizer's curse! I never heard it mentioned. VCs don't primarily rely on the quantitative models, but other areas of finance do. If what they do is OK, then what EAs do is better. This is consistent with what finance professionals told me about the financial modeling that I did.

Plus, this is not about the optimizer's curse. Imagine that you told those VCs that they were no longer choosing which startups are best, instead they now have to select which ones are better-than-average and which ones are worse-than-average. The optimizer's curse will no longer interfere. Yet they're not going to start relying more on explicit Bayesian calculations. They're going to use the same way of thinking as always.

And explicit Bayesian calculation is rarely used by anyone anywhere. Humans encounter many problems which are not about optimizing, and they still don't use explicit Bayesian calculation. So clearly the optimizer's curse is not the issue. Instead, it's a matter of which kinds of cognition and calculation people are more or less comfortable with.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T03:23:00.807Z · score: 2 (1 votes) · EA · GW
it's hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.

Explicit Bayesian calculation is a way of choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking. (With math too.)

I'm guessing you mean we should use intuition for the final selection, instead of quantitative estimates. OK, but I don't see how the original post is supposed to back it up; I don't see what the optimizer's curse has to do with it.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T03:04:05.673Z · score: 3 (2 votes) · EA · GW
In situations with lots of uncertainty (where the optimizer's curse is liable to cause significant problems), it's worth paying much higher costs to entertain multiple models (or do other things I suggested) than it is in cases where the optimizer's curse is unlikely to cause serious problems.

I don't agree. Why is the uncertainty that comes from model uncertainty - as opposed to any other kind of uncertainty - uniquely important for the optimizer's curse? The optimizer's curse does not discriminate between estimates that are too high for modeling reasons, versus estimates that are too high for any other reason.

The mere fact that there's more uncertainty is not relevant, because we are talking about how much time we should spend worrying about one kind of uncertainty versus another. "Do more to reduce uncertainty" is just a platitude; we always want to reduce uncertainty.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T03:01:59.345Z · score: 2 (1 votes) · EA · GW
Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration and how unlikely the data that makes the intervention look good.

That's the basic idea given by Muehlhauser: corrected posterior EV estimates.
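For anyone unfamiliar, the correction is just Bayesian shrinkage: pull each noisy estimate toward the prior mean before taking the max. A minimal sketch with a normal-normal model (numbers hypothetical):

```python
# Each naive EV estimate is shrunk toward the prior mean in proportion to its
# noise; only then do we pick the maximum. Very noisy outliers shrink hardest.

def corrected_ev(estimate, prior_mean, prior_var, noise_var):
    # posterior mean for a normal prior combined with a normal likelihood
    weight = prior_var / (prior_var + noise_var)
    return prior_mean + weight * (estimate - prior_mean)

prior_mean, prior_sd = 5.0, 3.0
naive = [(12.0, 2.0), (9.0, 1.0), (20.0, 10.0)]  # (estimate, noise sd), hypothetical

corrected = [corrected_ev(est, prior_mean, prior_sd**2, sd**2) for est, sd in naive]
print(corrected)  # ~[9.8, 8.6, 6.2]

# The naive winner (20.0, measured very noisily) shrinks to ~6.2 and loses;
# the corrected pick is the 12.0 estimate, whose lower noise earns more trust.
```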

You might also want to spend more effort looking for arguments and evidence against each option the more options you're considering.

As opposed to equal effort for and against? OK, I'm satisfied. However, if I've done the corrected posterior EV estimation, and then my specific search for arguments-against turns up short, then I should increase my EV estimates back towards the original naive estimate.

When considering a larger number of options, you could use some randomness in your selection process

As I recall, that post found that randomized funding doesn't make sense. Which 100% matches my presumptions; I do not see how it could improve funding outcomes.

or spread funding further

I don't see how that would improve funding outcomes.

If I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.

In Bayesian rationality, you always have a prior. You seem to be considering or defining things differently.

Here we would probably say that your actual prior exists and is simply some kind of aggregate of these possible priors; therefore it's not the case that we should leap outside our own priors in some sort of violation of standard Bayesian rationality.

Comment by kbog on Does climate change deserve more attention within EA? · 2019-04-20T03:19:36.356Z · score: 3 (2 votes) · EA · GW

OK, CSS5 will address this by looking more broadly at the literature and the articles you cite, or maybe I will just focus more on the economist survey.

Comment by kbog on Solution to Housing Crisis · 2019-04-20T03:12:38.120Z · score: 3 (2 votes) · EA · GW

You seem to be trying to analyze these things with personal intuition, imagining what will happen when people get something like housing or a minimum wage. I'd say it's better to look at reliable studies and surveys. Here are some sources:

Additionally, you seem to be looking at this as a matter of consumer choices, recommending rent to own instead of buying or regular renting. I'd say it's usually better to trust that consumers are making smart choices on their own, and instead worry about government policies to empower them.

Comment by kbog on What are people's objections to earning-to-give? · 2019-04-19T01:22:01.645Z · score: 2 (1 votes) · EA · GW

One-sided questions make more sense when there is an established position to question. If it had become a common point of view that we simply feel good about ETG, then it would make sense to seek out opposing views. But, to my knowledge, no one has made such a case for ETG before. Asking for people's feelings about it is mainly a step into new territory. To the extent that gut feelings about ETG are implicitly circulated within EA, they seem to be generally negative, which means that specifically asking people for gut feelings in favor of ETG would make more sense.

A worldview gives specific reasons to support or oppose something, that's different from feelings.

Knowing how ETGers actually feel about their work is different from generically asking how people feel about it. The former of course is useful evidence.

Comment by kbog on Is Modern Monetary Theory a good idea? · 2019-04-19T01:09:22.316Z · score: 4 (3 votes) · EA · GW

If it's crankery then it shouldn't get a fairly neutral report.

Comment by kbog on Does climate change deserve more attention within EA? · 2019-04-17T20:51:54.315Z · score: 2 (1 votes) · EA · GW
If the Burke et al. article that you're largely basing the 26% number on is accurate (which I strongly doubt)

What is wrong with it?

it seems like trying to cause economic activity to move to more moderate climates might be an extremely effective intervention.

Economic activity already goes to wherever it will be the most profitable. I don't see why we would expect companies to predictably err.

And, even if so, I don't share the intuition that it might be extremely effective.

Comment by kbog on Candidate Scoring System, Third Release · 2019-04-17T18:29:55.577Z · score: 2 (1 votes) · EA · GW

He will be in it.

Comment by kbog on Does climate change deserve more attention within EA? · 2019-04-17T07:44:45.692Z · score: 6 (4 votes) · EA · GW

Some of this reasoning about social impacts, nonzero probability of severe collapse, dynamic effects, etc, applies equally well to many other issues. Your comment on S-risks - you could tell a similar story for just about any cause area. And everyone has their own opinion on what kind of biases EAs have. So a basic GDP-loss estimate is not a very bad way to approach things for comparative purposes. You are right though that the expected costs are a lot more than 2% or something tiny like that.

In Candidate Scoring System I gave rough weights to political issues on the basis of long-run impact from ideal US federal policy. I estimated the global GDP costs of future GHG emissions at 26% by 2090, and used that to give climate change a weight of 2.9. Compare to animal farming (15.6), existential risks from emerging technologies (15), immigration (9), zoning policy (1.5), and nuclear security (1.2).

Whether climate adaptation could also be potentially high value for EAs

For the same game theoretic reasons that make climate change a problem in the first place, I would expect polities to put too much emphasis on adaptation as opposed to prevention.

Comment by kbog on Is Modern Monetary Theory a good idea? · 2019-04-17T05:24:08.878Z · score: 9 (6 votes) · EA · GW

I can only give a secondhand perception - from people (including economists) offhandedly discussing it in blogs, social media, etc., MMT appears to be crankery that sometimes violates economic knowledge, but sometimes is so vaguely defined that it's "not even wrong".

If this is true, seeing it published under "Future Perfect" is worrying.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-17T04:15:48.638Z · score: 5 (3 votes) · EA · GW

Also, if your criterion for choosing an intervention is how frequently it still looks good under different models and priors, as people seem to be suggesting in lieu of EV maximization, you will still get similar curses - they'll just apply to the number of models/priors, rather than the number in the EV estimate.

Comment by kbog on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-17T03:31:38.186Z · score: 3 (2 votes) · EA · GW
it seems like the crux is often the question of how easy it is to choose good priors

Before anything like a crux can be identified, complainants need to identify what a "good prior" even means, or what strategies are better than others. Until then, they're not even wrong - it's not even possible to say what disagreement exists. To airily talk about "good priors" or "bad priors" being "easy" or "hard" to identify is just empty phrasing and suggests confusion about rationality and probability.

Political culture at the edges of Effective Altruism

2019-04-12T06:03:45.822Z · score: 8 (22 votes)

Candidate Scoring System, Third Release

2019-04-02T06:33:55.802Z · score: 11 (8 votes)

The Political Prioritization Process

2019-04-02T00:29:43.742Z · score: 9 (3 votes)

Impact of US Strategic Power on Global Well-Being (quick take)

2019-03-23T06:19:33.900Z · score: 13 (9 votes)

Candidate Scoring System, Second Release

2019-03-19T05:41:20.022Z · score: 30 (15 votes)

Candidate Scoring System, First Release

2019-03-05T15:15:30.265Z · score: 11 (6 votes)

Candidate scoring system for 2020 (second draft)

2019-02-26T04:14:06.804Z · score: 11 (5 votes)

kbog did an oopsie! (new meat eater problem numbers)

2019-02-15T15:17:35.607Z · score: 31 (19 votes)

A system for scoring political candidates. RFC (request for comments) on methodology and positions

2019-02-13T10:35:46.063Z · score: 24 (11 votes)

Vocational Career Guide for Effective Altruists

2019-01-26T11:16:20.674Z · score: 26 (19 votes)

Vox's "Future Perfect" column frequently has flawed journalism

2019-01-26T08:09:23.277Z · score: 33 (30 votes)

A spreadsheet for comparing donations in different careers

2019-01-12T07:32:51.218Z · score: 6 (1 votes)

An integrated model to evaluate the impact of animal products

2019-01-09T11:04:57.048Z · score: 36 (20 votes)

Response to a Dylan Matthews article on Vox about bipartisanship

2018-12-20T15:53:33.177Z · score: 56 (35 votes)

Quality of life of farm animals

2018-12-14T19:21:37.724Z · score: 3 (5 votes)

EA needs a cause prioritization journal

2018-09-12T22:40:52.153Z · score: 3 (13 votes)

The Ethics of Giving Part Four: Elizabeth Ashford on Justice and Effective Altruism

2018-09-05T04:10:26.243Z · score: 5 (5 votes)

The Ethics of Giving Part Three: Jeff McMahan on Whether One May Donate to an Ineffective Charity

2018-08-10T14:01:25.819Z · score: 2 (2 votes)

The Ethics of Giving part two: Christine Swanton on the Virtues of Giving

2018-08-06T11:53:49.744Z · score: 4 (4 votes)

The Ethics of Giving part one: Thomas Hill on the Kantian perspective on giving

2018-07-20T20:06:30.020Z · score: 7 (7 votes)

Nothing Wrong With AI Weapons

2017-08-28T02:52:29.953Z · score: 14 (20 votes)

Selecting investments based on covariance with the value of charities

2017-02-04T04:33:04.769Z · score: 5 (7 votes)

Taking Systemic Change Seriously

2016-10-24T23:18:58.122Z · score: 7 (11 votes)

Effective Altruism subreddit

2016-09-25T06:03:27.079Z · score: 9 (9 votes)

Finance Careers for Earning to Give

2016-03-06T05:15:02.628Z · score: 9 (11 votes)

Quantifying the Impact of Economic Growth on Meat Consumption

2015-12-22T11:30:42.615Z · score: 22 (30 votes)