Comment by davidmanheim on Three Biases That Made Me Believe in AI Risk · 2019-02-14T10:10:05.109Z · score: 3 (3 votes) · EA · GW

The arguments about pre-driven cars seem to draw a sharp line between understanding and doing. The obvious counter seems to be asking whether your brain is "pre-programmed" or "self-directed". (If this seems confused, I strongly recommend the book "Good and Real" as a way to think more clearly about this question.)

I'm also confused about why the meaning bias is a counter-argument to specific scenarios and estimates, though that confusion mostly stems from my assumption that this claim is related to Pinker's argument. Otherwise, I don't understand why "fertile ground for motivated reasoning" isn't simply a reason to look for outside-view estimates - the AI skeptics mostly say we don't need to worry about SAI for around 40 years, which seems consistent with investing far more in risk mitigation now.

Comment by davidmanheim on How should large donors coordinate with small donors? · 2019-02-04T10:55:14.621Z · score: 1 (1 votes) · EA · GW

Wei - a few points in response:

1) There isn't really a lack of funds for new effective charities - there are a variety of grant programs, run by CEA and others, that will help such efforts get started.

2) The coordination overhead between major donors, researchers, and non-EA orgs is already prohibitive. (Some coordination costs grow super-exponentially in the number of parties, and there are already a lot of groups involved - see the rough sketch after this list.)

3) I'm unsure that coordinating would avoid major costs, or that it would uncover opportunities that would otherwise be missed. Small donors can give to the major charities via GiveWell fairly easily, and can choose any other cause on their own.

4) Having a "give here" suggestion/priority list seems to create potentially damaging correlation between givers' priorities - we'd probably prefer to let donors make their own allocations. (Though GiveWell does publish recommendations for charities they don't fund themselves but nonetheless suggest are worth funding, so I'm not sure anyone else sees this as an issue.)
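(Re: point 2 - a minimal, hypothetical sketch of why coordination overhead explodes as groups are added. Counting pairwise channels and possible coalitions is just one rough way to operationalize "coordination cost"; the group counts are made up for illustration.)

```python
from math import comb

def pairwise_channels(n_groups):
    # Distinct two-party coordination channels among n groups.
    return comb(n_groups, 2)

def possible_coalitions(n_groups):
    # Subsets of groups (size >= 2) that might need to agree as a bloc.
    return 2 ** n_groups - n_groups - 1

for n in (3, 5, 10, 20):
    print(n, pairwise_channels(n), possible_coalitions(n))
# Channels grow quadratically and potential coalitions grow exponentially,
# so adding even a few more parties multiplies the coordination burden.
```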

Comment by davidmanheim on Simultaneous Shortage and Oversupply · 2019-02-04T10:21:18.798Z · score: 3 (2 votes) · EA · GW

I don't think this is quite right. The people working at OpenAI are paid well, but at the same time they are taking huge cuts in salary compared to what they could earn elsewhere. (Goodfellow and Sutskever could be making millions anywhere.) And given the distribution of salaries, it's very likely that the majority of both OpenAI and DeepMind researchers are making under $200k - not a crazy amount for deep learning talent nowadays.

Comment by davidmanheim on Why we look at the limiting factor instead of the problem scale · 2019-02-04T10:12:08.986Z · score: 2 (1 votes) · EA · GW

This is spot-on, and as a matter of decision theory, the question is never "which outcome matters most," but rather "which action has the highest impact." This incorporates the economic issues with marginal investment, as well as the issues with constraints discussed above. I'd recommend Tiago Forte's series explaining the Theory of Constraints (ToC) as a better way to formalize the intuitive model presented in the post: https://praxis.fortelabs.co/theory-of-constraints-101-table-of-contents-8bbb6627915b/

As applied to EA, this suggests that we should build clear system models for interventions in order to identify how to help. The ToC model says that effort expended at any point in the system other than the limiting factor is wasted - double the funding but leave the logistical constraints on spending it unfixed, and you've helped not at all. (In fact, you might have made the problem worse by increasing the pressure on logistics management!)
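(A minimal sketch of that bottleneck logic, with made-up stage names and capacities purely for illustration: output is gated by the most constrained stage, so extra funding changes nothing until logistics is fixed.)

```python
def throughput(stage_capacities):
    # In a serial system, realized output is set by the most constrained stage.
    return min(stage_capacities.values())

baseline        = {"funding": 100, "logistics": 40, "delivery": 80}
more_funding    = {**baseline, "funding": 200}      # double the funding only
fixed_logistics = {**baseline, "logistics": 90}     # relieve the actual constraint

print(throughput(baseline))         # 40
print(throughput(more_funding))     # still 40 - logistics is the limiting factor
print(throughput(fixed_logistics))  # 80 - delivery is now the new constraint
```

Once the constraint is relieved it moves elsewhere, which is exactly the ToC argument for re-identifying the limiting factor after each intervention.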

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-23T06:27:44.103Z · score: 1 (1 votes) · EA · GW

1) I agree that there is some confusion on my part, and on the part of most others I have spoken to, about how terminal values and morality do or do not get updated.

2) Agreed.

3) I will point to a possibly forthcoming paper/idea from Eric Drexler at FHI that makes this point, which he calls "pareto-topia". Despite the wonderful virtues of the idea, I'm unclear whether there is a stable game-theoretic mechanism that prevents a race-to-the-bottom outcome when fundamentally different values are being traded off. Specifically, in this case it's possible that different values lead to an inability to truthfully/reliably cooperate - a paved road to pareto-topia seems not to exist, and there might be no path at all.

Comment by davidmanheim on Challenges in Scaling EA Organizations · 2018-12-22T17:27:44.962Z · score: 1 (1 votes) · EA · GW

It's very likely that more organizations help, up to a point. The limit, which I think I failed to make clear but is implicit, is that coordination pressures and failures always exist - either between organizations or within them. Large organizations have scaling efficiencies because they can coordinate at lower cost than markets can. (This is what a couple of economists recently won Nobel prizes for, for work now referred to as the theory of the firm.) Those efficiencies are greatly reduced when multiple organizations are involved, but I think a few of my suggestions - specialization, referral of promising work, and coordinating bodies - might help somewhat with that.

I would (a bit weakly) agree that as of three years ago, growth of new EA organizations was probably a bit below optimal. I'm not following all of the threads of new organizations closely, but from what I have seen, I would (even more weakly) guess that the rate of new organizations forming now is probably at or above the point of effective returns, at least for existential risk organizations. That's why I think coordination is particularly useful now. Still, attempts to find anything like an optimal rate seem like a waste of time. We simply don't understand the questions or the domain well enough to answer them conclusively, except perhaps approximately and in retrospect. (Even if we did have such understanding or insight, I don't think we would be able to convince anyone to follow the guidelines, given that the optimal rate is almost certainly not a Nash equilibrium.)

Challenges in Scaling EA Organizations

2018-12-21T10:53:27.639Z · score: 37 (18 votes)
Comment by davidmanheim on New web app for calibration training funded by the Open Philanthropy Project · 2018-12-20T12:34:54.500Z · score: 1 (1 votes) · EA · GW

Agreed, but a fairly large number of the questions were so ill-specified that I was basically trying to decide what order of magnitude was relevant for games I not only knew nothing about, but couldn't find clarity on even after knowing the answer, over and over.

A made-up example, similar to some of the questions I saw: "England out-scored France in 1982 by how much?" What sport is being referred to? Which series, single game, Olympics, or season?

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-20T12:30:24.428Z · score: 1 (1 votes) · EA · GW

Thanks for replying.

I'd agree with your points regarding the limited scope of the first and second items, but I don't understand how anyone can make prioritization decisions when we have no discounting - it's nearly always better to conserve resources. If we have discounting for costs but not benefits, however, I worry the framework is incoherent. This is a much more general confusion I have, so the fact that you didn't address or resolve it is unsurprising.
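(To make the "conserve resources" worry concrete, here is a toy calculation with made-up numbers - a 5% return on invested funds and a fixed benefit per dollar eventually spent. With undiscounted benefits, waiting always looks better, so no finite spending date is ever optimal; a benefit discount rate above the growth rate restores a well-defined answer.)

```python
def value_of_giving(years_waited, budget=1.0, growth=0.05, discount=0.0):
    # Invest the budget for `years_waited` years, spend it, and discount the
    # resulting benefit back to today (discount may be zero).
    future_budget = budget * (1 + growth) ** years_waited
    return future_budget / (1 + discount) ** years_waited

# No discounting of benefits: value keeps rising, so "conserve" always wins.
print([round(value_of_giving(t, discount=0.0), 2) for t in (0, 10, 50)])   # [1.0, 1.63, 11.47]
# Discount rate above the growth rate: giving now is optimal.
print([round(value_of_giving(t, discount=0.08), 2) for t in (0, 10, 50)])  # [1.0, 0.75, 0.24]
```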

Re: S-risks, I'm wondering whether, given some perspectives, we need to be concerned about value misalignment leading to arbitrarily large negative utility. I'm concerned that human values are incoherent, and that any given maximization is likely to cause arbitrarily large "suffering" for some values - and if there are multiple groups with different values, this might mean any maximization imposes maximal suffering according to the values of the large majority of people.

For example, if 1/3 of humanity feels that human liberty is a crucial value, without which human pleasure is worse than meaningless, another 1/3 views earning reward as critical, and the last 1/3 views bliss/pure hedonium as optimal, then tiling the universe with human brains maxed out for any one of these would be a hugely negative outcome for the other 2/3 of humanity - much worse than extinction.

Comment by davidmanheim on New web app for calibration training funded by the Open Philanthropy Project · 2018-12-16T10:49:44.601Z · score: 1 (1 votes) · EA · GW

Cool, but a fair number of the questions are vague or lack needed context.

Still, for people who aren't used to self-calibration, I'd agree with the above assessment that, if not the single most valuable, it's really up there on the list of "most valuable 4 hours of rationality training you can do."

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-16T09:52:26.249Z · score: 10 (7 votes) · EA · GW

Great work. A few notes, in descending order of importance, which I'd love to see addressed at least in brief:

1) This seems not to engage with the questions about short-term versus long-term prioritization and discount rates. I'd think that the implicit assumptions should be made clearer.

2) It doesn't seem obvious to me that, given universalist assumptions about the value of animals or other non-human species, the long-term future is affected nearly as much by the presence or absence of humans. Depending on uncertainties about the Fermi hypothesis and the viability of non-human animals developing sentience over long time frames, this might greatly matter.

3) Reducing the probability of technological existential risks may require increasing the probability of human stagnation.

4) S-risks are plausibly more likely if moral development is outstripped by growth in technological power over relatively short time frames, and existential catastrophe has a comparatively limited downside.

Comment by davidmanheim on Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) · 2018-12-16T09:41:08.108Z · score: 1 (1 votes) · EA · GW

You might be interested in my recent paper, "Questioning Estimates of Natural Pandemic Risk" - https://www.liebertpub.com/doi/pdf/10.1089/hs.2018.0039

Comment by davidmanheim on Is Suffering Convex? · 2018-11-14T14:46:29.903Z · score: 1 (1 votes) · EA · GW

As I said in response to a different comment, I don't object to making the claim that we should treat them as morally equal due to ignorance, but that's very different from your claim that we can assume the intensities are equal.

I'm also not sure what to do with the claim that there might be different morally relevant dimensions that we cannot collapse, because if that is true, we are in a situation where 1 point of "artistic suffering" is incommensurable with 1 billion points of "physical pain." If so, we're punting - because we do in fact make decisions between options on some basis, despite the supposedly "incommensurable" moral issues.

Comment by davidmanheim on Is Suffering Convex? · 2018-11-14T14:41:42.035Z · score: 1 (1 votes) · EA · GW

I agree that it is morally justifiable to treat them as equal absent convincing evidence, but I don't think it's correct to claim we should assume they are equal.

Comment by davidmanheim on Is Suffering Convex? · 2018-10-23T11:59:42.160Z · score: 1 (1 votes) · EA · GW

Nice find, definitely a related point!

Comment by davidmanheim on Is Suffering Convex? · 2018-10-23T11:40:57.358Z · score: 2 (2 votes) · EA · GW

I don't understand how point 1 is possible - sure, given the model, the maximum could be higher than that of all animals, or even of all humans, but this contradicts my experience. My experience is that children suffer more intensely than adults, and given the emotional complexity of many higher mammals, they are in those terms more sophisticated beings than babies, if not toddlers.

Regarding point 2, yes, that could reduce average suffering, which matters for average utilitarians, but does not mitigate experienced suffering for any other beings, which I think most other strains of utilitarianism would care about more.

Comment by davidmanheim on Is Suffering Convex? · 2018-10-22T03:55:58.760Z · score: 1 (1 votes) · EA · GW

I think the adult suffering from anticipation (and from uncertainty) is limited, via both contextualization and hedonic adaptation. I'm unsure how the balance of intense pleasure / pain works for children. They may experience pleasure more intensely, but I don't see it as much. And it's plausible that animals also experience pleasure more intensely, but I'm agnostic about that claim.

Is Suffering Convex?

2018-10-21T11:44:48.259Z · score: 12 (10 votes)