Posts

Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z · score: 11 (9 votes)
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z · score: 37 (18 votes)
Is Suffering Convex? 2018-10-21T11:44:48.259Z · score: 12 (10 votes)

Comments

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-30T14:49:12.114Z · score: 2 (2 votes) · EA · GW
In proportion to the needs...

Again, I don't think that's relevant. I can easily ruin systems with a poorly spent $10m regardless of how hard it is to fix them.

I am not sure I understand why international funding should displace local expertise...

You're saying that these failure modes are avoidable, but I'm not sure they are in fact being avoided.

The building of those health institutions takes a long time, the results come slowly with a time lag of 10+ years.

Yes, and slow feedback is a great recipe for not noticing how badly you're messing things up. And yes, classic GiveWell type analysis doesn't work well to consider complex policy systems, which is exactly why they are currently aggressively hiring people with different types of relevant expertise to consider those types of issues.

And speaking of this, here's an interesting paper Rob Wiblin just shared on the complexity and difficulty of decision-making in these domains: https://philiptrammell.com/static/simplifying_cluelessness.pdf

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-30T04:59:51.577Z · score: 2 (2 votes) · EA · GW

Yes, there are plausible tipping points, but I'm not talking about that. I'm arguing that this isn't "small amounts of money," and it is well into the amounts where international funding displaces building local expertise, makes it harder to focus on building health systems generally instead of focusing narrowly, undermines the need for local governments to take responsibility, etc.

I still think these are outweighed by the good, but the impacts are not trivial.

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-17T12:42:44.986Z · score: 1 (1 votes) · EA · GW

I don't see how your argument responds to mine. The amounts don't need to be big enough to directly solve problems in order to be large enough to have critical systemic side effects.

Comment by davidmanheim on Running Effective Altruism Groups: A Literature Review · 2019-07-10T06:06:15.935Z · score: 1 (1 votes) · EA · GW

Great work!

Note typo/missing word: "Public talks on non-core topics don’t new members or regular attendees."

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-10T06:01:14.962Z · score: 4 (3 votes) · EA · GW

We're well past the point where unintended systemic effects can be ignored. GiveWell has directly moved or directed half a billion dollars, and the impact on major philanthropic giving is a multiple of that. Malaria and schistosomiasis initiatives are significantly affected by this, and just as those effects cannot be dismissed, neither can the conclusion that these are large-scale initiatives, with all the attendant pitfalls.

Comment by davidmanheim on How likely is a nuclear exchange between the US and Russia? · 2019-06-27T10:20:52.081Z · score: 1 (1 votes) · EA · GW

The second event was elicited as a conditional probability, so the math is correct, though again, it seems that the inputs are not. (But the language used here seems not to have noted that it was conditional. I may just be confused about what it is trying to say, as it seems unclear to me. Also, the GJP report would have explicitly discussed the superforecasters' thoughts on what might cause the question to trigger, so again, I am confused by the footnote.)
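(To spell out the arithmetic with purely illustrative numbers of my own, not the actual GJP figures: if the first question elicits P(exchange) = 1% and the second elicits P(full-scale war | exchange) = 10%, then the joint probability is 0.01 × 0.10 = 0.001, i.e. 0.1%. Multiplying is the right operation for a conditional elicitation; any disagreement is about the inputs, not the math.)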

Comment by davidmanheim on How likely is a nuclear exchange between the US and Russia? · 2019-06-23T18:41:23.875Z · score: 8 (3 votes) · EA · GW

Note: I can't discuss this in detail, since it's covered by an NDA, and I haven't seen the report that OpenPhil received, but compared to what I see as a superforecaster on these questions, the numbers you have from GJP look wrong.

Comment by davidmanheim on Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? · 2019-06-18T04:50:50.071Z · score: 6 (4 votes) · EA · GW

I'm working with the FHI Bio team, and we don't have one, and aren't aware of any. At the same time, building the components of such a forecast is a to-do item on our list, and we have a number of ideas and leads on how this can or should be done well. (I have done some early-stage, very rough expert elicitation on the subject.)

If there are people interested in developing such a timeline with proper treatment of uncertainties, and working on forecasting tools and similar on the topic, I'd be very interested in chatting about how they want to proceed, working with them and/or supporting their work, and finding collaborators and resources for doing so in a way that supports other research in the area.


Comment by davidmanheim on Has your "EA worldview" changed over time? How and why? · 2019-04-07T08:24:00.432Z · score: 3 (2 votes) · EA · GW

I have a paper I've been kicking around in the back of my head for a couple years to formalize essentially this idea via economic/financial modern portfolio theory and economic collective action problem theory - but *someone* has me working on more important problems and papers instead... Gregory. ;)

Comment by davidmanheim on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-24T12:11:13.118Z · score: 1 (1 votes) · EA · GW

It's an interesting point. Do you know who you've given it to, and would you consider sending them a separate copy of the survey once it's been released, to gauge effectiveness / relative effectiveness?

Comment by davidmanheim on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-24T12:09:58.631Z · score: 5 (3 votes) · EA · GW

Having done a little bit of graduate work on survey design, I'll put on my survey-design hat and offer a bunch of suggestions. If you'd like to discuss further, please feel free to reach out directly on - twitter / @gmail.com / calendly - my username is my full name.

Before getting to the question, a few points on sample design and research ethics:

1) You might want to provide distinct links to the survey for each subset of respondents, i.e. those you gave the book to and are asking directly, versus those who received it from someone else you gave it to. You may also want to link to a different survey for the secondary people who borrowed the book. This allows you to later check whether there are differences in effectiveness.

2) You should clarify that the data will be analyzed and shared, but will first have any personally identifying information removed. (i.e. don't share the raw results with email addresses with ANYONE.)

3) Best-best practice would be to pre-register your analysis plan.

On to the questions! I have a bunch of language and structure nitpicks I'd suggest changing:

re: 1 - "I haven't read/finished it yet, but I plan to" - I think that's ambiguous; people might have stopped reading but still be impacted. I'd change to the following options:

  • I read it
  • I haven't begun reading it yet - Please skip to the final 2 questions
  • I just began reading it and plan to finish - Please skip to the final 2 questions
  • I have read a substantial part of the book, but didn't finish it.
  • I do not plan on reading it.

(Also, don't ask for an email address here; instead say "If you have made any changes to your life based on this, please continue." People are more likely to feel willing to provide extra information as they move later into the survey.)

Relatedly, re: "2. Since reading the book, have you donated to any effective charities, that you wouldn't have otherwise? If so, please list each charity, with US dollar amount, on its own line."

I'm concerned that people will find the wording or the request pushy / invasive and refuse to answer or stop the survey completely. I'd suggest splitting this into a gentler 3-part version: (Again, people who start complying are more likely to continue filling out more-invasive questions.)

2. Since reading the book, have you donated to any effective charities that you wouldn't have otherwise? Yes/No

2a. If so, we would be interested in evaluating how effective giving away the book was, and would like to know the total amount you have donated. (# Box)

2b. If you are willing to provide further details please list the charities and US dollar amounts, on separate lines. (Textbox)

For the remaining questions, I'd also recommend refactoring and making the flow better (a rough sketch of the resulting skip logic follows the list below):

3. Have you considered or made any other life changes since reading the book? Yes / No

(If not, please skip to question 7.)

3a. If so, did you change or consider changing your: Dietary choices (See Q4) / Career Plans (See Q5) / Other (See Q6)? (Checkboxes)

4. If you changed your Dietary choices, have you become Vegan (4a) / Vegetarian (4a) / Reduced meat consumption (4b) / Other (4c) / Considered this but have not (yet) changed anything (4c)?

4a. How many months ago?

4b. By how much did you reduce your egg or meat consumption?

4c. What else did you change / consider changing?

5. Have you made or considered making career changes? (Current Q6 options)

5a. Details (text)

6. Other changes (text)

7. Put question 10 here, before asking about the follow up.

Also: Give them the option to provide email addresses instead of asking their friend themselves, if they think it would be less intrusive. Note that the email will be short, and the email addresses will not be kept beyond sending a single email asking them to take the survey. (Note: I ALSO think people are a bit more likely to send the request themselves if given this option as an alternative.)

8. Then, Q9 / If you would be willing to take another follow-up survey, please enter... (Make the language cover both the "haven't read / finished yet" group and the group being asked for a follow-up after reading - you will know which is which based on the above.)
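To make the branching explicit, here's a rough sketch of the skip logic I'm proposing. The question IDs and answer labels are just my shorthand for the items above, not the wording of the existing survey:

```python
# Sketch of the proposed skip logic; IDs and labels are shorthand for the
# suggestions above, not the actual survey wording.
SKIP_LOGIC = {
    "Q1": {                          # Did you read the book?
        "read it": "Q2",
        "read substantial part": "Q2",
        "haven't begun": "Q7",       # skip to the final two questions
        "just began": "Q7",
        "don't plan to read": "Q7",
    },
    "Q2": {"yes": "Q2a", "no": "Q3"},   # donations
    "Q3": {"yes": "Q3a", "no": "Q7"},   # other life changes
    # Q3a then branches by checkbox to Q4 (diet), Q5 (career), and/or Q6 (other).
}

def next_question(current: str, answer: str, default: str) -> str:
    """Return the next question ID, falling back to the natural next question."""
    return SKIP_LOGIC.get(current, {}).get(answer, default)

# Example: someone who hasn't begun the book jumps straight to the closing questions.
assert next_question("Q1", "haven't begun", default="Q2") == "Q7"
```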

Comment by davidmanheim on Three Biases That Made Me Believe in AI Risk · 2019-02-14T10:10:05.109Z · score: 3 (3 votes) · EA · GW

The arguments about pre-driven cars seem to draw a sharp line between understanding and doing. The obvious counter seems to be asking if your brain is "pre-programmed" or "self-directed". (If this seems confused, I strongly recommend the book "Good and Real" as a way to better think about this question.)

I'm also confused about why the meaning bias is a counter-argument to specific scenarios and estimates, but that's mostly directed toward my assumption that this claim is related to Pinker's argument. Otherwise I don't understand why "fertile ground for motivated reasoning" isn't a reason to simply find outside view estimates - the AI skeptics mostly say we don't need to worry about SAI for around 40 years, which seems consistent with investing a ton more in risk mitigation now.

Comment by davidmanheim on How should large donors coordinate with small donors? · 2019-02-04T10:55:14.621Z · score: 1 (1 votes) · EA · GW

Wei - a few points in response:

1) There isn't really a lack of funds for new effective charities - there are a variety of grant programs, both those run by CEA and others, that will help such efforts get started.

2) The coordination overhead between major donors, researchers, and non-EA orgs is already prohibitively costly. (Coordination has some costs that expand super-exponentially, and there are already a lot of groups involved.)

3) I'm unsure that there are major costs that would be avoided by coordinating, or opportunities that would be found. Small donors can give to the major charities via GiveWell fairly easily, and can choose any other cause on their own.

4) Having a "give here" suggestion/priority list seems to create potentially damaging correlation between givers' priorities - we'd probably prefer to allow donors to make their own allocation. (Though GiveWell does publish recommendations for charities they don't support that they nonetheless suggest are worth funding, so I'm not sure anyone else sees this as an issue.)

Comment by davidmanheim on Simultaneous Shortage and Oversupply · 2019-02-04T10:21:18.798Z · score: 3 (2 votes) · EA · GW

I don't think this is quite right. The people working at OpenAI are paid well, but at the same time they are taking huge salary cuts compared to what they could earn elsewhere. (Goodfellow and Sutskever could be making millions anywhere.) And given the distribution of salaries, it's very likely that the majority of both OpenAI and DeepMind researchers are making under $200k - not a crazy amount for deep learning talent nowadays.

Comment by davidmanheim on Why we look at the limiting factor instead of the problem scale · 2019-02-04T10:12:08.986Z · score: 2 (1 votes) · EA · GW

This is spot-on, and as a matter of decision theory, the question is never "which outcome matters most," but is rather "what action has the highest impact." This incorporates the economic issues with marginal investment, as well as the issues with constraints discussed above. I'd recommend Tiago Forte's series explaining the "Theory of Constraints" (ToC) for a better way to formalize the intuitive model presented in the post: https://praxis.fortelabs.co/theory-of-constraints-101-table-of-contents-8bbb6627915b/

As applied to EA, this suggests we should build clear system models for interventions in order to identify how to help. The ToC model notes that effort expended at any point in the system other than the limiting factor is wasted - double the funding but don't fix the logistical constraints on spending it, and you've helped not at all. (In fact, you might have made the problem worse by increasing the pressure on the logistics management!)
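As a toy illustration of that last point (entirely made-up numbers, just to show the shape of the argument): if delivery is capped by whichever factor binds first, doubling the non-binding input accomplishes nothing.

```python
def deliveries(funding_units: int, logistics_capacity: int) -> int:
    """Toy ToC model: output is capped by whichever input binds first."""
    return min(funding_units, logistics_capacity)

print(deliveries(funding_units=100, logistics_capacity=60))   # 60 - logistics is the constraint
print(deliveries(funding_units=200, logistics_capacity=60))   # still 60 - doubled funding, no gain
print(deliveries(funding_units=100, logistics_capacity=120))  # 100 - relaxing the constraint helps
```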

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-23T06:27:44.103Z · score: 1 (1 votes) · EA · GW

1) I agree that there is some confusion on my part, and on the part of most others I have spoken to, about how terminal values and morality do or do not get updated.

2) Agreed.

3) I will point to a maybe-forthcoming paper / idea of Eric Drexler at FHI that makes this point, which he called "Pareto-topia". Despite the wonderful virtues of the idea, I'm unclear whether there is a stable game-theoretic mechanism that prevents a race-to-the-bottom outcome when fundamentally different values are being traded off. Specifically in this case, it's possible that different values lead to an inability to truthfully/reliably cooperate - a paved road to Pareto-topia seems not to exist, and there might be no path at all.

Comment by davidmanheim on Challenges in Scaling EA Organizations · 2018-12-22T17:27:44.962Z · score: 1 (1 votes) · EA · GW

It's very likely that more organizations help, up to a point. The limit, which I think I failed to make clear but is implicit, is that coordination pressures and failures always exist - either between organizations or within them. Large organizations have scaling efficiencies because they can coordinate at lower cost than markets. (This is what a couple of economists won Nobels for recently, for work now referred to as the theory of the firm.) Those efficiencies are greatly reduced when multiple organizations are involved, but I think a few of my suggestions - specialization, referral of promising work, and coordinating bodies - might help somewhat with that.

I would (a bit weakly) agree that as of three years ago, the growth of new EA organizations was probably a bit below optimal. I'm not following all of the threads of organizations closely, but from what I have seen, I would (even more weakly) guess that the rate of new organizations forming now is probably at or above the point of effective returns, at least for existential risk organizations. That's why I think coordination is particularly useful now. Still, attempts to find anything like an optimal rate seem like a waste of time. We simply don't understand the domain well enough to answer the question conclusively, except perhaps approximately and in retrospect. (Even if we did have such understanding or insight, I don't think we would be able to convince anyone to follow the guidelines, given that the optimal rate is almost certainly not a Nash equilibrium.)

Comment by davidmanheim on New web app for calibration training funded by the Open Philanthropy Project · 2018-12-20T12:34:54.500Z · score: 1 (1 votes) · EA · GW

Agreed, but a fairly large number of questions were so ill-specified that I was basically trying to decide what order of magnitude was relevant for games I not only knew nothing about, but couldn't find clarity on even after knowing the answer, over and over.

A sample question I'm making up, but which is similar to some of the questions I saw: "England out-scored France in 1982 by how much?" What sport is being referred to? What series, single game, Olympics, or season?

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-20T12:30:24.428Z · score: 1 (1 votes) · EA · GW

Thanks for replying.

I'd agree with your points regarding limited scope on the first and second items, but I don't understand how anyone can make prioritization decisions when we have no discounting - it's nearly always better to conserve resources. If we have discounting for costs but not benefits, however, I worry the framework is incoherent. This is a much more general confusion I have, and the fact that you didn't address or resolve it is unsurprising.

Re: S-Risks, I'm wondering whether we need to be concerned about value misalignment leading to arbitrarily large negative utility, given some perspectives. I'm concerned that human values are incoherent, and any given maximization is likely to cause arbitrarily large "suffering" for some values - and if there are multiple groups with different values, this might mean any maximization imposes maximal suffering on the large majority of people's values.

For example, if 1/3 of humanity feels that human liberty is a crucial value, without which human pleasure is worse than meaningless, another 1/3 views earning reward as critical, and the last 1/3 views bliss/pure hedonium as optimal, we would view tiling the universe with human brains maxed out for any one of these as a hugely negative outcome for 2/3 of humanity, much worse than extinction.

Comment by davidmanheim on New web app for calibration training funded by the Open Philanthropy Project · 2018-12-16T10:49:44.601Z · score: 1 (1 votes) · EA · GW

Cool, but a fair number of the questions are vague or lack needed context.

Still, for people who aren't used to self-calibration, I'd agree with the above assessment that, if not the most valuable, it's really up there on the list of "most valuable 4 hours of rationality training you can do."

Comment by davidmanheim on The expected value of extinction risk reduction is positive · 2018-12-16T09:52:26.249Z · score: 9 (8 votes) · EA · GW

Great work. A few notes, in descending order of importance, which I'd love to see addressed at least in brief:

1) This seems not to engage with the questions about short-term versus long-term prioritization and discount rates. I'd think that the implicit assumptions should be made clearer.

2) It doesn't seem obvious to me that, given universalist assumptions about the value of animals or other non-human species, the long-term future is affected nearly as much by the presence or absence of humans. Depending on uncertainties about the Fermi hypothesis and the viability of non-human animals developing sentience over long time frames, this might matter greatly.

3) Reducing the probability of technological existential risks may require increasing the probability of human stagnation.

4) S-risks are plausibly more likely if moral development is outstripped by growth in technological power over relatively short time frames, and existential catastrophe has a comparatively limited downside.

Comment by davidmanheim on Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) · 2018-12-16T09:41:08.108Z · score: 1 (1 votes) · EA · GW

You might be interested in my recent paper, "Questioning Estimates of Natural Pandemic Risk": https://www.liebertpub.com/doi/pdf/10.1089/hs.2018.0039

Comment by davidmanheim on Is Suffering Convex? · 2018-11-14T14:46:29.903Z · score: 1 (1 votes) · EA · GW

As I said in response to a different comment, I don't object to making the claim that we should treat them as morally equal due to ignorance, but that's very different from your claim that we can assume the intensities are equal.

I'm also not sure what to do with the claim that there might be different morally relevant dimensions that we cannot collapse, because if that is true, we are in a situation where 1 point of "artistic suffering" is incommensurable with 1 billion points of "physical pain." If so, we're punting - because we do in fact make decisions between options on some basis, despite the supposedly "incommensurable" moral issues.

Comment by davidmanheim on Is Suffering Convex? · 2018-11-14T14:41:42.035Z · score: 1 (1 votes) · EA · GW

I agree that it is morally justifiable to treat them as equal absent convincing evidence, but I don't think it's correct to claim we should assume they are equal.

Comment by davidmanheim on Is Suffering Convex? · 2018-10-23T11:59:42.160Z · score: 1 (1 votes) · EA · GW

Nice find, definitely a related point!

Comment by davidmanheim on Is Suffering Convex? · 2018-10-23T11:40:57.358Z · score: 2 (2 votes) · EA · GW

I don't understand how point 1 is possible - sure, given the model the maximum could be higher than all animals, or even than all humans, but this contradicts my experience. My experience is that children suffer more intensely than adults, and given the emotional complexity of many higher mammals, they are in those terms more sophisticated beings than babies, if not toddlers.

Regarding point 2, yes, that could reduce average suffering, which matters for average utilitarians, but does not mitigate experienced suffering for any other beings, which I think most other strains of utilitarianism would care about more.

Comment by davidmanheim on Is Suffering Convex? · 2018-10-22T03:55:58.760Z · score: 1 (1 votes) · EA · GW

I think the adult suffering from anticipation (and from uncertainty) is limited, via both contextualization and hedonic adaptation. I'm unsure how the balance of intense pleasure / pain works for children. They may experience pleasure more intensely, but I don't see it as much. And it's plausible that animals also experience pleasure more intensely, but I'm agnostic about that claim.