Posts

Formalizing the cause prioritization framework 2019-11-05T18:09:24.746Z · score: 20 (13 votes)

Comments

Comment by michael_wiebe on Why and how to start a for-profit company serving emerging markets · 2019-11-10T02:47:48.806Z · score: 7 (3 votes) · EA · GW
Even if you’re in an Anglophone country, you’ll need to be “bilingual” between local and tech-startup norms. At Wave, our internal culture emphasizes honesty, transparency and autonomy, which is very different from a typical, say, Senegalese work environment.

I'm curious to hear more about this. Can you give some examples of how the norms differ?

More generally, how feasible is it to export Silicon Valley's high product standards?

Comment by michael_wiebe on Overview of Capitalism and Socialism for Effective Altruism · 2019-11-08T07:41:21.490Z · score: 4 (3 votes) · EA · GW

This China scholar is pessimistic about the recent pivot to more state intervention.

https://cscc.sas.upenn.edu/podcasts/2019/04/12/ep-17-diagnosing-chinas-state-led-capitalism-yasheng-huang

Comment by michael_wiebe on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-11-08T07:15:14.467Z · score: 1 (1 votes) · EA · GW

I don't see that increasing marginal returns (IMR) pose any challenge to the standard EA cause prioritization method. IMR can easily be modeled as a tractability function that is increasing over part of its domain. Depending on funding levels, causes with IMR can have the highest marginal utility per dollar, and hence would be prioritized under the standard framework.
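A minimal sketch of that point, using an invented S-shaped solved-fraction curve (the functional form and numbers are purely illustrative, not from the paper):

```python
import numpy as np

importance = 100.0  # utils from solving the entire problem (invented)

def fraction_solved(x):
    """Illustrative S-shaped 'fraction of problem solved' as a function of
    dollars spent: marginal returns increase at first, then diminish."""
    return 1 / (1 + np.exp(-(x - 50) / 10))

def mu_per_dollar(x, dx=0.01):
    """MU/$ = importance * marginal fraction solved per dollar at spending x."""
    return importance * (fraction_solved(x + dx) - fraction_solved(x)) / dx

for spent in [0, 25, 50, 75, 100]:
    print(f"${spent:>3} spent so far: MU/$ = {mu_per_dollar(spent):.3f}")
# MU/$ rises up to ~$50 (the increasing-returns region), then falls; whether
# an IMR cause tops the rankings depends on its current funding level.
```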

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-07T01:45:57.033Z · score: 1 (1 votes) · EA · GW

Yes, the difficult part is applying the ITC framework in practice; I don't have any special insight there. But the goal is to estimate importance and the tractability function for different causes.

You can see how 80k tries to rank causes here.

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-07T00:08:43.948Z · score: 1 (1 votes) · EA · GW

The Google Docs method worked, but you can't control the image size.

I'm now using Imgur, which should be recommended to authors somewhere on this forum.

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-06T04:59:27.646Z · score: 2 (2 votes) · EA · GW

Okay, photos uploaded to Dropbox instead of Google Photos.

For future reference, this is what worked for me, using Dropbox:

  • Share -> Create link
  • Open in incognito browser (regular browser doesn't work)
  • Copy image address
  • Load into post
Comment by michael_wiebe on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T05:15:24.907Z · score: 5 (3 votes) · EA · GW

Note that 80k sometimes takes a softer tone, eg here:

An individual can only focus on one or two areas at a time, but a large group of people working together should most likely spread out over several.
When this happens, there are additional factors to consider when choosing a problem area. Instead of aiming to identify the single most pressing issue at the margin, the aim is to work out:
1. The ideal allocation of people over issues, and which direction that allocation should move in.
2. Where your comparative advantage lies compared to others in the group.
We call this the ‘portfolio approach’.
Comment by michael_wiebe on X-risk dollars -> Andrew Yang? · 2019-10-15T22:16:29.260Z · score: 5 (3 votes) · EA · GW

Yes, you're right that altruists have a more encompassing utility function, since they focus on social rather than individual welfare. But even if altruists invest more in elections than self-interested individuals do, it doesn't follow that elections are a good investment overall.

Sorry for being harsh, but my honest first impression was "this makes EAs look bad to outsiders".

Comment by michael_wiebe on The Future of Earning to Give · 2019-10-15T21:20:03.827Z · score: 1 (1 votes) · EA · GW

To add to Ben's argument, uncertainty about which cause is best can rationalize diversifying across multiple causes. If we use confidence intervals instead of point estimates, it's plausible that the top causes will have overlapping confidence intervals.
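A quick simulation of the overlap point, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented cost-effectiveness estimates (utils/$) with overlapping intervals.
cause_a = rng.normal(10.0, 3.0, 100_000)  # point estimate 10, sd 3
cause_b = rng.normal(8.0, 3.0, 100_000)   # point estimate 8, sd 3

print(f"P(cause B is actually better) = {(cause_b > cause_a).mean():.2f}")
# ~0.32: far from negligible, even though A's point estimate is higher.
```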

Comment by michael_wiebe on X-risk dollars -> Andrew Yang? · 2019-10-11T22:10:51.860Z · score: 2 (2 votes) · EA · GW
From an AI policy standpoint, having the leader of the free world on board would be big.

Can you elaborate on this?

This opportunity is potentially one that makes AI policy money constrained rather than talent constrained for the moment.

Is your claim that AI policy is currently talent-constrained, and having Yang as president would lead to more people working on it, thereby making it money-constrained?

Comment by michael_wiebe on X-risk dollars -> Andrew Yang? · 2019-10-11T22:05:51.154Z · score: -1 (6 votes) · EA · GW
It also seems surprisingly easy to have an outsize influence in the money-in-politics landscape. Peter Thiel's early investment in Trump looks brilliant today (at accomplishing the terrible goal of installing a protectionist).

This is naive. The low amount of money in politics is presumably an equilibrium outcome, and not because everyone has failed to consider the option of buying elections. And the reasonable conclusion is that Thiel got lucky, given how close the election was, not that he single-handedly caused Trump's victory.

Comment by michael_wiebe on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-11T04:10:33.687Z · score: 2 (2 votes) · EA · GW

Oops, I was wrong. I had skipped the intro section and was looking at the definitions later in the article.

Comment by michael_wiebe on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-11T00:01:47.720Z · score: 3 (2 votes) · EA · GW
Importance = good done / % of problem solved
Neglectedness = % increase in resources / extra $

I don't see how you get this from the 80k article. On my reading, their definition of importance is just the amount of good done (rather than good done per % of problem solved), and their definition of neglectedness is just the level of resources (rather than the percentage change per dollar). You should be clear that you're giving an interpretation of their model, and not just copying it.

Comment by michael_wiebe on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-10T21:53:53.195Z · score: 1 (1 votes) · EA · GW

This is how I think about the ITN framework:

What we ultimately care about is marginal utility per dollar, MU/$ (or marginal cost-effectiveness). ITN is a way of proxying MU/$ when we can't easily estimate it directly.

Definitions:

  • Importance = utility gained from solving the entire problem.
  • Tractability = percent of problem solved per dollar.
  • Neglectedness = amount of resources allocated to the problem.

Note that tractability can be a function of neglectedness: the amount of the problem solved per dollar will likely vary depending on how many resources are already allocated. This is to capture diminishing returns, as we expect the first dollar spent on a problem to be more effective in solving it than the millionth dollar.

Then, to get MU/$ as a function of neglectedness, we multiply importance and tractability: MU/$ = (utility from solving the total problem) × (percent solved per dollar), where the second factor is a function of resources already spent. Now that MU/$ is a function of resources, we can find where we are on the MU/$ curve by plugging in the current level of resources (neglectedness).

Here's an example without diminishing returns: suppose solving an entire problem increases utility by 100 utils, so importance = 100 utils. And suppose tractability is 1% of the problem solved per dollar. Note that this doesn't vary with resources spent, so there aren't diminishing returns. Then MU/$ = 100 utils * 0.01/$ = 1 util/$. Here, neglectedness (defined as resources spent) doesn't matter, except when spending hits $100 and the problem is fully solved.

Now let's introduce diminishing returns. Let's denote resources spent by x. As before, importance = 100 utils. But now, suppose tractability is (1/x)% of the problem solved per dollar. Now we have diminishing returns: the first dollar solves 1% of the problem, but the tenth dollar solves 0.1%. Here MU/$ = 100 utils * (1/x)%/$ = 1/x utils/$. To evaluate the MU/$ of this problem, we need to know how neglected it is, captured by how many resources, x, have already been spent.

Hence, importance and tractability define MU/$ as a function of neglectedness, and neglectedness determines the specific value of MU/$.
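The two examples above, as a toy calculation (same numbers as in the text):

```python
importance = 100.0  # utils from solving the whole problem

def mu_constant(x):
    """Constant returns: every dollar solves 1% of the problem."""
    return importance * 0.01  # 1 util/$, until the problem is solved at $100

def mu_diminishing(x):
    """Diminishing returns: the x-th dollar solves (1/x)% of the problem."""
    return importance * 0.01 / x  # 1/x utils/$

for x in [1, 10, 100]:
    print(f"x = ${x:>3}: constant {mu_constant(x):.2f} utils/$, "
          f"diminishing {mu_diminishing(x):.2f} utils/$")
# Under diminishing returns, neglectedness (resources already spent, x) is
# exactly the input you need to read MU/$ off the curve.
```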

Comment by michael_wiebe on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-26T03:42:49.540Z · score: 7 (3 votes) · EA · GW
The intuition behind the cost effectiveness of charter cities is that economic growth compounds, improving standards of living. Therefore, over a sufficiently long time horizon, any growth change will dwarf a level change, like those attributable to deworming or anti-malaria efforts.

I think this framing is misleading. A "growth change" just is a sequence of (increasing) level changes. The figure on p. 14 says that constant 6.5% growth over 50 years will increase GDP per capita to $90k. This is an accounting identity: there is no information in "6.5% growth over 50 years" that is not already in "GDP per capita increased from $4k to $90k over 50 years".
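The compounding arithmetic, to make the identity explicit:

```python
gdp = 4_000  # starting GDP per capita, as in the report
for year in range(50):
    gdp *= 1.065  # constant 6.5% annual growth
print(round(gdp))  # ~93,000: the same information as "reaches ~$90k"
```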

I'd prefer to have the discussion purely in levels, with much more detail on what specifically is increasing GDP. For example: "GDP will increase by $X million over the first five years, driven by increases of $A, $B, $C in sectors 1, 2, 3; there will be N1 new firms and N2 new residents..." If you can assume a growth rate, you can fill in these details. Also, I think the assumption of a constant growth rate over fifty years is too strong.

A 35 percent marginal contribution by CCI to the success of a charter city project is also a conservative estimate. CCI is uniquely positioned to bring together government officials, developers, and other interested parties and offer the expertise to plan a charter city and implement a new legal system.

I'd like to see a lot more discussion of what CCI's contribution is. This sounds like a political slogan.

Comment by michael_wiebe on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-26T02:53:27.864Z · score: 4 (3 votes) · EA · GW

The 100% P(success) is especially unreasonable given the failed attempts by Paul Romer in Honduras and Madagascar.

Comment by michael_wiebe on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2019-09-05T18:34:18.853Z · score: 1 (1 votes) · EA · GW

Some quick points:

we see how the output depends on a particular input even in the face of variations in all the other inputs—we don't hold everything else constant. In other words, this is a global sensitivity analysis.
  • I'm a bit confused. In the GiveDirectly case for 'value of increasing consumption', you're still holding the discount rate constant, right?
  • To address the recurring caveat, I wonder if we could plot the posterior mode/stdev against the length of the input confidence intervals. That is: taking GiveWell's point estimates as the prior means, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters? (A rough sketch follows below.)
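Here's a rough sketch of what I have in mind, with a made-up two-input cost-effectiveness function standing in for the real CEA (everything here is illustrative, not GiveWell's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost_effectiveness(consumption_value, discount_rate):
    """Stand-in for a GiveWell-style CEA: a consumption gain valued as a
    discounted perpetuity. Purely illustrative."""
    return consumption_value / discount_rate

point_estimate = cost_effectiveness(1.0, 0.04)  # the 'point estimate': 25.0

# Treat the point estimates as prior means, then widen the input
# uncertainty and see how the output distribution moves.
for scale in [0.1, 0.5, 1.0]:
    values = rng.normal(1.0, 0.2 * scale, 100_000)
    rates = rng.lognormal(np.log(0.04), 0.5 * scale, 100_000)
    ce = cost_effectiveness(values, rates)
    print(f"input sd scale {scale}: mean={ce.mean():5.1f}, sd={ce.std():5.1f} "
          f"(point estimate {point_estimate:.1f})")
```

In this toy version, the output mean drifts away from the point estimate as the input intervals widen, which is exactly the kind of pattern the plot would surface.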

More to come!

Comment by michael_wiebe on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2019-09-05T18:15:19.901Z · score: 1 (1 votes) · EA · GW

Yes, how does the posterior mode differ from GiveWell's point estimates, and how does this vary as a function of the input uncertainty (confidence interval length)?

Comment by michael_wiebe on Ask Me Anything! · 2019-08-20T18:18:47.827Z · score: 11 (8 votes) · EA · GW

Yes, some symbolic activities will turn out to be high-impact, but we have to beware survivorship bias (ie, think of all the symbolic activities that went nowhere).

Comment by michael_wiebe on Ask Me Anything! · 2019-08-15T00:27:25.983Z · score: 14 (8 votes) · EA · GW

Do you think economic growth is key to popular acceptance of longtermism, as increased wealth leads people to adopt post-materialist values?

Comment by michael_wiebe on 'Longtermism' · 2019-08-04T02:26:44.609Z · score: 18 (7 votes) · EA · GW

Yes, it's a bit question-begging to assert that the actions with the highest marginal utility per dollar are those targeting long-term outcomes.

Comment by michael_wiebe on 'Longtermism' · 2019-07-31T19:58:28.043Z · score: 6 (2 votes) · EA · GW

This definition avoids issues with being falsified by empirical questions of tractability, as well as flip-flopping between short- and longtermism.

Comment by michael_wiebe on 'Longtermism' · 2019-07-31T19:49:03.902Z · score: 7 (3 votes) · EA · GW
An alternative minimal definition [...] the (intrinsic) value of an outcome is the same no matter what time it occurs.

Why doesn't this do the job, if combined with the premise that we should maximize social welfare? I like to think in terms of a social planner maximizing welfare over all future generations. By assuming that value doesn't depend on time, we rule out pure time preference and thereby treat all generations equally. And maximizing social welfare gets us to stop privileging current generations (eg, by investing in reducing extinction risk at the expense of current consumption).

So I'd say: longtermism =df maximizing social welfare with no pure time preference.
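In planner notation (a standard formulation, not from the post): the planner maximizes W = Σ_{t=0}^{T} β^t · U(c_t) over generations t, and "no pure time preference" is the restriction β = 1, so a util accruing to generation t counts the same for every t; setting β < 1 would discount future generations purely for being in the future.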

Comment by michael_wiebe on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-24T21:11:26.139Z · score: 3 (2 votes) · EA · GW

Looks like you inserted the wrong table in the 'Indirect Harms' section.

Comment by michael_wiebe on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-17T18:00:00.618Z · score: 7 (6 votes) · EA · GW

Good post!

For distinguishing short-termist from long-termist, I think risk aversion is a key factor. Long-termists are risk-neutral, and are happy to roll the dice in hits-based giving, whereas short-termists are risk-averse, and want a higher certainty of doing good. (And overall, individuals should not identify as one or the other, because the overall portfolio is going to have a mix of both.)

Some points on China:

The way I think about the current slowdown is the same as the Soviet case: they got easy catch-up growth based on investment, but now they're hitting diminishing returns. To maintain growth, they have to switch to innovation, but that requires inclusive political institutions (to protect creative destruction). Also, China and Soviet Russia seem to be in the same boat as counterexamples to Acemoglu and Robinson's theory, since both achieved catch-up growth under extractive political institutions.

I wouldn't describe China's economic institutions as inclusive. They have a weird system of cronyism, where formal institutions are low quality, so firms use political connections to get stuff done (see this).

I'll note that OpenPhil is hiring researchers to "focus on causes in policy, scientific research, and global development". Hopefully their page on development won't be empty for much longer!

Comment by michael_wiebe on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-17T19:55:41.980Z · score: 2 (2 votes) · EA · GW

How is this different from EA Grants?

Comment by michael_wiebe on Why we look at the limiting factor instead of the problem scale · 2019-02-18T02:24:50.321Z · score: 3 (2 votes) · EA · GW

Good post!

First, tractability is not really currently used in this way. Right now, lots of claims are being made along the lines of “cause X should be focused on more due to it having a huge problem size” with no further reference to tractability.

Charitably, this is an "other things equal" claim. But I agree, it seems like people have just forgotten about tractability.

Comment by michael_wiebe on The Need for and Viability of an Effective Altruism Academy · 2019-02-16T19:30:16.367Z · score: 4 (3 votes) · EA · GW
The school could then be sustained by a Lambda school-style deal (Income Share Agreement): If you get employed outside EA, you owe 10% of your salary for 2 years once you make over $50,000. If you work for a think tank or inside EA, the cost is waived.

I can't imagine anyone signing this contract.

Comment by michael_wiebe on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T23:45:32.960Z · score: 8 (5 votes) · EA · GW
we don't really know how worried to be [about instability risk from AI]. These risks really haven't been researched much, and we shouldn't really take it for granted that AI will be destabilizing. It could be or it couldn't be. We just basically have not done enough research to feel very confident one way or the other.

This makes me worry about tractability. The problem of instability has been known for at least five years now, and we haven't made any progress?

Comment by michael_wiebe on Hit Based Giving for Global Development · 2019-02-07T01:05:34.784Z · score: 3 (3 votes) · EA · GW

How should we think about the big players in the field (World Bank, IMF, DFID, etc) — are they doing hits-based giving?

Comment by michael_wiebe on Tactical models to improve institutional decision-making · 2019-01-11T20:32:07.680Z · score: 2 (2 votes) · EA · GW

This post gives a nice framework, but it should be half as long.

Also, I wonder how much can be learned from an abstract understanding here. Consider economists studying firms: they can learn some general principles, but they're not in a position to go run a business ("if you're so smart, why aren't you rich?"). Similarly, my prior is that studying institutional decision-making is not going to produce actionable knowledge that can be used in the real world. That would require learning about the specific problems facing (say) DFID.

Comment by michael_wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-11-01T21:52:46.255Z · score: 0 (2 votes) · EA · GW

My understanding of Myers Briggs is that 'thinking' and 'feeling' are mutually exclusive, at least on average, in the sense that being more thinking-oriented means you're less feeling-oriented. The E vs. A framing is different, and it seems you could have people who score high in both. Is there any personality research on this?

Comment by michael_wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-11-01T21:22:41.780Z · score: 2 (4 votes) · EA · GW

In all likelihood men just hide their emotions better than women

I think citing this article weakens your overall argument. The study has n=30 and is likely more of the same low-quality non-preregistered social psychology research that is driving the replication crisis. Your argument is strong enough (to think about examples of men being snarky, insulting others, engaging in pissing contests) without needing to cite some flimsy study. Otherwise, people start questioning whether your other citations are trustworthy.

Comment by michael_wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-31T22:12:03.536Z · score: 2 (2 votes) · EA · GW

Is it true that men score higher than women in 'thinking' vs 'feeling'? If so, the EA community (being dominated by men) might be structured in ways that appeal to 'thinkers' and deter 'feelers'. To reduce the gender gap in EA, we would have to make the community be more appealing to 'feelers' (if women are indeed disproportionately 'feelers').

Comment by michael_wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-23T16:37:06.165Z · score: 0 (0 votes) · EA · GW

For example, the system of animal agriculture and animal product consumption is pretty complex, but ACE have done a great job

But they didn't use complex systems theory, did they? They just used the regular EA framework of impact/tractability/neglectedness.

Comment by michael_wiebe on Effective Altruism Paradigm vs Systems Change Paradigm (post 2/3) · 2017-10-23T16:28:38.408Z · score: 0 (0 votes) · EA · GW

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability weighted theory of change for the system as a whole.

I'd be interested in seeing this. Do you have anything written up?

Comment by michael_wiebe on Why to Optimize Earth? (post 1/3) · 2017-10-23T16:21:18.308Z · score: 0 (0 votes) · EA · GW

'Spillover' is a common term in economics; I'm using it interchangeably with 'externalities' and 'how causes affect other causes'.

'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.

Spillovers can be simple or complex; nothing in the definition says they have to be "neatly attributed". But you're right, long-term flow-through effects can be massive. They're also incredibly difficult to estimate. If you're able to improve on our ability to estimate them, using complexity theory, then more power to you.

Comment by michael_wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-23T16:07:37.758Z · score: 2 (2 votes) · EA · GW

And if your disagreement is with the scale/tractability/neglectedness framework, then argue against that directly.

Comment by michael_wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-22T18:20:47.200Z · score: 2 (4 votes) · EA · GW

In general, a cause needs to score high on each of impact, tractability, and neglectedness to be worthwhile; a cause that scores very low on any one factor has low marginal utility per dollar overall, so getting two out of three is no better than zero out of three. You've listed causes with high impact, but they're generally not tractable. For example, changing the political system is highly intractable.

Overall, I think that EA has already incorporated the key insights from systems change, and there's no need to distinguish it as being separate from EA.

Comment by michael_wiebe on Effective Altruism Paradigm vs Systems Change Paradigm (post 2/3) · 2017-10-22T17:48:25.512Z · score: 4 (4 votes) · EA · GW

I think the marginal vs. total distinction is confused. Maximizing personal impact, while taking into account externalities (as EAs do), will be equivalent to maximizing collective impact.

An Effective Altruist, by focusing on impact at the margin, may ask questions such as: What impact will my next $100 donation make in this charity vs that charity?

It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues. But this is a strawman. Even if an individual makes a $100 donation, the cause they're donating to can still target a systemic issue. In any case, there are now EAs making enormous donations: "What if you were in a position to give away billions of dollars to improve the world? What would you do with it?"

This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.

This is pretty mystical.