Posts

Formalizing longtermism 2020-09-16T05:00:04.351Z · score: 9 (4 votes)
Michael_Wiebe's Shortform 2020-08-19T19:19:58.481Z · score: 3 (1 votes)
Formalizing the cause prioritization framework 2019-11-05T18:09:24.746Z · score: 24 (22 votes)

Comments

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-29T17:51:13.690Z · score: 1 (1 votes) · EA · GW

is also a laughable proposition in the real world

Sure, but not even close to the same extent.

Comment by michael_wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-25T03:37:02.930Z · score: 1 (1 votes) · EA · GW

I guess the problem is that $V = \infty$ is nonsensical. We can talk about $V \to \infty$, but not equality.

Comment by michael_wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-23T16:54:17.213Z · score: 1 (1 votes) · EA · GW

Yes, I'm saying that it happens to be the case that, in practice, fanatical tradeoffs never come up.

Furthermore, you'd have to assign $p = 0$ when $V = \infty$, which means perfect certainty in an empirical claim, which seems wrong.

Hm, doesn't claiming $V = \infty$ also require perfect certainty? Ie, to know that $V$ is literally infinite rather than some large number.

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-23T09:29:51.838Z · score: 1 (1 votes) · EA · GW

What is ? It seems all the work is being done by having  in the exponent.

Comment by michael_wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-23T08:46:52.748Z · score: 3 (2 votes) · EA · GW

How about this: fanaticism is fine in principle, but in practice we never face any actual fanatical choices. For any action with extremely large value $V$, we estimate $p < 1/V$, so that the expected value is less than 1, and we ignore these actions based on standard EV reasoning.
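Spelling out the arithmetic:

$$\mathbb{E}[\text{value}] = p \cdot V < \frac{1}{V} \cdot V = 1.$$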

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-23T05:31:30.683Z · score: 6 (5 votes) · EA · GW

Will says:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-19T15:51:50.454Z · score: 3 (2 votes) · EA · GW

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, $A$ and $B$, with endowments $e_A = 5$ (with probability 1) and $e_B = 10$ with probability $p$ (and $e_B = 0$ with probability $1-p$). So $B$ either gets nothing or twice as much as $A$.

We choose a transfer $t$ from $A$ to $B$ to solve:

$$\max_{t \geq 0}\; u(e_A - t) + p\, u(e_B + t) + (1-p)\, u(t).$$

For a baseline, consider $u(c) = \ln(c)$ and $p = 0.5$. Then we get an interior optimal transfer of $t^* \approx 1.8$. Intuitively, as $p \to 1$, $t^* \to 0$ (if $B$ gets 10 for sure, don't make any transfer from $A$ to $B$), and as $p \to 0$, $t^* \to 2.5$ (if $B$ gets 0 for sure, split $A$'s endowment equally).

So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of $p$?

Suppose we think $p \sim F$, for some distribution $F$ over $[0,1]$. If we maximize expected utility, the problem becomes:

$$\max_{t \geq 0}\; u(e_A - t) + \mathbb{E}_F[p]\, u(e_B + t) + (1 - \mathbb{E}_F[p])\, u(t).$$

Since the objective function is linear in probabilities, we end up with the same problem as before, except with $\mathbb{E}_F[p]$ instead of $p$. If we know the mean of $F$, we plug it in and solve as before.

So it turns out that this form of uncertainty doesn't change the problem very much.
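A quick numerical check of these comparative statics (a minimal sketch, using the numbers above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

e_A, e_B = 5.0, 10.0  # A's sure endowment; B's endowment in the good state

def optimal_transfer(p):
    # Planner chooses t to maximize u(e_A - t) + p*u(e_B + t) + (1-p)*u(t),
    # with log utility; the bounds keep all consumption strictly positive.
    def neg_welfare(t):
        return -(np.log(e_A - t) + p * np.log(e_B + t) + (1 - p) * np.log(t))
    return minimize_scalar(neg_welfare, bounds=(1e-9, e_A - 1e-9),
                           method="bounded").x

print(optimal_transfer(0.5))    # baseline: interior optimum (~1.8)
print(optimal_transfer(0.999))  # p -> 1: transfer -> 0
print(optimal_transfer(0.001))  # p -> 0: transfer -> e_A/2 = 2.5
```

Replacing $p$ with $\mathbb{E}_F[p]$ leaves the function unchanged, which is exactly the linearity point.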

Questions:
- if we don't know the mean of $F$, is the problem simply intractable? Should we resort to maxmin utility?
- what if we have a hyperprior over the mean of $F$? Do we just take another level of expectations, and end up with the same solution?
- how does a stochastic dominance decision theory work here?

Comment by michael_wiebe on Formalizing longtermism · 2020-09-18T19:54:26.759Z · score: 1 (1 votes) · EA · GW

Do you think Will's three criteria are inconsistent with the informal definition I used in the OP ("what most matters about our actions is their very long term effects")?

Comment by michael_wiebe on Formalizing longtermism · 2020-09-18T06:46:21.199Z · score: 1 (1 votes) · EA · GW

In my setup, I could say $\sum_{t=0}^{T} M_t N_t u(c_t) \approx 0$ for some large $T$; ie, generations $t \leq T$ contribute basically nothing to total social utility $V$. But I don't think this captures longtermism, because this is consistent with the social planner allocating no resources to safety work (and all resources to consumption of the current generation); the condition puts no constraints on $x^*$. In other words, this condition only matches the first of three criteria that Will lists:

(i) Those who live at future times matter just as much, morally, as those who live today;

(ii) Society currently privileges those who live today above those who will live in the future; and

(iii) We should take action to rectify that, and help ensure the long-run future goes well.

Comment by michael_wiebe on Modelling the odds of recovery from civilizational collapse · 2020-09-18T06:16:28.786Z · score: 5 (3 votes) · EA · GW

I'm a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I'd expect the confidence intervals to be massive.

I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.

Comment by michael_wiebe on Formalizing longtermism · 2020-09-18T00:50:30.262Z · score: 1 (1 votes) · EA · GW

and also a world where shorttermism is true

On Will's definition, longtermism and shorttermism are mutually exclusive.

Comment by michael_wiebe on Formalizing longtermism · 2020-09-17T22:20:48.057Z · score: 1 (1 votes) · EA · GW

Suppose you're taking a one-off action $a$, and then you get a (discounted) reward

I'm a bit confused by this setup. Do you mean that $a$ is analogous to $x_0$, the allocation for $t = 0$? If so, what are you assuming about the allocations for $t > 0$? In my setup, I can compare $x_0$ to $x_0^*$, so we're comparing against the optimal allocation, holding fixed $\{x_t^*\}_{t > 0}$.

 where $T$ is some large number.

I'm not sure this works. Consider: this condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there's no need for present generations to care about improving the future. (Or are you defining $U_t$ as the marginal utility of $a$ on generation $t$, as opposed to the utility level of generation $t$ under $a$?)

Comment by michael_wiebe on Formalizing longtermism · 2020-09-17T06:08:05.596Z · score: 1 (1 votes) · EA · GW

My model here is riffing on Jones (2016); you might look there for solving the model.

Re infinite utility, Jones does say (fn 6): "As usual, [the discount rate] must be sufficiently large given growth so that utility is finite."

Comment by michael_wiebe on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-14T19:51:42.552Z · score: 3 (2 votes) · EA · GW
  • Assumption Based Planning – having a written version of an organization’s plans, then identifying load-bearing assumptions and assessing the vulnerability of the plan to each assumption.
  • Exploratory Modeling – rather than trying to model all available data to predict the most likely outcome, these models map out a wide range of assumptions and show how different assumptions lead to different consequences.
  • Scenario planning [2] – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust [3] to all options.

Can you clarify how these tools are distinct? My (ignorant) first impression is that they just boil down to "use critical thinking".

Comment by michael_wiebe on Hedging against deep and moral uncertainty · 2020-09-14T18:47:13.800Z · score: 1 (1 votes) · EA · GW

Re algebra, are you defending the numbers you gave as reasonable? Otherwise, if we're just making up numbers, might as well do the general case.

Comment by michael_wiebe on Keynesian Altruism · 2020-09-13T19:58:26.939Z · score: 20 (9 votes) · EA · GW

Would 'countercyclical altruism' also capture this view?

Comment by michael_wiebe on Hedging against deep and moral uncertainty · 2020-09-13T17:09:41.561Z · score: 4 (3 votes) · EA · GW

I think this would be easier to explain with a two-sector model: ie, just two causes, $A$ and $B$. Also, would it be easier to just work with algebra? Ie, general parameters instead of specific numbers.

Assuming a budget of 6 units

How does this fit with the numbers in the post? That's 10 units, no?

I will assume, for simplicity, constant marginal cost-effectiveness across each domain/effect/worldview

It's worth emphasizing that this assumption rules out the diminishing returns case for diversifying; this is a feature, since we want to isolate the uncertainty-case for diversifying.

Comment by michael_wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T15:19:44.729Z · score: 2 (2 votes) · EA · GW

One version of the phase change model that I think is worth highlighting: S-curve growth.

Basically, the set of transformative innovations is finite, and we discovered most of them over the past 200 years. Hence, the Industrial Revolution was a period of fast technological growth, but that growth will end as we run out of innovations. The hockey-stick graph will level out and become an S-curve, as growth slows back toward zero.
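A standard functional form for such an S-curve is the logistic:

$$A(t) = \frac{K}{1 + e^{-r(t - t_0)}},$$

which grows near-exponentially at first and levels off as $A(t) \to K$, the finite stock of transformative innovations.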

Comment by michael_wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T14:58:42.572Z · score: 1 (1 votes) · EA · GW

Although, is it the case that growth(GDP) increased during the modern era (ie, has growth(population) been rising)? My recollection is that the IR was a structural break, with growth(GDP per capita) jumping from 0.5% to 2% (or something).

Comment by michael_wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T14:43:45.084Z · score: 1 (1 votes) · EA · GW

Right, growth(GDP) > growth(GDP per capita) when growth(population)>0.
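This is just the growth-accounting identity:

$$g_{\text{GDP}} = g_{\text{GDP per capita}} + g_{\text{population}}.$$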

Comment by michael_wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T06:51:13.237Z · score: 10 (3 votes) · EA · GW

while the author agrees that growth rates have been increasing in the modern era (roughly, the Industrial Revolution and everything after)

I think this is a misunderstanding. The common view is that the growth rate has been constant in the modern era.

Comment by michael_wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T01:07:20.569Z · score: 1 (1 votes) · EA · GW

Robert Gordon has argued for a coming growth slowdown: paper, book.

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-12T19:07:02.673Z · score: 2 (2 votes) · EA · GW

My model here is based on the same Jones (2016) paper.

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-12T17:50:50.235Z · score: 3 (2 votes) · EA · GW

This model focuses on extinction risk; another approach would look at trajectory changes.

Also, it might be interesting to incorporate Phil Trammell's work on optimal timing/giving-now vs giving-later. Eg, maybe the optimal solution involves the planner saving resources to be invested in safety work in the future.

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-09-12T17:46:08.165Z · score: 4 (3 votes) · EA · GW

Longtermism is defined as holding that "what most matters about our actions is their very long term effects". What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.

A model of a longtermist social planner

Consider an infinitely-lived representative agent with population size $N_t$. In each period there is a risk of extinction via an extinction rate $\delta_t$.

The basic idea is that economic growth is a double-edged sword: it increases our wealth, but also increases the risk of extinction. In particular, 'consumption research' develops new technologies $A_t$, and these technologies increase both consumption and extinction risk.

Here are the production functions for consumption and consumption technologies.
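For concreteness, one Jones (2016)-style parameterization (these exact functional forms are an assumption, chosen for illustration):

$$c_t = \frac{A_t^{\sigma} L_{At}}{N_t}, \qquad \dot{A}_t = A_t^{\phi} S_{At},$$

where $L_{At}$ is the number of consumption workers and $S_{At}$ the number of consumption scientists.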

However, we can also develop safety technologies to reduce extinction risk. Safety research produces new safety technologies $B_t$, which are used to produce 'safety goods' $h_t$.

Specifically (again with illustrative functional forms):
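$$h_t = \frac{B_t^{\sigma} L_{Bt}}{N_t}, \qquad \dot{B}_t = B_t^{\phi} S_{Bt},$$

where $L_{Bt}$ is the number of safety workers and $S_{Bt}$ the number of safety scientists.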

The extinction rate is $\delta_t = \delta(A_t, h_t)$, where the number $A_t$ of consumption technologies directly increases risk, and the quantity $h_t$ of safety goods directly reduces it.
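For example, one simple assumed form is a power-law hazard:

$$\delta_t = \bar{\delta}\, \frac{A_t^{\alpha}}{h_t^{\beta}}, \qquad \alpha, \beta > 0.$$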

Let $M_t = \prod_{s=0}^{t-1}(1 - \delta_s)$ denote the probability that humanity survives to period $t$.

Now we can set up the social planner problem: choose the number of scientists (vs workers), the number of safety scientists (vs consumption scientists), and the number of safety workers (vs consumption workers) to maximize social welfare. That is, the planner is choosing an allocation of workers for all generations, $x = \{x_t\}_{t=0}^{\infty}$, where $x_t = (S_{At}, S_{Bt}, L_{At}, L_{Bt})$.

The social welfare function is:

$$V(x) = \sum_{t=0}^{\infty} M_t\, N_t\, u(c_t).$$

The planner maximizes utility over all generations ($t = 0$ to $\infty$), weighting by population size $N_t$, and accounting for extinction risk via the survival probability $M_t$. The optimal allocation $x^*$ is the allocation that maximizes social welfare.

The planner discounts using the Ramsey equation, $r = \delta + \gamma g$, where we have the discount rate $r$, the exogenous extinction risk $\delta$, risk-aversion $\gamma$ (i.e., diminishing marginal utility), and the growth rate $g$. (Note that $g$ could be time-varying.)

Here there is no pure time preference; the planner values all generations equally. Weighting by population size means that this is a total utilitarian planner.

Defining longtermism

With the model set up, now we can define longtermism formally. Recall the informal definition that "what most matters about our actions is their very long term effects". Here are two ways that I think longtermism can be formalized in the model:

(1) The optimal allocation in our generation, $x_0^*$, should be focused on safety work: the majority (or at least a sizeable fraction) of workers should be in safety research or production, and only a minority in consumption research or production. (Or, the same should hold for $x_t^*$ for small values of $t$, to capture that the next few generations need to work on safety.) This is saying that our time has high hingeyness due to existential risks. It's also saying that safety work is currently uncrowded and tractable.

(2) Small deviations from $x_0^*$ (the optimal allocation in our generation) will produce large decreases in total social welfare $V$, driven by generations $t > 1000$ (or some large number). In other words, our actions today have very large effects on the long-term future. We could plot $M_t N_t u(c_t)$ against $t$ for $x^*$ and some suboptimal alternative $x'$, and show that welfare under $x'$ is much smaller than under $x^*$ in the tail.
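Here is a minimal simulation sketch of that comparison (all functional forms and parameter values are illustrative assumptions, with a single constant safety share standing in for the full allocation $x$):

```python
import numpy as np

T = 500
sigma, d0 = 0.5, 1e-4   # assumed output elasticity; hazard scale
g_A = g_B = 0.02        # assumed research productivity

def welfare_flow(safety_share):
    """Simulate the per-period welfare flow M_t * N_t * u(c_t)."""
    A = B = 1.0
    M = 1.0  # survival probability to date
    flow = []
    for t in range(T):
        c = A**sigma * (1 - safety_share)      # consumption per capita
        h = B**sigma * safety_share            # safety goods
        delta = min(d0 * A / max(h, 1e-9), 1)  # hazard rises with A, falls with h
        flow.append(M * np.log(1 + c))         # N_t normalized to 1
        M *= 1 - delta
        A *= 1 + g_A * (1 - safety_share)      # consumption R&D
        B *= 1 + g_B * safety_share            # safety R&D
    return np.array(flow)

star, alt = welfare_flow(0.3), welfare_flow(0.05)
print(star[-100:].sum(), alt[-100:].sum())  # tail welfare: the low-safety path collapses
```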

While longtermism has an intuitive foundation (being intergenerationally neutral or having zero pure time preference), the commonly-used definition makes strong assumptions about tractability and hingeyness.

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-08-27T23:07:22.339Z · score: 5 (3 votes) · EA · GW

Crowdedness by itself is uninformative. A cause could be uncrowded because it is improperly overlooked, or because it is intractable. Merely knowing that a cause is uncrowded shouldn't lead you to make any updates.

Comment by michael_wiebe on Some history topics it might be very valuable to investigate · 2020-08-27T21:40:25.620Z · score: 3 (2 votes) · EA · GW

Good list! For next steps, I'd like to see one-pager research proposals, detailing gaps in the literature and the value-added of new work.

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-24T16:49:54.895Z · score: 3 (2 votes) · EA · GW

Yes, I meant an example of someone using $f$ in this way. It doesn't seem to be standard in welfare economics.

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-24T02:25:08.167Z · score: 1 (1 votes) · EA · GW

Hm, I've never seen the use of $f$ like that. Can you point to an example?

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-22T19:57:24.007Z · score: 1 (1 votes) · EA · GW

Tangent:

this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion

What do you mean by this? Isn't risk aversion just a fact about the utility function? You can maximize expected utility no matter how the utility function is shaped.

Comment by michael_wiebe on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T20:18:55.830Z · score: 3 (2 votes) · EA · GW
The notion of long-term vs. short-term existential risk appears to provide a compelling argument for prioritizing longtermist institutional reform over x-risk reduction.

I think a portfolio approach is helpful here. Obviously the overall EA portfolio is going to assign nonzero shares to both short-term and long-term risks (with shares determined by equalizing marginal utility per dollar across causes). This framing avoids fights over which cause is the "top priority".

Comment by michael_wiebe on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T19:59:53.556Z · score: 1 (1 votes) · EA · GW
if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.

Presumably the marginal cost is increasing as the level of risk falls. So I don't think this is true in general.
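(For reference, the quoted doubling arithmetic is just the mean of a geometric distribution: with a constant per-century extinction probability $p$,

$$\mathbb{E}[\text{centuries survived}] = \sum_{k=1}^{\infty} k\, p\, (1-p)^{k-1} = \frac{1}{p},$$

so halving $p$ doubles the expected length.)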

Comment by michael_wiebe on Michael_Wiebe's Shortform · 2020-08-19T19:20:09.742Z · score: 3 (2 votes) · EA · GW

We need to drop the term "neglected". Neglectedness is crowdedness relative to importance, and the everyday meaning is "improperly overlooked". So it's more precise to refer to crowdedness ($ spent) and importance separately. Moreover, saying that a cause is uncrowded has a different connotation than saying that a cause is neglected. A cause could be uncrowded because it is overlooked, or because it is intractable; if the latter, it doesn't warrant more attention. But a neglected cause warrants more attention by definition.

More: https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-18T18:48:13.817Z · score: 2 (2 votes) · EA · GW

Thanks for the reply. I'm a jaded PhD student, but I am open to updating towards research-optimism.

I would distinguish research from implementation of research. I agree that there seems to be low-hanging fruit in implementing best practices, but I think implementation can be a super difficult problem in its own right. (See the state capacity literature.)

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-18T07:21:29.719Z · score: 13 (6 votes) · EA · GW

More generally, research isn't magic. Hiring a researcher and having them work 9-5 is no guarantee of solving a problem. You write:

What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past? [...]
I think there needs to be much better research into how to make complex decisions despite high uncertainty.

Isn't it obvious that allocating researcher hours to these questions would be a waste of money? Almost by definition, we can't have good evidence that we can impact the long-run (ie. centuries) trajectory of humanity, because we haven't been collecting data for that long. And making complex decisions under high uncertainty will always be incredibly difficult; in the best case scenario, more research might yield small improvements in decision-making.

Comment by michael_wiebe on The case of the missing cause prioritisation research · 2020-08-18T07:12:29.305Z · score: 19 (11 votes) · EA · GW

I don't share your optimistic view of research. You write:

it is reasonable to think that research would make progress because:
Very little research has been done on this so far.

That's because cause prioritization research is extremely difficult, not because no one has thought to do this.

Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.

Survivorship bias: what about all of the difficult subjects where we couldn't make any progress and gave up?

Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.

No, we should try if the expected returns are better than the next alternative. What if we've already hit diminishing returns?

Comment by michael_wiebe on Growth and the case against randomista development · 2020-02-24T00:37:29.686Z · score: 1 (1 votes) · EA · GW

Do you think that affects the conclusion about diminishing returns?

Comment by michael_wiebe on Growth and the case against randomista development · 2020-01-19T18:40:14.331Z · score: 18 (6 votes) · EA · GW

We should disaggregate down to the level of specific funding opportunities. Eg, suppose the top three interventions for hits-based development are {funding think tanks in developing countries, funding academic research, charter cities} with corresponding MU/$ {1000, 200, 100}. Suppose it takes $100M to fully fund developing-country think tanks, after which there's a large drop in MU/$ (moving to the next intervention, academic research). In this case, despite economic development being a huge problem area, we do see diminishing returns at the intervention level within the range of the EA budget.
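As a toy sketch of this logic (the MU/$ figures are the hypothetical ones above; the capacities and budget are made up):

```python
# (name, MU per dollar, room for more funding in $)
opportunities = [
    ("developing-country think tanks", 1000, 100e6),
    ("academic research", 200, 500e6),   # capacity is made up
    ("charter cities", 100, 500e6),      # capacity is made up
]
budget = 150e6  # made-up EA budget for this cause

allocation = {}
for name, mu, capacity in sorted(opportunities, key=lambda o: -o[1]):
    spend = min(budget, capacity)
    if spend > 0:
        allocation[name] = spend
    budget -= spend

print(allocation)
# -> think tanks get their full $100M; the remaining $50M goes to
#    academic research, where MU/$ has dropped from 1000 to 200.
```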

Comment by michael_wiebe on Growth and the case against randomista development · 2020-01-17T22:11:58.396Z · score: 3 (3 votes) · EA · GW
there's no guarantee that growth wins

It's not binary, though. Think of the intermediate micro utility maximization problem: you allocate your budget across goods until marginal utility per dollar is equalized. With diminishing marginal utility, you generally will spread your budget across multiple goods.
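Formally, the equimarginal condition:

$$\frac{MU_1}{p_1} = \frac{MU_2}{p_2} = \cdots = \frac{MU_n}{p_n} = \lambda,$$

where $p_i$ is the price of good $i$ and $\lambda$ is the marginal utility of the budget.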

Similarly, we should expect to allocate the EA budget across a portfolio of causes. Yes, it's possible that one cause has the highest MU/$, and that diminishing returns won't affect anything in the range of our budget (ie, after spending our entire budget on that cause, it still has the highest MU/$), but I see no reason to assume this is the default case.

More here.

Comment by michael_wiebe on Growth and the case against randomista development · 2020-01-17T21:54:39.182Z · score: 14 (4 votes) · EA · GW

Note that RCTs are still a minority in published academic research. I think Pritchett's criticism is that NGOs have been dominated by randomistas; eg, even the International Growth Centre does a lot of RCTs, instead of following his preferred growth diagnostics approach.

Comment by michael_wiebe on Growth and the case against randomista development · 2020-01-17T21:44:04.786Z · score: 9 (4 votes) · EA · GW

I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc. In contrast, 'frontier' growth in developed countries is based on technological innovation, and is potentially more dangerous.

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-24T18:10:49.862Z · score: 1 (1 votes) · EA · GW

I guess I'm expecting diminishing returns to be an important factor in practice, so I wouldn't place much weight on an analysis that excludes crowdedness.

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-16T21:49:05.067Z · score: 1 (1 votes) · EA · GW

Hi Justin, thanks for the comment.

I'm in favor of reducing the complexity of the framework, but I'm not sure if this is the right way to do it. In particular, estimating "importance only" or "importance and tractability only" isn't helpful, because all three factors are necessary for calculating MU/$. A cause that scores high on I and T could be low MU/$ overall, due to being highly crowded. Or is your argument that the variance (across causes) in crowdedness is negligible, and therefore we don't need to account for diminishing returns in practice?

Comment by michael_wiebe on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-16T18:41:55.807Z · score: 1 (1 votes) · EA · GW
That means that we can't conclude afterwards whether the intervention worked. Instead, we need theories of change, and surveys of corruption, and second-order estimates of the impact based on that. In short, we won't find out if our work helped.

This seems too strong. We can't conclude with certainty whether the intervention worked, and we won't find out with certainty if our work helped. But we will have some information.

Comment by michael_wiebe on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-16T18:24:47.171Z · score: 4 (2 votes) · EA · GW
But on review the track record doesn't imply these interventions failed, exactly. They were not found to be ineffective or harmful.

Another factor to consider: a cause area could be highly cost-effective, but GiveWell rejected it because the organizations working in that area were not sufficiently transparent or competent.

Comment by michael_wiebe on Why and how to start a for-profit company serving emerging markets · 2019-11-10T02:47:48.806Z · score: 7 (3 votes) · EA · GW
Even if you’re in an Anglophone country, you’ll need to be “bilingual” between local and tech-startup norms. At Wave, our internal culture emphasizes honesty, transparency and autonomy, which is very different from a typical, say, Senegalese work environment.

I'm curious to hear more about this. Can you give some examples of how the norms differ?

More generally, how feasible is it to export Silicon Valley's high product standards?

Comment by michael_wiebe on Overview of Capitalism and Socialism for Effective Altruism · 2019-11-08T07:41:21.490Z · score: 4 (3 votes) · EA · GW

This China scholar is pessimistic about the recent pivot to more state intervention.

https://cscc.sas.upenn.edu/podcasts/2019/04/12/ep-17-diagnosing-chinas-state-led-capitalism-yasheng-huang

Comment by michael_wiebe on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-11-08T07:15:14.467Z · score: 1 (1 votes) · EA · GW

I don't see that IMR poses any challenge to the standard EA cause prioritization method. IMR can be easily modeled as a tractability function that is increasing for some part of its domain. Depending on funding levels, causes with IMR can have the highest marginal utility per dollar, and hence would be prioritized according to the standard framework.
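For example (an assumed functional form), logistic tractability

$$T(x) = \frac{1}{1 + e^{-k(x - x_0)}}$$

exhibits increasing marginal returns for spending $x < x_0$ and diminishing returns thereafter; a cause sitting on the steep part of such a curve can still have the highest MU/$.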

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-07T01:45:57.033Z · score: 1 (1 votes) · EA · GW

Yes, the difficult part is applying the ITC framework in practice; I don't have any special insight there. But the goal is to estimate importance and the tractability function for different causes.

You can see how 80k tries to rank causes here.

Comment by michael_wiebe on Formalizing the cause prioritization framework · 2019-11-07T00:08:43.948Z · score: 0 (2 votes) · EA · GW

The google docs method worked, but you can't control image size.

I'm now using imgur, which should be recommended somewhere here for authors.