Posts

Formalizing longtermism 2020-09-16T05:00:04.351Z
Michael_Wiebe's Shortform 2020-08-19T19:19:58.481Z
Formalizing the cause prioritization framework 2019-11-05T18:09:24.746Z

Comments

Comment by Michael_Wiebe on Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill) · 2021-05-04T17:25:30.996Z · EA · GW

What's your take on this argument:

"Why do we need longtermism? Let's just do the usual approach of evaluating interventions based on their expected marginal utility per dollar. If the best interventions turn out to be aimed at the short-term or long-term, who cares?"

Comment by Michael_Wiebe on Strong Longtermism, Irrefutability, and Moral Progress · 2021-05-03T22:49:33.226Z · EA · GW

Coming from an economics background, here's how to persuade me of longtermism:

Set up a social planner problem with infinite generations and solve for the optimal allocation in each period. Do three cases:

  • A planner with nonzero time preference and perfect information
  • A (longtermist) planner with zero time preference and perfect information
  • A planner with zero time preference and imperfect information

Would the third planner ignore the utility of all generations less than 1000 years in the future? If so, then you've proved strong longtermism.
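For intuition on the first two cases, here's a toy sketch (log utility, a fixed consumption budget divided across generations, and no production or information structure — all simplifying assumptions of mine):

```python
import numpy as np

def planner_allocation(beta, T, budget=1.0):
    """A planner with discount factor beta splits a fixed budget across
    T generations: max sum_t beta^t * ln(c_t)  s.t.  sum_t c_t = budget.
    The first-order condition gives c_t proportional to beta^t."""
    weights = beta ** np.arange(T)
    return budget * weights / weights.sum()

T = 1000
impatient = planner_allocation(beta=0.95, T=T)   # nonzero time preference
longtermist = planner_allocation(beta=1.0, T=T)  # zero time preference

# Share of the budget going to generations 100+:
print(impatient[100:].sum())    # ≈ 0.006: the far future is essentially ignored
print(longtermist[100:].sum())  # 0.9: the far future dominates
```

The open question is the third case: whether imperfect information about far-future payoffs pushes the zero-time-preference planner back toward something like the impatient allocation.
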

Comment by Michael_Wiebe on If you value future people, why do you consider near term effects? · 2020-11-18T02:58:30.300Z · EA · GW

the long-term effects of these actions probably dominate. But we don’t know what the long-term effects of many interventions are [...]

To me, it makes more sense, even if you’re focused on traditionally near-termist causes like mental health, animal welfare, and global poverty, to evaluate interventions based on their long-term effects.

This just seems like a nonstarter. If our estimates of long-term effects are massively uncertain, how can they possibly be action-guiding?

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-18T02:52:22.315Z · EA · GW

Or, (3'): if we can't calculate EV_t for actions a_1 and a_2 beyond some period T, then assume that they're equal from T onwards, and rank them by using their expected value over periods before T.

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-18T02:46:55.703Z · EA · GW

Is this longtermism?

  1. List all possible actions {a_1, ..., a_n}.
  2. For each action a_i, calculate the expected value EV(a_i) over t = 1:∞, using the social welfare function.
  3. If we can't calculate EV_t(a_i) for some t, due to cluelessness, then skip that action.
  4. Out of the remaining actions, choose the one with the highest expected value.

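Spelled out as a literal procedure (the horizon, numbers, and action names are placeholders of mine):

```python
import numpy as np

# Hypothetical actions with per-period expected values; np.nan marks a
# period where cluelessness prevents any estimate. (A finite horizon
# stands in for t = 1:infinity.)
actions = {
    "a1": [1.0, 2.0, 3.0],
    "a2": [5.0, np.nan, 1.0],  # clueless about period 2
    "a3": [2.0, 2.0, 3.0],
}

def choose(actions):
    # Step 3: skip any action with an incalculable period.
    evaluable = {a: vs for a, vs in actions.items() if not any(np.isnan(vs))}
    # Step 4: among the rest, pick the highest total expected value.
    return max(evaluable, key=lambda a: sum(evaluable[a]))

print(choose(actions))  # a3: a2 is skipped despite its high period-1 value
```

This makes the oddity of step 3 concrete: a2 is discarded even though its calculable periods look best.
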
Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-18T02:36:50.819Z · EA · GW

So longtermism is not a general decision theory, and is only meant to be applied narrowly?

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-18T02:35:30.560Z · EA · GW

Is the idea that once these longtermist interventions are fully funded (diminishing returns), we start looking at short-term interventions?

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-15T02:48:03.280Z · EA · GW

perhaps we'd do better to focus on different interventions: ones whose effects on the further future are more predictable

What's the decision theory here? 

Consider a two-action, two-period model: we know the effect of action A1 in t1 but not in t2, while we know the effect of A2 in both periods. Is the suggestion to do A2 (rather than A1) because we have more information on the effect of A2?

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-14T23:38:19.599Z · EA · GW

Also, this seems like a bad decision theory. I can't estimate the long-term effects of eating an apple, but that doesn't imply that I should starve due to indecision.

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-14T23:20:28.683Z · EA · GW

Isn't Response 5 (go longtermist) really a subset of Response 4 (Ignore things that we can't even estimate)? It proposes to ignore shorttermist interventions, because we can't estimate their effects.

Comment by Michael_Wiebe on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-14T22:40:57.143Z · EA · GW

I don’t know whether donating to AMF makes the world better or worse.

What's your distribution for the value of donating to AMF?

Comment by Michael_Wiebe on How much does a vote matter? · 2020-11-05T17:25:15.202Z · EA · GW

I don't get it.

Comment by Michael_Wiebe on How much does a vote matter? · 2020-11-02T18:06:39.004Z · EA · GW

When you decide whether to vote you don't decide just for yourself, but rather you decide (roughly speaking) for everyone who is similar to you.

What does this mean? If I'm in the voting booth, and I suddenly decide to leave the ballot blank, how does that affect anyone else?

Comment by Michael_Wiebe on Can we drive development at scale? An interim update on economic growth work · 2020-10-28T21:32:11.307Z · EA · GW

Some thoughts regarding your uncertainties:

What kind of research would additional funding support? What are the major unanswered questions in this space researchers would tackle?

One answer: labor markets, firms, and monetary policy in developing countries.

How valuable is research at the current margin?

I think one of the main benefits would be the collection of new datasets, which would allow us to identify the most important problems (and figure out how to solve them).

What sort of policies are most likely to affect a country’s growth rate?

One answer, building on the Washington Consensus stuff you cited: having an economist "in the room" to prevent the president from implementing a policy that would cause hyperinflation.

Comment by Michael_Wiebe on Can we drive development at scale? An interim update on economic growth work · 2020-10-28T21:21:04.340Z · EA · GW

Establishing a causal link between policy advocacy and policy change is ~~challenging~~ basically impossible.

Fixed.

Comment by Michael_Wiebe on Can we drive development at scale? An interim update on economic growth work · 2020-10-28T20:13:02.948Z · EA · GW

Given the utter lack of any sort of unifying government on this planet, I think we have enough players as is.

It seems plausible that it would have helped to have more rich countries capable of lobbying against the nuclear arms race (in terms of reducing x-risk).

Comment by Michael_Wiebe on Can we drive development at scale? An interim update on economic growth work · 2020-10-28T20:07:00.015Z · EA · GW

On the practical side, one approach we find compelling is the idea that researchers or advocates could do important preparatory studies so that they have evidence and recommendations ready to go when opportunities for change arise. For example, on the 80,000 Hours podcast Rachel Glennerster suggested that researchers are currently playing such a role in Ethiopia.

Another example is the continental free trade zone currently being set up in Africa.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-10-27T17:30:36.791Z · EA · GW

How much do non-nuclear countries exert control over nuclear weapons? How would the US-Soviet arms race have been different if, say, African countries were all as rich as the US, and could lobby against reckless accumulation of nuclear weapons?

Comment by Michael_Wiebe on Leopold Aschenbrenner returns to X-risk and growth · 2020-10-25T23:03:34.564Z · EA · GW

That graph represents the comparison of increased growth (gray line) to some baseline (black).

Instead, I'm talking about the case where the dot is at the far left side of the graph: we're currently at a low level of risk, and continuing at the baseline growth rate means 'climbing the risk mountain' before we get to the other side with lower risk. If the peak is very high (relative to the dot), then it's not clear that continued (or accelerated) growth is optimal; stagnation might be better.

Comment by Michael_Wiebe on Leopold Aschenbrenner returns to X-risk and growth · 2020-10-23T18:23:55.243Z · EA · GW

What if the dot (representing where we are) was further to the left? In that case, it's not so clear that we want to speed through and incur the increased risk.

Comment by Michael_Wiebe on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T04:46:35.954Z · EA · GW

Good point calling out EA Munich's citing of the Slate article. We should have outright rejected their writeup so long as it contained this citation.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-10-16T16:50:31.201Z · EA · GW

So far, the effective altruist strategy for global poverty has followed a high-certainty, low-reward approach. GiveWell only looks at charities with a strong evidence base, such as bednets and cash transfers. But there's also a low-certainty, high-reward approach: promote catch-up economic growth. Poverty is strongly correlated with economic development (urbanization, industrialization, etc), so encouraging development would have large effects on poverty. Whereas cash transfers offer a large probability of a small effect, economic growth offers a small probability of a large effect. (In general, we should diversify across high- and low-risk strategies.) In short, can we do “hits-based development”?

How can we affect growth? Tractability is the main problem for hits-based development, since GDP growth rates are notoriously difficult to change. However, there are a few promising options. One specific mechanism is to train developing-country economists, who can then work in developing-country governments and influence policy. Lant Pritchett gives the example of a think tank in India that influenced its liberalizing reforms, which preceded a large growth episode. This translates into a concrete goal: get X economists working in government in every developing country (where X might be proxied by the number in developed countries). Note that local experts are more likely than foreign World Bank advisors to positively affect growth, since they have local knowledge of culture, politics, law, etc.

I will focus on two instruments for achieving this goal: funding scholarships for developing-country scholars to get PhDs in economics, and funding think tanks and universities in developing countries. First, there are several funding sources within economics for developing-country students, such as Econometric Society scholarships, CEGA programs, and fee waivers at conferences. I will map out this funding space, contacting departments and conference organizers, and determine if more money could be used profitably. For example, are conference fees a bottleneck for developing-country researchers? Would earmarked scholarships make economics PhD programs accept more developing-country students? (We have to be careful in designing the funding mechanism, so that recipients don’t simply reduce funding elsewhere.) Next, I will organize fundraisers, so that donors have a ‘one-click’ opportunity to give money to hits-based development. (This might take the form of small recurring donations, or larger funding drives, or an endowment.) Then I will advertise these donation opportunities to effective altruists and others who want to promote hits-based development. (One potential large funder is the EA Global Health and Development Fund.)

My second approach is based on funding developing-country think tanks. Recently, IDRC led the Think Tank Initiative (TTI), which funded over 40 think tanks in 20 countries over 2009-2019. This program has not been renewed. My first step here would be to analyze the effectiveness of the TTI, and figure out whether it deserves to be renewed. While causal effects are hard to estimate, it seems reasonable to measure the number of think tanks, their progress under the program, and their effects on policy. To do this I will interview think tank employees, development experts, and the TTI organizers. Next I will determine what funding exists for renewing the program, as well as investigate whether a decentralized funding approach would work.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-10-15T20:44:53.667Z · EA · GW

The initial claim is that for any action, we can assess its normative status by looking at its long-run effects. This is a much stronger claim than yours.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-10-15T20:43:02.031Z · EA · GW

Why don't models of intelligence explosion assume diminishing marginal returns? In the model below, what are the arguments for assuming a constant exponent φ ≥ 1 in the idea production function, rather than diminishing marginal returns (eg, φ < 1)? With diminishing returns, an AI can only improve itself at a diminishing rate, so we don't get a singularity.


https://www.nber.org/papers/w23928.pdf
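To see why the exponent does all the work, here's a quick simulation assuming idea production takes the reduced form dA/dt = A^φ (my simplification; the paper's models are richer):

```python
import numpy as np

def simulate(phi, steps=200, dt=0.01):
    """Euler-integrate dA/dt = A**phi starting from A = 1."""
    A = 1.0
    for _ in range(steps):
        A += dt * A ** phi
        if A > 1e12:
            return np.inf  # effectively a finite-time singularity
    return A

print(simulate(phi=0.5))  # ≈ 4: polynomial growth, no explosion
print(simulate(phi=1.0))  # ≈ 7.3: ordinary exponential growth
print(simulate(phi=2.0))  # inf: growth explodes before t = 2
```

With diminishing returns (φ < 1), self-improvement never takes off; the finite-time explosion requires φ > 1.
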

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-29T17:51:13.690Z · EA · GW

is also a laughable proposition in the real world

Sure, but not even close to the same extent.

Comment by Michael_Wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-25T03:37:02.930Z · EA · GW

I guess the problem is that p = 1/∞ is nonsensical. We can talk about p < 1/V for finite V, but not equality.

Comment by Michael_Wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-23T16:54:17.213Z · EA · GW

Yes, I'm saying that it happens to be the case that, in practice, fanatical tradeoffs never come up.

Furthermore, you'd have to assign p = 0 when V = ∞, which means perfect certainty in an empirical claim, which seems wrong.

Hm, doesn't claiming V = ∞ also require perfect certainty? Ie, to know that V is literally infinite rather than some large number.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-23T09:29:51.838Z · EA · GW

What is ? It seems all the work is being done by having  in the exponent.

Comment by Michael_Wiebe on Expected value theory is fanatical, but that's a good thing · 2020-09-23T08:46:52.748Z · EA · GW

How about this: fanaticism is fine in principle, but in practice we never face any actual fanatical choices. For any actions with extremely large value V, we estimate p < 1/V, so that the expected value is <1, and we ignore these actions based on standard EV reasoning.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-23T05:31:30.683Z · EA · GW

Will says:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-19T15:51:50.454Z · EA · GW

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments e_A = 5 (with probability 1) and e_B = 10 with probability p (and 0 with probability 1 − p). So B either gets nothing or twice as much as A.

We choose a transfer τ from A to B to solve:

max_τ u(e_A − τ) + p·u(e_B + τ) + (1 − p)·u(τ)

For a baseline, consider u = log and p = 0.5. Then we get an optimal transfer of τ* = 2.5(√3 − 1) ≈ 1.83. Intuitively, as p → 1, τ* → 0 (if B gets 10 for sure, don't make any transfer from A to B), and as p → 0, τ* → 2.5 (if B gets 0 for sure, split A's endowment equally).

So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of ?

Suppose we think p ~ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes:

max_τ u(e_A − τ) + E_F[p]·u(e_B + τ) + (1 − E_F[p])·u(τ)

Since the objective function is linear in probabilities, we end up with the same problem as before, except with E_F[p] instead of p. If we know the mean of F, we plug it in and solve as before.

So it turns out that this form of uncertainty doesn't change the problem very much.

Questions:
- if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
- how does a stochastic dominance decision theory work here?
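A numerical check of this setup (assuming log utility, e_A = 5, and e_B = 10 with probability p):

```python
import numpy as np

def optimal_transfer(p, eA=5.0, eB=10.0):
    """Grid-search the planner problem with log utility:
    max_tau  ln(eA - tau) + p*ln(eB + tau) + (1 - p)*ln(tau)."""
    taus = np.linspace(1e-6, eA - 1e-6, 100_000)
    welfare = np.log(eA - taus) + p * np.log(eB + taus) + (1 - p) * np.log(taus)
    return taus[np.argmax(welfare)]

print(optimal_transfer(p=0.5))  # ≈ 1.83
print(optimal_transfer(p=1.0))  # ≈ 0: B is rich for sure, so don't transfer
print(optimal_transfer(p=0.0))  # 2.5: B gets nothing, so split A's endowment

# Uncertainty over p: the objective is linear in p, so only E[p] matters.
p_draws = np.random.default_rng(0).beta(2, 2, size=10_000)  # some prior over p
print(optimal_transfer(p_draws.mean()))  # ≈ optimal_transfer(0.5)
```

The last two lines illustrate the point above: under expected utility, a prior over p collapses to its mean, so this form of uncertainty barely changes the problem.
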

Comment by Michael_Wiebe on Formalizing longtermism · 2020-09-18T19:54:26.759Z · EA · GW

Do you think Will's three criteria are inconsistent with the informal definition I used in the OP ("what most matters about our actions is their very long term effects")?

Comment by Michael_Wiebe on Formalizing longtermism · 2020-09-18T06:46:21.199Z · EA · GW

In my setup, I could say that generations after T account for almost all of total social utility V, for some large T; ie, generations before T contribute basically nothing to V. But I don't think this captures longtermism, because it is consistent with the social planner allocating no resources to safety work (and all resources to consumption of the current generation); the condition puts no constraints on the allocation x_0. In other words, this condition only matches the first of the three criteria that Will lists:

(i) Those who live at future times matter just as much, morally, as those who live today;

(ii) Society currently privileges those who live today above those who will live in the future; and

(iii) We should take action to rectify that, and help ensure the long-run future goes well.

Comment by Michael_Wiebe on Modelling the odds of recovery from civilizational collapse · 2020-09-18T06:16:28.786Z · EA · GW

I'm a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I'd expect the confidence intervals to be massive.

I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.

Comment by Michael_Wiebe on Formalizing longtermism · 2020-09-18T00:50:30.262Z · EA · GW

and also a world where shorttermism is true

On Will's definition, longtermism and shorttermism are mutually exclusive.

Comment by Michael_Wiebe on Formalizing longtermism · 2020-09-17T22:20:48.057Z · EA · GW

Suppose you're taking a one-off action , and then you get (discounted) reward 

I'm a bit confused by this setup. Do you mean that the action is analogous to x_0, the allocation for t = 0? If so, what are you assuming about x_t for t > 0? In my setup, I can compare x_0 to x*_0, so we're comparing against the optimal allocation, holding fixed the allocations for t > 0.

 where  is some large number.

I'm not sure this works. Consider: this condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there's no need for present generations to care about improving the future. (Or are you defining the reward as the marginal utility of the action on generation t, as opposed to the utility level of generation t under the action?)

Comment by Michael_Wiebe on Formalizing longtermism · 2020-09-17T06:08:05.596Z · EA · GW

My model here is riffing on Jones (2016); you might look there for solving the model.

Re infinite utility, Jones does say (fn 6): "As usual, γ must be sufficiently large given growth so that utility is finite."

Comment by Michael_Wiebe on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-14T19:51:42.552Z · EA · GW
  • Assumption Based Planning – writing down an organization's plans, identifying the load-bearing assumptions, and assessing the vulnerability of the plan to each assumption.
  • Exploratory Modeling – rather than modeling all available data to predict the most likely outcome, these models map out a wide range of assumptions and show how different assumptions lead to different consequences.
  • Scenario planning [2] – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust [3] to all options.

Can you clarify how these tools are distinct? My (ignorant) first impression is that they just boil down to "use critical thinking".

Comment by Michael_Wiebe on Hedging against deep and moral uncertainty · 2020-09-14T18:47:13.800Z · EA · GW

Re algebra, are you defending the numbers you gave as reasonable? Otherwise, if we're just making up numbers, might as well do the general case.

Comment by Michael_Wiebe on Keynesian Altruism · 2020-09-13T19:58:26.939Z · EA · GW

Would 'countercyclical altruism' also capture this view?

Comment by Michael_Wiebe on Hedging against deep and moral uncertainty · 2020-09-13T17:09:41.561Z · EA · GW

I think this would be easier to explain with a two-sector model: ie, just two causes. Also, would it be easier to just work with algebra, rather than specific numbers?

Assuming a budget of 6 units

How does this fit with ? That's 10 units, no?

I will assume, for simplicity, constant marginal cost-effectiveness across each domain/effect/worldview

It's worth emphasizing that this assumption rules out the diminishing returns case for diversifying; this is a feature, since we want to isolate the uncertainty-case for diversifying.

Comment by Michael_Wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T15:19:44.729Z · EA · GW

One version of the phase change model that I think is worth highlighting: S-curve growth.

Basically, the set of transformative innovations is finite, and we discovered most of them over the past 200 years. Hence, the Industrial Revolution was a period of fast technological growth, but that growth will end as we run out of innovations. The hockey-stick graph will level out and become an S-curve as the pool of remaining innovations is exhausted.
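A minimal way to see this, assuming a logistic path for the stock of innovations (my stand-in for the phase-change story):

```python
import numpy as np

t = np.linspace(-10, 10, 2001)
A = 1 / (1 + np.exp(-t))        # S-curve: cumulative transformative innovations
growth = np.gradient(A, t) / A  # growth rate A'/A

# From inside the takeoff, A looks like a hockey stick; but the growth
# rate steadily falls toward zero as the innovation pool is exhausted.
print(growth[0], growth[1000], growth[-1])  # ≈ 1.0, 0.5, 0.0
```
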

Comment by Michael_Wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T14:58:42.572Z · EA · GW

Although, is it the case that growth(GDP) increased during the modern era (ie, has growth(population) been rising)? My recollection is that the IR was a structural break, with the growth rate jumping from 0.5% to 2% (or something).

Comment by Michael_Wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T14:43:45.084Z · EA · GW

Right, growth(GDP) > growth(GDP per capita) when growth(population)>0.

Comment by Michael_Wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T06:51:13.237Z · EA · GW

while the author agrees that growth rates have been increasing in the modern era (roughly, the Industrial Revolution and everything after)

I think this is a misunderstanding. The common view is that the growth rate has been constant in the modern era.

Comment by Michael_Wiebe on Does Economic History Point Toward a Singularity? · 2020-09-13T01:07:20.569Z · EA · GW

Robert Gordon has argued for a coming growth slowdown: paper, book.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-12T19:07:02.673Z · EA · GW

My model here is based on the same Jones (2016) paper.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-12T17:50:50.235Z · EA · GW

This model focuses on extinction risk; another approach would look at trajectory changes.

Also, it might be interesting to incorporate Phil Trammell's work on optimal timing/giving-now vs giving-later. Eg, maybe the optimal solution involves the planner saving resources to be invested in safety work in the future.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-09-12T17:46:08.165Z · EA · GW

Longtermism is defined as holding that "what most matters about our actions is their very long term effects". What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.

A model of a longtermist social planner

Consider an infinitely-lived representative agent with population size N_t. In each period there is a risk of extinction, via an extinction rate δ_t.

The basic idea is that economic growth is a double-edged sword: it increases our wealth, but also increases the risk of extinction. In particular, 'consumption research' develops new technologies A_t, and these technologies increase both consumption and extinction risk.

Here are the production functions for consumption and consumption technologies:

c_t = A_t ℓ_{c,t},   ΔA_t = A_t^φ s_{c,t},

where ℓ_{c,t} is the number of consumption workers and s_{c,t} the number of consumption scientists.

However, we can also develop safety technologies to reduce extinction risk. Safety research produces new safety technologies B_t, which are used to produce 'safety goods' h_t.

Specifically,

h_t = B_t ℓ_{h,t},   ΔB_t = B_t^φ s_{h,t}.

The extinction rate is δ_t = δ(A_t, h_t), where the number A_t of consumption technologies directly increases risk, and the quantity h_t of safety goods directly reduces it.

Let P_t = ∏_{τ=0}^{t} (1 − δ_τ), the probability of surviving through period t.

Now we can set up the social planner problem: choose the number of scientists (vs workers), the number of safety scientists (vs consumption scientists), and the number of safety workers (vs consumption workers) to maximize social welfare. That is, the planner is choosing an allocation of workers for all generations:

x = {(s_{c,t}, s_{h,t}, ℓ_{c,t}, ℓ_{h,t})} for t = 0, 1, 2, ...

The social welfare function is:

V = Σ_{t=0}^∞ P_t N_t u(c_t)

The planner maximizes utility over all generations (t = 0 to ∞), weighting by population size N_t, and accounting for extinction risk via the survival probability P_t. The optimal allocation x* is the allocation that maximizes social welfare.

The planner discounts using r = δ + γg (the Ramsey equation), where we have the discount rate r, the exogenous extinction risk δ, risk-aversion γ (i.e., diminishing marginal utility), and the growth rate g. (Note that these could be time-varying.)

Here there is no pure time preference; the planner values all generations equally. Weighting by population size means that this is a total utilitarian planner.
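To make the planner's objective concrete, here's a toy numerical version; every functional form and parameter below is an illustrative assumption of mine, not part of the model above:

```python
import numpy as np

def welfare(safety_share, T=500, g=0.02, delta0=0.02):
    """Social welfare V = sum_t P_t * N_t * u(c_t), with N_t normalized
    to 1. Illustrative forms: consumption growth scales with the
    non-safety share of workers, and the per-period extinction hazard
    falls with safety effort."""
    t = np.arange(T)
    c = (1 + g * (1 - safety_share)) ** t      # consumption path
    delta = delta0 / (1 + 10 * safety_share)   # extinction hazard
    P = (1 - delta) ** (t + 1)                 # survival probability
    return (P * np.log(1 + c)).sum()

shares = np.linspace(0, 1, 101)
best = shares[np.argmax([welfare(s) for s in shares])]
print(best)  # an interior share: some, but not all, workers do safety
```

Under these made-up parameters the optimum is interior: the growth–risk tradeoff pushes the planner to put substantial but not all labor into safety.
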

Defining longtermism

With the model set up, now we can define longtermism formally. Recall the informal definition that "what most matters about our actions is their very long term effects". Here are two ways that I think longtermism can be formalized in the model:

(1) The optimal allocation in our generation, x*_0, should be focused on safety work: the majority (or at least a sizeable fraction) of workers should be in safety research or production, and only a minority in consumption research or production. (Or, x*_t should be focused on safety for small values of t, to capture that the next few generations need to work on safety.) This is saying that our time has high hingeyness due to existential risks. It's also saying that safety work is currently uncrowded and tractable.

(2) Small deviations from x*_0 (the optimal allocation in our generation) will produce large decreases in total social welfare V, driven by generations t > 1000 (or some large number). In other words, our actions today have very large effects on the long-term future. We could plot period welfare P_t N_t u(c_t) against t for x* and some suboptimal alternative x′, and show that welfare under x′ is much smaller than under x* in the tail.

While longtermism has an intuitive foundation (being intergenerationally neutral or having zero pure time preference), the commonly-used definition makes strong assumptions about tractability and hingeyness.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2020-08-27T23:07:22.339Z · EA · GW

Crowdedness by itself is uninformative. A cause could be uncrowded because it is improperly overlooked, or because it is intractable. Merely knowing that a cause is uncrowded shouldn't lead you to make any updates.