Posts

Longtermism as Effective Altruism 2022-07-29T00:56:46.930Z
How would you draw the Venn diagram of longtermism and neartermism? 2022-05-25T04:34:30.987Z
Longtermist slogans that need to be retired 2022-05-09T01:07:36.779Z
Solving the replication crisis (FTX proposal) 2022-04-25T21:04:06.674Z
Hits-based development: funding developing-country economists 2022-01-01T00:28:05.519Z
Formalizing longtermism 2020-09-16T05:00:04.351Z
Michael_Wiebe's Shortform 2020-08-19T19:19:58.481Z
Formalizing the cause prioritization framework 2019-11-05T18:09:24.746Z

Comments

Comment by Michael_Wiebe on Simple BOTEC on X-Risk Work for Neartermists · 2022-12-04T02:27:41.194Z · EA · GW

How much would I personally have to reduce X-risk to make this the optimal decision? Well, that’s simple. We just calculate: 

  • 25 billion * X = 20,000 lives saved
  • X = 20,000 / 25 billion
  • X = 0.0000008 
  • That’s 0.00008% in x-risk reduction for a single individual.

I'm not sure I follow this exercise. Here's how I'm thinking about it:

Option A: spend your career on malaria.

  • Cost: one career
  • Payoff: save 20k lives with probability 1.

Option B: spend your career on x-risk.

  • Cost: one career
  • Payoff: save 25B lives with probability p (=P(prevent extinction)), save 0 lives with probability 1-p. 
    • Expected payoff: 25B*p.

Since the costs are the same, we can ignore them. Then you're indifferent between A and B if p=8x10^-7, and B is better if p>8x10^-7.
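To spell out the arithmetic in a quick sketch (variable names are mine, numbers from the post):

lives_option_a = 20_000            # Option A: lives saved with certainty
lives_option_b = 25_000_000_000    # Option B: lives saved if extinction is prevented

p_breakeven = lives_option_a / lives_option_b
print(p_breakeven)                 # 8e-07, i.e. 0.00008%

# Option B has higher expected value whenever p exceeds the break-even probability:
p = 1e-6
print(p * lives_option_b > lives_option_a)   # True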

But I'm not sure how this maps to a reduction in P(extinction).

Comment by Michael_Wiebe on Simple BOTEC on X-Risk Work for Neartermists · 2022-12-02T19:39:12.827Z · EA · GW

How much would I personally have to reduce X-risk to make this the optimal decision?

Shouldn't this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p>20000/25B. But this isn't answering your question of personally reducing x-risk.

Also, I think you should calculate marginal expected value, ie., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns.
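For example, with a completely made-up functional form for how risk responds to the number of careers already allocated:

import numpy as np

LIVES_AT_STAKE = 25e9
BASELINE_RISK = 0.1

def risk(n_careers, k=1e-4):
    # Illustration only: assume x-risk decays exponentially in careers allocated.
    return BASELINE_RISK * np.exp(-k * n_careers)

def marginal_ev(n_careers):
    # Expected lives saved by one additional career, given n_careers already working on it.
    return (risk(n_careers) - risk(n_careers + 1)) * LIVES_AT_STAKE

print(marginal_ev(0))        # first career: large
print(marginal_ev(50_000))   # after 50k careers: far smaller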

Comment by Michael_Wiebe on Air Pollution: Founders Pledge Cause Report · 2022-10-05T22:21:35.605Z · EA · GW

Adding to the causal evidence, there's a 2019 paper that uses wind direction as an instrumental variable for PM2.5. They find that the IV estimates are larger than the OLS estimates, implying that observational studies are biased downwards:

Comparing the OLS estimates to the IV estimates in Tables 2 and 3 provides strong evidence that observational studies of the relationship between air pollution and health outcomes suffer from significant bias: virtually all our OLS estimates are smaller than the corresponding IV estimates. If the only source of bias were classical measurement error, which causes attenuation, we would not expect to see significantly negative OLS estimates. Thus, other biases, such as changes in economic activity that are correlated with both hospitalization patterns and pollution, appear to be a concern even when working with high-frequency data.

They also compare their results to the epidemiology literature:

To facilitate comparison to two studies from the epidemiological literature with settings similar to ours, we have also estimated the effect of PM 2.5 on one-day mortality and hospitalizations [...] Using data from 27 large US cities from 1997 to 2002, Franklin, Zeka, and Schwartz (2007) reports that a 10 μg/m3 increase in daily PM 2.5 exposure increases all-cause mortality for those aged 75 and above by 1.66 percent. Our one-day IV estimate for 75+ year-olds [...] is an increase of 2.97 percent [...] 

On the hospitalization side, Dominici et al. (2006) uses Medicare claims data from US urban counties from 1999 to 2002 and finds an increase in elderly hospitalization rates associated with a 10 μg/m3 increase in daily PM 2.5 exposure ranging from 0.44 percent (for ischemic heart disease hospitalizations) to 1.28 percent (for heart failure hospitalizations). We estimate that a 10 μg/m3 increase in daily PM 2.5 increases one-day all-cause hospitalizations by 2.22 percent [...], which is 70 percent larger than the heart failure estimate and over five times larger than the ischemic heart disease estimate. Overall, these comparisons suggest that observational studies may systematically underestimate the health effects of acute pollution exposure.
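To make the "IV > OLS implies downward bias" logic concrete, here's a toy simulation of my own (not the paper's data or code):

import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 100_000, 2.0

wind = rng.normal(size=n)                        # instrument: shifts pollution, not health directly
pollution = 0.8 * wind + rng.normal(size=n)      # true PM2.5 exposure
confounder = rng.normal(size=n)                  # e.g. local economic activity
observed_pm = pollution + rng.normal(size=n) + 0.5 * confounder   # mismeasured and confounded
health_damage = beta_true * pollution - confounder + rng.normal(size=n)

# OLS of health damage on observed PM2.5: attenuated by measurement error and biased by the confounder
b_ols = np.cov(observed_pm, health_damage)[0, 1] / np.var(observed_pm)

# IV (Wald estimator) with wind as the instrument: cov(z, y) / cov(z, x)
b_iv = np.cov(wind, health_damage)[0, 1] / np.cov(wind, observed_pm)[0, 1]

print(b_ols, b_iv)   # OLS comes in well below the true effect of 2.0; IV recovers ~2.0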

Comment by Michael_Wiebe on Enlightenment Values in a Vulnerable World · 2022-07-18T19:32:55.180Z · EA · GW

Related, John von Neumann on x-risk:

Finally and, I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization. Only if those disasters that we fear had already occurred, only if humanity were already completely disillusioned about technological civilization, could such a step be taken. But not even the disasters of recent wars have produced that degree of disillusionment, as is proved by the phenomenal resiliency with which the industrial way of life recovered even—or particularly—in the worst-hit areas. The technological system retains enormous vitality, probably more than ever before, and the counsel of restraint is unlikely to be heeded.

What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions. [...] Under present conditions it is unreasonable to expect a novel cure-all. For progress there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment.

Comment by Michael_Wiebe on Should you still use the ITN framework? [Red Teaming Contest] · 2022-07-14T22:14:08.198Z · EA · GW

I didn't suggest otherwise.

Comment by Michael_Wiebe on Should you still use the ITN framework? [Red Teaming Contest] · 2022-07-14T18:53:59.616Z · EA · GW

It sounds like you're arguing that we should estimate 'good done/additional resources' directly (via Fermi estimates), instead of indirectly using the ITN framework. But shouldn't these give the same answer?
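At least with the 80,000 Hours-style definitions (my paraphrase), the product telescopes into exactly that quantity:

good done / additional resources
  = (good done / % of problem solved)                   [importance]
  × (% of problem solved / % increase in resources)     [tractability]
  × (% increase in resources / additional resources)    [neglectedness]

So any gap between the direct and indirect estimates would have to come from estimating the three factors inconsistently.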

Comment by Michael_Wiebe on Should you still use the ITN framework? [Red Teaming Contest] · 2022-07-14T18:47:14.893Z · EA · GW

And even when you can multiply the three quantities together, I feel like speaking in terms of importance, neglectedness and tractability might make you feel that there is no total ordering of intervention (“some have higher importance, some have higher tractability, whether you prefer one or the other is a matter a personal taste”)

I don't follow this. If you multiply I*T*N and get 'good done/additional resources', how is that not an ordering?

Comment by Michael_Wiebe on It's OK not to go into AI (for students) · 2022-07-14T17:09:48.133Z · EA · GW

There seems to be an "intentions don't matter, results do" lesson that's relevant here. Intending to solve AI alignment is secondary, and doesn't mean that you're making progress on the problem.

And we don't want people saying "I'm working on AI" just for the social status, if that's not their comparative advantage and they're not actually being productive.

Comment by Michael_Wiebe on Person-affecting intuitions can often be money pumped · 2022-07-11T16:31:40.373Z · EA · GW

Hm, then I find necessitarianism quite strange. In practice, how do we identify people who exist regardless of our choices?

Comment by Michael_Wiebe on An epistemic critique of longtermism · 2022-07-10T19:47:25.932Z · EA · GW

The longtermist claim is that because humans could in theory live for hundreds of millions or billions of years, and we have potential to get the risk of extinction very almost to 0, the biggest effects of our actions are almost all in how they affect the far future. Therefore, if we can find a way to predictably improve the far future this is likely to be, certainly from a utilitarian perspective, the best thing we can do.

I don't find this framing very useful. The importance-tractability-crowdedness framework gives us a sophisticated method for evaluating causes (allocate resources according to marginal utility per dollar), which is flexible enough to account for diminishing returns as funding increases.

But the longtermist framework collapses this down to a binary: is this the best intervention or not?
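As a toy illustration of the difference (utility functions and numbers invented):

# Allocate each marginal dollar to whichever cause currently has the highest
# marginal utility per dollar; log utility gives diminishing returns.
def marginal_utility(cause, funding):
    scale = {'malaria': 1.0, 'x-risk': 5.0}[cause]   # invented scale factors
    return scale / (1.0 + funding)                   # derivative of scale * log(1 + funding)

budget, step = 100.0, 0.1
allocation = {'malaria': 0.0, 'x-risk': 0.0}
for _ in range(round(budget / step)):
    best = max(allocation, key=lambda c: marginal_utility(c, allocation[c]))
    allocation[best] += step

print(allocation)   # an interior split (~84/16 here), not a binary "fund only the best cause"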

Comment by Michael_Wiebe on An epistemic critique of longtermism · 2022-07-10T19:32:19.194Z · EA · GW

Because of this heavy tailed distribution of interventions

Is it actually heavy-tailed? It looks like an ordered bar chart, not a histogram, so it's hard to tell what the tails are like.

Comment by Michael_Wiebe on Announcing the Center for Space Governance · 2022-07-10T19:16:41.316Z · EA · GW

Zach and Kelly Weinersmith are writing a book on space settlement. Might be worth reaching out to them.

Comment by Michael_Wiebe on Fanatical EAs should support very weird projects · 2022-07-09T05:02:31.000Z · EA · GW

What do you think of the Bayesian solution, where you shrink your EV estimate towards a prior (thereby avoiding the fanatical outcomes)?
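E.g. with a normal prior and a noisy normal estimate (numbers made up), the shrinkage looks like:

# Posterior mean is a precision-weighted average of the prior and the wild EV estimate.
prior_mean, prior_var = 1.0, 1.0
estimate, estimate_var = 1e6, 1e12

w = prior_var / (prior_var + estimate_var)     # weight on the noisy estimate
posterior_mean = w * estimate + (1 - w) * prior_mean
print(posterior_mean)   # ~1.000001: the fanatical estimate gets shrunk almost entirely away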

Comment by Michael_Wiebe on When Giving People Money Doesn't Help · 2022-07-09T04:37:06.069Z · EA · GW

The three groups have completely converged by the end of the 180 day period

I find this surprising. Why don't the treated individuals stay on a permanently higher trajectory? Do they have a social reference point, and since they're ahead of their peers, they stop trying as hard?

Comment by Michael_Wiebe on Person-affecting intuitions can often be money pumped · 2022-07-08T16:46:12.117Z · EA · GW

Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?

Comment by Michael_Wiebe on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-07-05T16:45:03.948Z · EA · GW

I wonder if we can back out what assumptions the 'peace pact' approach is making about these exchange rates. They are making allocations across cause areas, so they are implicitly using an exchange rate.

Comment by Michael_Wiebe on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-07-04T05:33:49.509Z · EA · GW

I get the weak impression that worldview diversification (partially) started as an approximation to expected value, and ended up being more of a peace pact between different cause areas. This peace pact disincentivizes comparisons between giving in different cause areas, which then leads to getting their marginal values out of sync. 

Do you think there's an optimal 'exchange rate' between causes (eg. present vs future lives, animal vs human lives), and that we should just do our best to approximate it? 

Comment by Michael_Wiebe on Global Health & Development - Beyond the Streetlight · 2022-07-03T03:12:05.943Z · EA · GW

Have you seen this?

Comment by Michael_Wiebe on Kurzgesagt - The Last Human (Longtermist video) · 2022-06-28T20:44:14.418Z · EA · GW

If we don't kill ourselves in the next few centuries or millennia, almost all humans that will ever exist will live in the future.

The idea is that, after a few millennia, we'll have spread out enough to reduce extinction risks to ~0?

Comment by Michael_Wiebe on Results of a survey of international development professors on EA · 2022-06-23T16:45:31.271Z · EA · GW

Nice work! Sounds like movement building is very important.

Comment by Michael_Wiebe on Longtermist slogans that need to be retired · 2022-06-21T21:35:06.854Z · EA · GW

Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?

Comment by Michael_Wiebe on Longtermist slogans that need to be retired · 2022-06-21T20:06:51.950Z · EA · GW

I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.

What do you think about MacAskill's claim that "there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear."?

Comment by Michael_Wiebe on Longtermist slogans that need to be retired · 2022-06-21T19:49:47.478Z · EA · GW

Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?

Comment by Michael_Wiebe on Critiques of EA that I want to read · 2022-06-20T17:53:56.562Z · EA · GW

Also, how are you defining "longtermist" here? You seem to be using it to mean "focused on x-risk".

Comment by Michael_Wiebe on Critiques of EA that I want to read · 2022-06-20T17:47:48.826Z · EA · GW

I think that these factors might be making it socially harder to be a non-longtermist who engages with the EA community, and that is an important and missing part of the ongoing discussion about EA community norms changing.

Although note that Will MacAskill supports lead elimination from a broad longtermist perspective:

Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund this and it’s no downside as well. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-14T21:17:57.895Z · EA · GW

But again, whether non-extinction catastrophe or extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets, and will agree on policy. It's only when the probabilities are tiny that you get differences in optimal policy.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-14T16:50:03.833Z · EA · GW

Appreciate your support!

Comment by Michael_Wiebe on The value of x-risk reduction · 2022-06-13T23:01:13.386Z · EA · GW

Using α = β = 0.5 in risk = Φ(-K^α · L^β) is assuming constant returns to scale. If you have α + β < 1, you get diminishing returns.

Messing around with some python code:

from scipy.stats import norm

def risk_reduction(K, L, alpha, beta):
    """Print x-risk, expected value (1/risk), and the gain from doubling K."""
    risk = norm.cdf(-(K**alpha) * (L**beta))           # P(x-risk) = Phi(-K^alpha * L^beta)
    risk_2x = norm.cdf(-((2 * K)**alpha) * (L**beta))  # same, with capital doubled
    print('risk:', risk)
    print('expected value:', 1 / risk)
    print('risk (2x):', risk_2x)
    print('expected value (2x):', 1 / risk_2x)
    print('ratio:', (1 / risk_2x) / (1 / risk))

# Constant returns to scale (alpha + beta = 1)
K, L = 0.5, 0.5
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)

# Diminishing returns (alpha + beta < 1)
K, L = 0.5, 0.5
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

# More labor, diminishing returns
K, L = 0.5, 20
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

# More labor, constant returns to scale
K, L = 0.5, 20
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)

Comment by Michael_Wiebe on The value of x-risk reduction · 2022-06-13T22:16:59.795Z · EA · GW

Are you using risk = Φ(-K^α · L^β)?

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-13T21:30:45.816Z · EA · GW

Agreed, that's another angle. NTs will only have a small difference between non-extinction-level catastrophes and extinction-level catastrophes (eg. a nuclear war where 1000 people survive vs one that kills everyone), whereas LTs will have a huge difference between NECs and ECs.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-13T17:05:18.180Z · EA · GW

I agree that it's a difficult problem, but I'm not sure that it's impossible.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-13T05:40:56.098Z · EA · GW

Yes, I think of EA as optimally allocating a budget to maximize social welfare, analogous to the constrained utility maximization problem in intermediate microeconomics. 

The worldview diversification problem is in putting everything in common units (eg. comparing human and animal lives, or comparing current and future lives). Uncertainty over these 'exchange rates' translates into uncertainty in our optimal budget allocation.
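A stylized version of that last point, with log utilities and an invented exchange rate r between the two units:

# With utility log(h) + r * log(B - h), where h is spending on cause 1, B the budget,
# and r the exchange rate on cause 2, the optimum is h* = B / (1 + r).
def optimal_share_cause_1(r, budget=1.0):
    return budget / (1.0 + r)

for r in [0.01, 0.1, 1.0]:
    print(r, optimal_share_cause_1(r))
# The optimal split swings from ~99% to 50% as r varies, so exchange-rate
# uncertainty maps directly into allocation uncertainty.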

Comment by Michael_Wiebe on What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) · 2022-06-13T05:34:22.670Z · EA · GW

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-12T23:20:45.045Z · EA · GW

Yes, it sounds like MacAskill's motivation is about PR and community health ("getting people out of bed in the morning"). I think it's important to note when we're funding things because of direct expected value, vs these indirect effects.

Comment by Michael_Wiebe on Michael_Wiebe's Shortform · 2022-06-12T23:00:14.505Z · EA · GW

Does longtermism vs neartermism boil down to cases of tiny probabilities of x-risk? 

When P(x-risk) is high, then both longtermists and neartermists max out their budgets on it. We have convergence.

When P(x-risk) is low, then the expected value is low for neartermists (since they only care about the next ~few generations) and high for longtermists (since they care about all future generations). Here, longtermists will focus on x-risks, while neartermists won't.
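A crude numerical version of this (all numbers invented):

near_term_lives = 8e9             # roughly everyone alive today
long_term_lives = 1e15            # stand-in for all future generations
budget = 1e9                      # dollars available
cost_per_life_neartermist = 5e3   # benchmark cost per life for top global health charities

lives_from_global_health = budget / cost_per_life_neartermist   # 200,000

for p in [0.1, 1e-9]:             # P(the budget averts extinction)
    nt_value = p * near_term_lives
    lt_value = p * long_term_lives
    print(p, nt_value > lives_from_global_health, lt_value > lives_from_global_health)
# At p = 0.1 both camps prefer x-risk work; at p = 1e-9 only the longtermist does.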

Comment by Michael_Wiebe on AI Could Defeat All Of Us Combined · 2022-06-11T22:16:08.995Z · EA · GW

Do we know the expected cost for training an AGI? Is that within a single company's budget?

Comment by Michael_Wiebe on The dangers of high salaries within EA organisations · 2022-06-11T21:50:56.124Z · EA · GW

As you note, the key is being able to precisely select applicants based on altruism:

This tension also underpins a frequent argument made by policymakers that extrinsic rewards should be kept low so as to draw in agents who care sufficiently about delivering services per se. A simple conceptual framework makes precise that, in line with prevailing policy concerns, this attracts applicants who are less prosocial conditional on a given level of talent. However, since the outside option is increasing in talent, adding career benefits will draw in more talented individuals, and the marginal, most talented applicant in both groups will have the highest prosociality. Intuitively, since a candidate with high ability will also have a high outside option, if they are applying for the health worker position it must be because they are highly prosocial. The treatment effect on recruited candidates will therefore depend on how candidates are chosen from the pool. If applicants are drawn randomly, there might be a trade-off between talent and prosociality. However, if only the most talented are hired, there will be no trade-off.

Comment by Michael_Wiebe on The dangers of high salaries within EA organisations · 2022-06-11T21:43:03.620Z · EA · GW

Why does your graph have financial motivation as the y-axis? Isn't financial motivation negatively correlated with altruism, by definition? In other words, financial motivation and altruism are opposite ends of a one-dimensional spectrum.

I would've put talent on the y-axis, to illustrate the tradeoff between talent and altruism.

Comment by Michael_Wiebe on The dangers of high salaries within EA organisations · 2022-06-11T21:12:35.847Z · EA · GW

So perhaps EA orgs can raise salaries and attract more-talented-yet-equally-committed workers. (Though this effect would depend on the level of the salary.)

Comment by Michael_Wiebe on AI Could Defeat All Of Us Combined · 2022-06-11T21:00:22.387Z · EA · GW

Let C be the computing power used to train the model. Is the idea that "if you could afford C to train the model, then you can also afford C for running models"? 

Because that doesn't seem obvious. What if you used 99% of your budget on training? Then you'd only be able to afford ~C/99 for running models.

Or is this just an example to show that training costs >> running costs?
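Concretely, for the 99% case (toy numbers):

budget = 100.0                      # total compute budget
training_cost = 0.99 * budget       # the C spent on training
remaining = budget - training_cost  # what's left for running copies

print(remaining / training_cost)    # ~0.0101, i.e. only about C/99 left for inference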

Comment by Michael_Wiebe on The dangers of high salaries within EA organisations · 2022-06-11T20:44:00.277Z · EA · GW

Related:

"Losing Prosociality in the Quest for Talent? Sorting, Selection, and Productivity in the Delivery of Public Services"
By Nava Ashraf, Oriana Bandiera, Edward Davenport, and Scott S. Lee

Abstract:

We embed a field experiment in a nationwide recruitment drive for a new health care position in Zambia to test whether career benefits attract talent at the expense of prosocial motivation. In line with common wisdom, offering career opportunities attracts less prosocial applicants. However, the trade-off exists only at low levels of talent; the marginal applicants in treatment are more talented and equally prosocial. These are hired, and perform better at every step of the causal chain: they provide more inputs, increase facility utilization, and improve health outcomes including a 25 percent decrease in child malnutrition.

https://ashrafnava.files.wordpress.com/2021/11/aer.20180326.pdf

Comment by Michael_Wiebe on AI Could Defeat All Of Us Combined · 2022-06-11T20:36:48.953Z · EA · GW

Basically, is the computing power for training a fixed cost or a variable cost? If it's a fixed cost, then there's no further cost to using the same computing power to run models afterward.

Comment by Michael_Wiebe on AI Could Defeat All Of Us Combined · 2022-06-11T20:34:58.793Z · EA · GW

once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.

How does computing power work here? Is it:

  1. We use a supercomputer to train the AI, then the supercomputer is just sitting there, so we can use it to run models. Or:
  2. We're renting a server to do the training, and then have to rent more servers to run the models.

In (2), we might use up our whole budget on the training, and then not be able to afford to run any models.

Comment by Michael_Wiebe on AGI Ruin: A List of Lethalities · 2022-06-09T17:46:30.080Z · EA · GW

Great comment. Perhaps it would be helpful to explicitly split the analysis by assumptions about takeoff speed? It seems that conditional on takeoff speed, there's not much disagreement.

Comment by Michael_Wiebe on Potatoes: A Critical Review · 2022-06-08T15:25:10.364Z · EA · GW

This paper makes that point about linear regressions in general.

Comment by Michael_Wiebe on Four Concerns Regarding Longtermism · 2022-06-08T03:10:22.669Z · EA · GW

Re: discount factor, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.

See: https://www.cambridge.org/core/journals/economics-and-philosophy/article/discounting-for-public-policy-a-survey/4CDDF711BF8782F262693F4549B5812E
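In the standard Ramsey formulation (my summary, not a quote from the survey), the social discount rate is

r = δ + λ + η·g

where δ is pure time preference (set to zero by longtermists), λ is the exogenous extinction hazard, η is the elasticity of marginal utility, and g is consumption growth. Only the first term is in dispute.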

Comment by Michael_Wiebe on Nuclear risk research ideas: Summary & introduction · 2022-06-06T23:59:52.987Z · EA · GW

I’m very unsure how many people and how much funding the effective altruism community should be allocating to nuclear risk reduction or related research, and I think it’s plausible we should be spending either substantially more or substantially less labor and funding on this cause than we currently are (see also Aird & Aldred, 2022a).[6] And I have a similar level of uncertainty about what “intermediate goals”[7] and interventions to prioritize - or actively avoid - within the area of nuclear risk reduction (see Aird & Aldred, 2022b). This is despite me having spent approximately half my time from late 2020 to late 2021 on research intended to answer these questions, which is - unfortunately! - enough to make me probably among the 5-20 members of the EA community with the best-informed views on those questions. [bold added]

This is pretty surprising to me. Do you have a sense of how much uncertainty you could have resolved if you spent another half-year working on this?

Comment by Michael_Wiebe on How would you draw the Venn diagram of longtermism and neartermism? · 2022-06-06T19:20:11.190Z · EA · GW

Relevant, by @HaydnBelfield:

Comment by Michael_Wiebe on A personal take on longtermist AI governance · 2022-06-05T21:10:32.656Z · EA · GW

One possible response is about long vs short AI timelines, but that seems orthogonal to longtermism/neartermism.

Comment by Michael_Wiebe on A personal take on longtermist AI governance · 2022-06-05T20:40:48.565Z · EA · GW

Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as "transformative AI governance" for relative concreteness, but a more precise concept for this subset of work is "longtermist AI governance."[3]

What work is "from a longtermist perspective" doing here? (This phrase is used 8 times in the article.) Is it that longtermists have pure time preference = 0 while neartermists have >0, so longtermists care a lot more about extinction than neartermists do (because they care more about future generations)? On that reading, longtermist AI governance means focusing on extinction-level AI risks, while neartermist AI governance is about non-extinction AI risks (eg. racial discrimination in predicting recidivism).

If so, I think this is misleading. Neartermists also care a lot about extinction, because everyone dying is really bad.

Is there another interpretation that I'm missing? Eg. would neartermists and longtermists have different focuses within extinction-level AI risks?