Michael_Wiebe's Shortform

post by Michael_Wiebe · 2020-08-19T19:19:58.481Z · score: 3 (1 votes) · EA · GW · 22 comments

22 comments

Comments sorted by top scores.

comment by Michael_Wiebe · 2020-10-16T16:50:31.201Z · score: 13 (6 votes) · EA(p) · GW(p)

So far, the effective altruist strategy for global poverty has followed a high-certainty, low-reward approach: GiveWell only looks at charities with a strong evidence base, such as bednets and cash transfers. But there's also a low-certainty, high-reward approach: promote catch-up economic growth. Poverty is strongly (negatively) correlated with economic development (urbanization, industrialization, etc.), so encouraging development would have large effects on poverty. Whereas cash transfers offer a large probability of a small effect, promoting economic growth offers a small probability of a large effect. (In general, we should diversify across high- and low-risk strategies.) In short, can we do “hits-based development”?

How can we affect growth? Tractability is the main problem for hits-based development, since GDP growth rates are notoriously difficult to change. However, there are a few promising options. One specific mechanism is to train developing-country economists, who can then work in developing-country governments and influence policy. Lant Pritchett gives the example of a think tank in India that influenced India's liberalizing reforms, which preceded a large growth episode. This translates into a concrete goal: get X economists working in government in every developing country (where X might be proxied by the corresponding number in developed countries). Note that local experts are more likely than foreign World Bank advisors to positively affect growth, since they have local knowledge of culture, politics, law, etc.

I will focus on two instruments for achieving this goal: funding scholarships for developing-country scholars to get PhDs in economics, and funding think tanks and universities in developing countries. First, there are several funding sources within economics for developing-country students, such as Econometric Society scholarships, CEGA programs, and fee waivers at conferences. I will map out this funding space, contacting departments and conference organizers, and determine if more money could be used profitably. For example, are conference fees a bottleneck for developing-country researchers? Would earmarked scholarships make economics PhD programs accept more developing-country students? (We have to be careful in designing the funding mechanism, so that recipients don’t simply reduce funding elsewhere.) Next, I will organize fundraisers, so that donors have a ‘one-click’ opportunity to give money to hits-based development. (This might take the form of small recurring donations, or larger funding drives, or an endowment.) Then I will advertise these donation opportunities to effective altruists and others who want to promote hits-based development. (One potential large funder is the EA Global Health and Development Fund.)

My second approach is based on funding developing-country think tanks. Recently, IDRC led the Think Tank Initiative (TTI), which funded over 40 think tanks in 20 countries over 2009-2019. This program has not been renewed. My first step here would be to analyze the effectiveness of the TTI, and figure out whether it deserves to be renewed. While causal effects are hard to estimate, it seems reasonable to measure the number of think tanks, their progress under the program, and their effects on policy. To do this I will interview think tank employees, development experts, and the TTI organizers. Next I will determine what funding exists for renewing the program, as well as investigate whether a decentralized funding approach would work.

comment by HaukeHillebrandt · 2020-10-16T16:55:34.041Z · score: 5 (3 votes) · EA(p) · GW(p)

Interesting. 

Related: "Some programs have received strong hints that they will be killed off entirely. The Oxford Policy Fellowship, a technical advisory program that embeds lawyers with governments that require support for two years, will have to withdraw fellows from their postings, according to Kari Selander, who founded the program."

https://www.devex.com/news/inside-the-uk-aid-cut-97771

https://www.policyfellowship.org/

comment by G Gordon Worley III (gworley3) · 2020-10-16T21:04:12.386Z · score: 2 (1 votes) · EA(p) · GW(p)

I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns than typical risk-averse charitable spending, which tends to focus on making sure no money is wasted, to the exclusion of taking the risks necessary to realize benefits.

comment by Michael_Wiebe · 2020-09-23T05:31:30.683Z · score: 6 (5 votes) · EA(p) · GW(p)

Will says [EA · GW]:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

comment by Dan_Keys · 2020-09-29T19:37:19.736Z · score: 11 (4 votes) · EA(p) · GW(p)

If humanity goes extinct this century, that drastically reduces the likelihood that there are humans in our solar system 1000 years from now. So at least in some cases, looking at the effects 1000+ years in the future is pretty straightforward (conditional on the effects over the coming decades).

In order to act for the benefit of the far future (1000+ years away), you don't need to be able to track the far future effects of every possible action. You just need to find at least one course of action whose far future effects are sufficiently predictable to guide you (and good in expectation).

comment by Michael_Wiebe · 2020-10-15T20:44:53.667Z · score: 1 (1 votes) · EA(p) · GW(p)

The initial claim is that for any action, we can assess its normative status by looking at its long-run effects. This is a much stronger claim than yours.

comment by Markus_Woltjer (markuswoltjer@gmail.com) · 2020-09-23T06:42:48.770Z · score: 3 (3 votes) · EA(p) · GW(p)

It's often laughable. I would think of it like this. Each action can be represented as a polynomial that gives its value as a function of time:

v(t) = c1*t^n + c2*t^(n-1) + ... + c3*t + c4

I would think of the value function of my life's decisions as the sum of the individual value functions. With every decision I'm presented with multiple candidate functions; I get to pick one, and its coefficients get added into my life's total value function.

Consider foresight to be the ability to predict the end behavior of v for large t. If t=1000 means nothing to you, then c1 is far less important to you than if t=1000 means a lot to you.
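(A small formal aside, using the polynomial above: for large t the leading term dominates, v(t) ≈ c1*t^n, in the sense that v(t)/(c1*t^n) → 1 as t → ∞. So caring about very large t is essentially caring about c1, and about the next-highest coefficients when c1 is negligible.)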

Some people probably consciously ignore large t; for example, educated people and politicians sometimes argue (and many of them certainly believe) that t beyond their own life expectancy doesn't matter. This is why the climate crisis has been so difficult to prioritize, especially for people in power who might not have ten years left to live.

But foresight is also an ability. A toddler has trouble considering the importance of t=0.003 (the next day), and because of that no coefficients except c4 matter. Resisting the entire tub of ice cream is impossible if you can't imagine a stomach ache.

It is unusual, probably even unnatural, to consider t=1000, but it is of course important. The largest t values we can imagine tell us the most about the coefficients for the high-degree terms in the polynomial. It is unusual for our choices to have effects on these coefficients, but some will, or some might, and those should be noticed, highlighted, etc. Until I learned the benefits of veganism, I had almost no consideration for high t values; then I was electrified by the short-term, medium-term, and especially long-term benefits, such as avoiding a tipping point for the climate crisis. That was seven years ago and it's faded a little as I'm just passively supporting plant-based meats (consequences are sometimes easier to change than hearts).

If there was ever a selfless (and shameless) plug, it is this. I would prefer to be doing something with my extra time that might have a large c1. The wildfires in my home state, even near my house, have made me think twice about leisure in 0 < t < 5, and made me think more about 20 < t. I've been sitting on an important idea for almost half a year, and it's too long to wait, not just because I don't want to be a procrastinator but also because waiting could affect high-order coefficients. I first posted this yesterday, but I want anyone who missed it or looked past it to consider it in the light of Will's suggestion to consider high t values.

________________________________________

My name is Markus Woltjer. I'm a computer scientist living in Portland, Oregon. I have an interest in developing a blue carbon capture-and-storage project. This project is still in its inception, but I am already looking for expertise in the following areas, starting mostly with remote research roles.

  • Botany and plant decomposition
  • Materials science
  • Environmental engineering

Please contact me here or at markuswoltjer@gmail.com [? · GW] if you're interested, and I will be happy to fill in more details and discuss whether your background and interests are aligned with the roles available.

comment by Michael_Wiebe · 2020-09-23T09:29:51.838Z · score: 1 (1 votes) · EA(p) · GW(p)

What is n? It seems all the work is being done by having n in the exponent.

comment by Markus_Woltjer (markuswoltjer@gmail.com) · 2020-09-23T15:06:25.890Z · score: 0 (2 votes) · EA(p) · GW(p)

I was thinking along the lines of Taylor polynomial approximations of functions. So actually this polynomial can have infinitely many terms, especially if t is unbounded, and n is just the degree of each term representing the relationship between time and value for an action. We choose n to approximate v well, accepting that it is more important for the approximation to have correct end behavior; but many actions have flat end behavior, and end behavior is less certain.

For instance, when considering the action of taking my kittens out for a walk, I might assume that long-term effects are negligible (flat end behavior), meaning that even if I could represent the value with dozens of terms (e.g. n=24), I would find that the high-order term coefficients are zero or very close to zero. Maybe the only effect on 1000 < t would come through having half an hour less each day to work on the project, but in exchange more energy. And the uncertainty might be large compared to the coefficient's predicted value: e.g. I estimate walking has a coefficient value of 0.0001 for n=24, but I also think there's a 50% chance that the value is outside the range (-0.01, 0.1). Is it right to trust that small predicted benefit and prioritize that degree when deciding to include or exclude v (to choose for or against the action associated with v)? So I would choose to consider just the last few terms, where my value approximation is more certain.

I could consider the immediate benefits of letting the kittens release energy in their adolescence rather than terrorizing the older cat, meaning my software engineering work would be interrupted less. That could be n=1, because my career success will benefit slightly and give more raises somewhat linearly year after year, but not really beyond my life and without compounding benefit. And in the most immediate term, I simply enjoy it most days, which is n=0 because the enjoyment is immediate and temporary.

comment by Aaron Gertler (aarongertler) · 2020-09-29T17:23:40.704Z · score: 2 (1 votes) · EA(p) · GW(p)

I don't think Will or any other serious scholar believes that it is "workable". It reads to me like a theoretical assumption that defines a particular abstract philosophy. 

"Looking at every possible action, calculating the expected outcome, and then choosing the best one" is also a laughable proposition in the real world, but the notion of "utilitarianism" still makes intuitive sense and can help us weigh how we make decisions (at least, some people think so). Likewise, the notion of "longtermism" can do the same, even if looking 1000 years into the future is impossible.

comment by Michael_Wiebe · 2020-09-29T17:51:13.690Z · score: 1 (1 votes) · EA(p) · GW(p)

is also a laughable proposition in the real world

Sure, but not even close to the same extent.

comment by Aaron Gertler (aarongertler) · 2020-09-29T18:07:38.943Z · score: 2 (1 votes) · EA(p) · GW(p)

I also find utilitarian thinking to be more useful/practical than "longtermist thinking". That said, I haven't seen much advocacy for longtermism as a guide to personal action, rather than as a guide to research that much more intensively attempts to map out long-term consequences.

Maybe an apt comparison would be "utilitarianism is to decisions I make in my daily life as longtermism is to the decisions I'd make if I were in an influential position with access to many person-years of planning". But this is me trying to guess what another author was thinking; you could consider writing to them directly, too.

(I assume you've heard/considered points of this type before; I'm writing them out here mostly for my own benefit, as a way of thinking through the question.)

comment by Michael_Wiebe · 2020-08-27T23:07:22.339Z · score: 5 (3 votes) · EA(p) · GW(p)

Crowdedness by itself is uninformative. A cause could be uncrowded because it is improperly overlooked, or because it is intractable. Merely knowing that a cause is uncrowded shouldn't lead you to make any updates.

comment by Michael_Wiebe · 2020-09-12T17:46:08.165Z · score: 4 (3 votes) · EA(p) · GW(p)

Longtermism is defined as holding that "what most matters about our actions is their very long term effects". What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.

A model of a longtermist social planner

Consider an infinitely-lived representative agent with population size N_t. In each period there is a risk of extinction via an extinction rate δ_t.

The basic idea is that economic growth is a double-edged sword: it increases our wealth, but also increases the risk of extinction. In particular, 'consumption research' develops new consumption technologies A_t, and these technologies increase both consumption and extinction risk.

Here are the production functions for consumption and consumption technologies:
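One illustrative specification (the functional forms and symbols here are placeholders of mine, in the spirit of the Jones-style setup this model draws on): consumption workers L_{c,t} combine with the stock of consumption technologies A_t to produce consumption, and consumption scientists S_{c,t} produce new consumption technologies:

C_t = A_t^α · L_{c,t},     A_{t+1} = A_t + A_t^φ · S_{c,t}.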

However, we can also develop safety technologies to reduce extinction risk. Safety research produces new safety technologies B_t, which are used to produce 'safety goods' H_t.

Specifically, the safety side mirrors the consumption side:
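(Again with placeholder functional forms of my own: safety workers L_{h,t} and safety technologies B_t produce safety goods, and safety scientists S_{h,t} produce new safety technologies.)

H_t = B_t^β · L_{h,t},     B_{t+1} = B_t + B_t^φ · S_{h,t},

with the labor constraint L_{c,t} + L_{h,t} + S_{c,t} + S_{h,t} = N_t.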

The extinction rate is δ_t = δ(A_t, H_t), where the stock of consumption technologies A_t directly increases risk, and the quantity of safety goods H_t directly reduces it.
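For concreteness, one placeholder form (my illustration, not a claim about the exact specification) is:

δ_t = δ̄ · A_t^ε · H_t^(-γ),   with ε, γ > 0.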

Let c_t ≡ C_t / N_t denote per-capita consumption.

Now we can set up the social planner problem: choose the number of scientists (vs workers), the number of safety scientists (vs consumption scientists), and the number of safety workers (vs consumption workers) to maximize social welfare. That is, the planner chooses an allocation of workers ℓ_t = (S_{c,t}, S_{h,t}, L_{c,t}, L_{h,t}) for all generations t = 0, 1, 2, …

The social welfare function is:
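One way to write it explicitly (in the notation above; the product term is the probability that humanity survives to generation t):

W = Σ_{t=0}^∞ [ Π_{s=0}^{t-1} (1 - δ_s) ] · N_t · u(c_t).

(Any exogenous component of extinction risk can be folded into δ_s.)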

The planner maximizes utility over all generations (t = 0, 1, 2, …), weighting by population size N_t, and accounting for extinction risk via the survival probability. The optimal allocation ℓ_t^* is the allocation that maximizes social welfare.

The planner discounts using r = δ + ηg (the Ramsey equation), where r is the discount rate, δ the exogenous extinction risk, η risk aversion (i.e., diminishing marginal utility), and g the growth rate. (Note that g could be time-varying.)

Here there is no pure time preference; the planner values all generations equally. Weighting by population size means that this is a total utilitarian planner.

Defining longtermism

With the model set up, now we can define longtermism formally. Recall the informal definition that "what most matters about our actions is their very long term effects". Here are two ways that I think longtermism can be formalized in the model:

(1) The optimal allocation in our generation, ℓ_0^*, should be focused on safety work: the majority (or at least a sizeable fraction) of workers should be in safety research or production, and only a minority in consumption research or production. (Or, ℓ_t^* for small values of t, to capture that the next few generations need to work on safety.) This is saying that our time has high hingeyness due to existential risks. It's also saying that safety work is currently uncrowded and tractable.

(2) Small deviations from ℓ_0^* (the optimal allocation in our generation) will produce large decreases in total social welfare W, driven by generations t > 1000 (or some other large cutoff). In other words, our actions today have very large effects on the long-term future. We could plot per-period welfare against t for ℓ_0^* and some suboptimal alternative ℓ_0', and show that the suboptimal path falls far below the optimal one in the tail.

While longtermism has an intuitive foundation (being intergenerationally neutral or having zero pure time preference), the commonly-used definition makes strong assumptions about tractability and hingeyness.

comment by Michael_Wiebe · 2020-09-12T17:50:50.235Z · score: 3 (2 votes) · EA(p) · GW(p)

This model focuses on extinction risk; another approach would look at trajectory changes.

Also, it might be interesting to incorporate Phil Trammell's work on optimal timing/giving-now vs giving-later. Eg, maybe the optimal solution involves the planner saving resources to be invested in safety work in the future.

comment by NunoSempere · 2020-09-12T18:22:48.893Z · score: 1 (1 votes) · EA(p) · GW(p)

You might be interested in Existential Risk and Growth

comment by Michael_Wiebe · 2020-09-12T19:07:02.673Z · score: 2 (2 votes) · EA(p) · GW(p)

My model here is based on the same Jones (2016) paper.

comment by Michael_Wiebe · 2020-09-19T15:51:50.454Z · score: 3 (2 votes) · EA(p) · GW(p)

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B. A has an endowment of 5 (with probability 1), while B has an endowment of 10 with probability p and 0 with probability 1 - p. So B either gets nothing or twice as much as A.

We choose a transfer t from A to B, before B's endowment is realized, to solve: max_t  u(5 - t) + p·u(10 + t) + (1 - p)·u(t).

For a baseline, consider log utility and p = 0.5; then we can solve the first-order condition for the optimal transfer t*. Intuitively, t* → 0 as p → 1 (if B gets 10 for sure, don't make any transfer from A to B), and t* → 2.5 as p → 0 (if B gets 0 for sure, split A's endowment equally).
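As a quick numerical check of this baseline (the endowments, log utility, and the ex-ante transfer are the illustrative assumptions above):

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Setup (illustrative): A has 5 for sure; B has 10 with probability p, else 0.
    # The planner picks one transfer t from A to B before B's endowment is realized.

    def expected_welfare(t, p):
        """Expected total log utility when transferring t from A to B."""
        return np.log(5 - t) + p * np.log(10 + t) + (1 - p) * np.log(t)

    def optimal_transfer(p):
        # Maximize expected welfare over feasible transfers t in (0, 5).
        res = minimize_scalar(lambda t: -expected_welfare(t, p),
                              bounds=(1e-6, 5 - 1e-6), method="bounded")
        return res.x

    for p in [0.01, 0.25, 0.5, 0.75, 0.99]:
        print(f"p = {p:.2f}  ->  t* = {optimal_transfer(p):.2f}")
    # t* is about 2.5 when p is near 0 and falls toward 0 as p approaches 1.

The exact numbers depend on the utility function, but the comparative statics in p are the point.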

So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of p?

Suppose we think p ~ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes:
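(Writing it out in the notation above, with the endowments and transfer from the earlier example:)

max_t  E_F[ u(5 - t) + p·u(10 + t) + (1 - p)·u(t) ]  =  max_t  { u(5 - t) + E_F[p]·u(10 + t) + (1 - E_F[p])·u(t) }.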

Since the objective function is linear in probabilities, we end up with the same problem as before, except with E_F[p] instead of p. If we know the mean of F, we plug it in and solve as before.

So it turns out that this form of uncertainty doesn't change the problem very much.

Questions:
- if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
- how does a stochastic dominance decision theory work here?

comment by MichaelStJules · 2020-09-19T18:05:14.450Z · score: 3 (2 votes) · EA(p) · GW(p)

if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?

It's possible in a given situation that we're willing to commit to a range of probabilities, e.g. an interval of values for p (without committing to a point estimate or any other single number), so that we can check the recommendations for each value of p in that range (sensitivity analysis).

I don't think maxmin utility follows, but it's one approach we can take.

what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?

Yes, I think so.
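(Sketching why, in the notation above: if m is the uncertain mean of F, then by the law of iterated expectations E[p] = E_m[ E[p | m] ] = E_m[m], and since the objective is linear in p, only this overall mean matters.)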

how does a stochastic dominance decision theory work here?

I'm not sure specifically, but I'd expect it to be more permissive and often allow multiple options for a given setup. I think the specific approach in that paper is like assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about the individual utility/welfare functions, which makes things more complicated.)

comment by MichaelStJules · 2020-09-19T23:52:42.207Z · score: 3 (2 votes) · EA(p) · GW(p)

I think we can justify ruling out all options the maximality rule [EA · GW] rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties of p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won't commit to a probability for either.

comment by Michael_Wiebe · 2020-08-19T19:20:09.742Z · score: 3 (2 votes) · EA(p) · GW(p)

We need to drop the term "neglected". Neglectedness is crowdedness relative to importance, and the everyday meaning is "improperly overlooked". So it's more precise to refer to crowdedness ($ spent) and importance separately. Moreover, saying that a cause is uncrowded has a different connotation than saying that a cause is neglected. A cause could be uncrowded because it is overlooked, or because it is intractable; if the latter, it doesn't warrant more attention. But a neglected cause warrants more attention by definition.
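For reference, one standard way to write the decomposition (roughly the 80,000 Hours framework, with my labels) is:

good done per extra $ = (good done per % of problem solved) × (% of problem solved per % increase in resources) × (% increase in resources per extra $),

where the first factor is importance, the second is tractability, and the third equals 1/($ already spent), i.e., crowdedness. Crowdedness enters only through the last factor, so it can be reported separately from importance.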

More: https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework [EA · GW]

comment by Michael_Wiebe · 2020-10-15T20:43:02.031Z · score: 1 (1 votes) · EA(p) · GW(p)

Why don't models of intelligence explosion assume diminishing marginal returns? In the model below, what are the arguments for assuming constant returns to the existing level of intelligence, rather than diminishing marginal returns (e.g., an exponent less than 1)? With diminishing returns, an AI can only improve itself at a diminishing rate, so we don't get a singularity.


https://www.nber.org/papers/w23928.pdf
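To make the contrast concrete, here is a stylized self-improvement equation (a simplification of my own, not the exact model in the paper): suppose intelligence I grows according to dI/dt = I^φ. Then:

- for φ > 1, I(t) = [I_0^(1-φ) - (φ-1)·t]^(-1/(φ-1)), which blows up at the finite time t* = I_0^(1-φ)/(φ-1): a singularity;
- for φ = 1, growth is exponential: I(t) = I_0·e^t;
- for φ < 1 (diminishing marginal returns), I(t) = [I_0^(1-φ) + (1-φ)·t]^(1/(1-φ)), which grows only polynomially, so there is no singularity.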