Posts

Informational Lobbying: Theory and Effectiveness 2020-07-30T22:02:15.200Z · score: 64 (25 votes)
Matt_Lerner's Shortform 2019-12-20T18:11:47.835Z · score: 2 (1 votes)
Kotlikoff et al., 'Making Carbon Taxation a Generational Win Win' 2019-11-24T16:27:50.351Z · score: 14 (7 votes)

Comments

Comment by mattlerner on Effective donation for Moria / Lesbos · 2020-10-20T18:05:32.167Z · score: 3 (5 votes) · EA · GW

I wonder if the forum shouldn't encourage a class of post (basically like this one) that's something like "are there effective giving opportunities in X context?" Although EA is cause-neutral, there's no reason why members shouldn't take the opportunity provided by serendipity to investigate highly specific scenarios and model "virtuous EA behavior." This could be a way of making the forum friendlier to visitors like the OP, and a way for comments to introduce visitors to EA concepts in a way that's emotionally relevant.

Comment by mattlerner on EA's abstract moral epistemology · 2020-10-20T15:00:42.382Z · score: 5 (4 votes) · EA · GW

I also found this (ironically) abstract. There are more than enough philosophers on this board to translate this for us, but I think it might be useful to give it a shot and let somebody smarter correct the misinterpretations.

The author suggests that the "radical" part of EA is the idea that we are just as obligated to help a child drowning in a faraway pond as in a nearby one:

The morally radical suggestion is that our ability to act so as to produce value anywhere places the same moral demands on us as does our ability to produce value in our immediate practical circumstances

She notes that what she sees as the EA moral view excludes "virtue-oriented" or subjective moral positions, and lists several views (e.g. "Kantian constructivist") that are restricted if one takes what she sees as the EA moral view. She maintains that such views, which (apparently) have a long history at Oxford, have a lot to offer in the way of critique of EA.

Institutional critique

In a nutshell, EA focuses too much on what it can measure, and what it can measure are incrementalist approaches that ignore the "structural, political roots of global misery." The author says that the EA responses to this criticism (that even efforts at systemic change can be evaluated and judged effective) are fair. She says that these responses constitute a claim that the institutional critique is a criticism of how closely EA hews to its tenets, rather than of the tenets themselves. She disagrees with this claim.

Philosophical critique

This critique holds that EAs basically misunderstand what morality is: that the point of view of the universe is not really possible. The author argues that attempting to take this perspective actively "deprives us of the very resources we need to recognise what matters morally". In other words, taking the abstract view eliminates moral information from our reasoning.

The author lists some of the features of the worldview underpinning the philosophical critique. Acting rightly includes:

acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others

acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence

She concludes:

In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality ... There is here simply no room for EA-style talk of “most good.”

So in this view there are situations in which morality is more expansive than the improvement of others' well-being, and taking the abstract view eliminates these possibilities.

The philosophical-institutional critique

The author combines the philosophical and institutional critiques. The crux of this view seems to be that large-scale social problems have an ethical valence, and that it's basically impossible to understand or begin to rectify them if you take the abstract (god's eye) view, which eliminates some of this useful information:

Social phenomena are taken to be irreducibly ethical and such that we require particular modes of affective response to see them clearly ... Against this backdrop, EA’s abstract epistemological stance seems to veer toward removing it entirely from the business of social understanding.

This critique maintains that it's the methodological tools of EA ("economic modes of reasoning") that block understanding, and articulates part of the worldview behind this critique:

Underlying this charge is a very particular diagnosis of our social condition. The thought is that the great social malaise of our time is the circumstance, sometimes taken as the mark of neoliberalism, that economic modes of reasoning have overreached so that things once rightly valued in a manner immune to the logic of exchange have been instrumentalised.

In other words, the overreach of economic thinking into moral philosophy is a kind of contamination that blinds EA to important moral concerns.

Conclusion

Finally, the author contends that EA's framework constrains "available moral and political outlooks," and ties this to the lack of diversity within the movement. By excluding more subjective strains of moral theory, EA excludes the individuals who "find in these traditions the things they most need to say." In order for EA to make room for these individuals, it would need to expand its view of morality.

Comment by mattlerner on The Risk of Concentrating Wealth in a Single Asset · 2020-10-18T21:46:09.702Z · score: 1 (1 votes) · EA · GW

I'm curious to hear Michael's response, but also interested to hear more about why you think this. I have the opposite intuition: presumably 1910 had its fair share of moonshots which seemed crazy at the time and which turned out, in fact, to be basically crazy, which is why we haven't heard about them.

A portfolio which included Ford and Edison would have performed extremely well, but I don't know how many possible 1910 moonshot portfolios would have included them or would have weighted them significantly enough to outperform the many failed other moonshots.

Comment by mattlerner on Introducing LEEP: Lead Exposure Elimination Project · 2020-10-06T19:35:01.313Z · score: 5 (4 votes) · EA · GW

I'm really excited to see this!

I understand that, lead abatement itself aside, the alkalinity of the water supply seems to have an impact on lead absorption in the human body and its attendant health effects. I'm curious whether (1) this impact is significant (2) whether interventions to change the pH of water are competitive in terms of cost-effectiveness with other types of interventions and (3) whether this has been tried.

Comment by mattlerner on No More Pandemics: a lobbying group? · 2020-10-04T16:02:47.784Z · score: 2 (2 votes) · EA · GW

The venue of advocacy here will depend at least in part on the policies you decide are worth advocating. Even with hundreds of grassroots volunteers, it will be hard to ensure the fidelity of the message you are trying to communicate. It is hard at first blush to imagine how greater attention to pandemic preparedness could do harm, but it is not difficult to imagine that simply exhorting government to "do something" could have bad consequences.

Given the situation, it seems likely that governments preparing for future pandemics without clear guidance will prepare for a repeat of the pandemic that is already happening, rather than a different and worse one in future.

Once you've selected a highly effective policy worth advocating (for example, an outbreak contingency fund), that's the stage at which to determine the venue and the tactics. I'm not a bio expert, but it's not difficult to imagine that once you identify a roster of potential policies, the most effective in expectation may involve, for example, lobbying Heathrow Airport Holdings or the Greater London Authority rather than Parliament.

Comment by mattlerner on Some learnings I had from forecasting in 2020 · 2020-10-04T15:34:54.809Z · score: 5 (3 votes) · EA · GW
The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting.

This seems to be true and also to be an emerging consensus (at least here on the forum).

I've only been forecasting for a few months, but it's starting to seem to me like forecasting does have quite a lot of value—as valuable training in reasoning, and as a way of enforcing a common language around discussion of possible futures. The accuracy of the predictions themselves seems secondary to the way that forecasting serves as a calibration exercise. I'd really like to see empirical work on this, but anecdotally it does feel like it has improved my own reasoning somewhat. Curious to hear your thoughts.

Comment by mattlerner on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-20T22:38:02.820Z · score: 2 (2 votes) · EA · GW

I think scale/scope is a pretty intuitive way of thinking about problems, which is I imagine why it's part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don't see too much of a coincidence here.

If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small a scale an issue to warrant much concern.

This is a good point. I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you're right that my "straw activist" would probably scoff at AI risk, for example.

I guess I'd say that the way of thinking I've described doesn't imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there'd be no reason for someone like this to accept that some of the more "out there" GCRs are GCRs at all.

Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come "along for the ride" when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.

Comment by mattlerner on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-19T04:47:33.347Z · score: 2 (2 votes) · EA · GW

This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:

the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement

This criticism is more than fair. I have to agree with it and simultaneously point out that of course this is a problem that many are aware of and are actively working to change. I don't think that they're explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors' explicit rejection of science and objectivity.

Comment by mattlerner on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-18T22:58:56.069Z · score: 6 (4 votes) · EA · GW

I can get behind your initial framing, actually. It's not explicit—I don't think the authors would define themselves as people who don't believe decision under uncertainty is possible—but I think it's a core element of the view of social good professed in this article and others like it.

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.

These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.

To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning— a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:

the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites

These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they're hugely skeptical about the methods themselves, and aren't able or willing to use them in decision-making.

I don't think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group) have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.

Comment by mattlerner on evelynciara's Shortform · 2020-09-04T23:34:21.237Z · score: 3 (3 votes) · EA · GW

I think the instrumental benefits of greater equality (racial, gender, economic, etc.) are hugely undersold, particularly by those of us who like to imagine that we're somehow "above" traditional social justice concerns (including myself in this group, reluctantly and somewhat shamefully).

In this case, I think your thought is spot on and deserves a lot more exploration. I immediately thought of the claim (e.g. 1, 2) that teams with more women make better collective decisions. I haven't inspected this evidence in detail, but on an anecdotal level I am ready to believe it.

Comment by mattlerner on More empirical data on 'value drift' · 2020-09-03T20:55:25.728Z · score: 1 (1 votes) · EA · GW

The former! This is pretty sensitive to modeling choices: tried a different way, I get an engagement effect of 31 percentage points (38% vs. 7% dropout).

The modeling assumption made here is that engagement level shifts the whole distribution of dropout rates, which otherwise looks the same; not sure if that's justifiable (seems like not?), but the size of the data is constraining. I'd be curious to hear what someone with more meta-analysis experience has to say about this, but one way to approximate value drift via a diversity of measurements might be to pile more proxy measurements into the model—dropout rates, engagement reductions, and whatever else you can come up with—on the basis that they are all noisy measurements of value drift.

I'd be super curious to know if the mean/median age of EA right now is a function of the people who got into it as undergrads or grad students several years ago and who have continued to be highly engaged over time. Not having been involved for that long, I have no idea whether that idea has anecdotal resonance.

Comment by mattlerner on More empirical data on 'value drift' · 2020-09-03T16:12:29.118Z · score: 3 (2 votes) · EA · GW
they're for really different groups at very different levels of engagement (which leads to predictably very different drop out rates).

This is the reason for doing a random effects meta-analysis in the first place: the motivating assumption is that the populations across studies are very different and so are the underlying dropout rates (e.g. differing estimates are due not just to within-study variation but also to cross-study variation of the kind you describe).

Still, it was sloppy of me to describe 23% as the true estimate above: in RE, there is no true estimate. A better takeaway is that, within the scope of the kind of variation we see across these survey populations, we'd almost certainly expect to see dropout of less than 40%, regardless of engagement level. Perhaps straining the possibilities of the sample size, I ran the analysis again with an intercept for engagement: high engagement seems to be worth about 21 percentage points' worth of reduced dropout likelihood on the 5-year frame.

>60% persistence in the community at large seems pretty remarkable to me. I understand that you haven't been able to benchmark against similar communities, but my prior for dropout in youth movements (as I think EA qualifies) would be considerably higher. Do you have a reference class for the EA community in mind? If so, what's in it?

Comment by mattlerner on More empirical data on 'value drift' · 2020-09-02T22:32:10.873Z · score: 16 (6 votes) · EA · GW

FWIW, I did a quick meta-analysis in Stan of the adjusted 5-year dropout rates in your first table (for those surveys where the sample size is known). The punchline is an estimated true mean cross-study dropout rate of ~23%, with a 90% CI of roughly [5%, 41%]. For good measure, I also fit the data to a beta distribution and came up with a similar result.
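For transparency, here's a rough sketch of the flavor of model I mean. This is a simple DerSimonian-Laird random-effects estimator in Python rather than the Stan model I actually ran, and the dropout rates and sample sizes below are placeholders, not the survey values:

```python
import numpy as np
from scipy.special import logit, expit

# Placeholder adjusted 5-year dropout rates and sample sizes
# (NOT the real values from the table in the post).
p = np.array([0.15, 0.30, 0.20, 0.40, 0.10])
n = np.array([120, 80, 200, 60, 150])

# Per-study logit rates and approximate (delta-method) variances.
y = logit(p)
v = 1.0 / (n * p * (1.0 - p))

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled mean and 90% CI, back on the probability scale.
w_re = 1.0 / (v + tau2)
mu = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(expit(mu), expit(mu - 1.645 * se), expit(mu + 1.645 * se))
```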

I struggle with how to interpret these numbers. It's not clear to me that the community dropout rate is a good proxy for value drift (however it's defined), as in some sense it is a central hope of the community that the values will become detached from the movement -- I think we want more and more people to feel "EA-like", regardless of whether they're involved with the community. It's easy for me to imagine that people who drift out of the movement (and stop answering the survey) maintain broad alignment with EA's core values. In this sense, the "core EA community" around the Forum, CEA, 80k, etc is less of a static glob and more of a mechanism for producing people who ask certain questions about the world.

Conversely, value drift within members who are persistently engaged in the community seems to be of real import, and presumably the kind of thing that can only be tracked longitudinally, by matching EA Survey respondents across years.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-25T19:34:08.478Z · score: 1 (1 votes) · EA · GW

Though I didn't read Godwin (now on my to-do list), I encountered some useful research that seemed to point toward the idea that regulatory lobbying could be a lot more efficient than legislative lobbying. By the end of my review, I had started to think that it would have been more productive to do that instead.

Since I finished, though, I've been thinking about one of the main concerns I have about regulatory lobbying. The fact that it's probably (comparatively) easy to influence regulatory agencies means that it's pretty easy to walk back any positive rule changes. This seems to happen fairly frequently, e.g. with EPA regulations.

From that standpoint, the stickiness of the status quo in the legislative context is also an advantage: when policy change succeeds legislatively, the new policy becomes part of the difficult-to-change status quo. For longtermist-oriented policies, it seems like this is a major advantage over regulatory changes.

Curious to hear your thoughts.

Comment by mattlerner on Do research organisations make theory of change diagrams? Should they? · 2020-08-23T15:10:55.295Z · score: 7 (2 votes) · EA · GW
But maybe this push should take the form of explicitly highlighting the option of making ToC diagrams, providing some good examples, and encouraging people to try it a few times. And then hopefully, if the employees were chosen well, they'll naturally come to use them about as often as they should. 

This is probably the right course of action. Before the project I just finished, it was never really clear to me in which settings flow chart-type diagrams made sense. As a more or less mathy type, I think I didn't give them their due. Now that I've seen them in practice, I've started making them here and there.

I think just giving employees the allowance to make diagrams instead of slideshows or reports, and cluing them into best practices (see e.g. this guide from the CDC) can go a long way. It seems like lots of staffers go down the report/slideshow rabbit hole because they want to be seen to be doing something. This results in long, unread memos, etc.

There's another benefit, too: staffers have sometimes dramatically different writing and design skills, and simple diagrams can lower the barriers to communicating ideas for employees who may not be confident in these skills. If staff members are held to a strict standard for the clarity and coherence of logic models, the models can be a way of rapidly iterating ideas that would otherwise remain unheard.

Comment by mattlerner on Do research organisations make theory of change diagrams? Should they? · 2020-08-23T14:40:44.613Z · score: 3 (2 votes) · EA · GW

I've just finished a project working with a large American foundation (not sure I'm okay to say which, but it's in the top 10 largest). They use logic models / ToC diagrams internally as their lingua franca: everything is expressed as a diagram. I feel a little ambivalent.

On the one hand, they are a clear and expeditious way of expressing information that might otherwise be crammed into a memo no one will read. They also very clearly express causal flow in a way that other media might not, which can facilitate understanding. At the foundation I worked with, they seem to be used primarily as a way of rapidly communicating mechanisms of action (e.g. in proposed foundation grants or investments) to and between program officers, who seem extremely pressed for time.

On the other hand, I also saw reports and presentations crammed full of incredibly detailed logic models. I'm talking about pages and pages on which small-type boxes and arrows completely fill each page. I really don't think this is useful. These incredibly detailed models are not easy to understand at a glance, and they seem to sit in an unhappy middle ground: by being complicated, they challenge comprehension, but by being simplifications, they occlude important details relevant to the mechanism being described.

I got the impression that because the order had come down from on high to put everything in a logic model, it was being done even in contexts where these models made no sense. I worried that the focus on logic models encourages only a logic-model-level understanding of the world, while simultaneously eating up huge amounts of foundation time creating diagrams that few will look at or understand.

However, I am still a convert. I think theory of change / logic models do have a lot of value, but I think they need to be used sparingly and kept small. I'd make some kind of a rule: no more than twenty boxes in a model, or something like that.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-18T22:32:32.659Z · score: 1 (1 votes) · EA · GW

1) Sounds good to me! We can connect about it over DM.

2) Your reading is right. A priori, a positive correlation means lower cost-effectiveness in expectation. However, I'm not sure if it means anything generally for the median cost-effectiveness (which I tried to work with in my existing CEA), irrespective of the other model parameters. And in my existing setup, if worlds of high spending and high success are more likely to co-occur, and worlds with low spending and low success are more likely to co-occur, then I believe the distribution of their product would have been more dispersed, since there would be more values at the extremes (high/high and low/low) than there would be if they were independent. But I'm pretty convinced now that a better approach would have been, as you've suggested, to do separate CEAs conditional on various assumed interventions. Rather than change the parameters of independent distributions as I did in the posted analysis, the true next step is probably to re-model under varying assumptions about the covariance of the different variables.

3) I have a different sense of this, but not an overwhelmingly different sense, and I'm going to think about it some more.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-17T14:50:34.270Z · score: 1 (1 votes) · EA · GW

After your comments and @jackva's, I actually struck this conclusion. I was trying to make a more modest statement that upon reflection (thanks to you) is (1) not such a valuable claim and (2) not well-supported enough to have >50% confidence in. It's true that Baumgartner et al don't find that money doesn't matter; my initial (now disavowed) read was that if resources mattered independent of deployment strategy, then we'd expect to see a much stronger correlation even in the observational context. I sort of think that this observation holds true even given the passage you've cited, but it's definitely not a top-level takeaway from the lit review and definitely needs a considerably more robust defense than I am prepared to muster.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-12T20:22:06.910Z · score: 1 (1 votes) · EA · GW

Points all well-taken. I'd love to share with FP's journal club, though I hasten to add that I'm still making edits and modifications based on your feedback, @smclare's, and others.

With respect to uncertainty in the CE calculation, my thinking was (am I making a dumb mistake here?) that because Var(XY) = E[X^2 Y^2] - (E[XY])^2, and E[XY] = E[X]E[Y] + Cov(X, Y), then Var(XY) = E[X^2 Y^2] - (E[X]E[Y] + Cov(X, Y))^2. So if covariance is nonzero, then (I think?) the variance of the product of two correlated random variables should be bigger than in the uncorrelated counterfactual.
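As a quick numerical check of that intuition (the lognormal parameters here are purely illustrative, not the CEA's actual inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_product(rho, n=1_000_000):
    # Correlated standard normals, exponentiated into lognormals,
    # standing in for two positive quantities in a CEA.
    cov = [[1.0, rho], [rho, 1.0]]
    a, b = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.var(np.exp(a) * np.exp(b))

print(var_of_product(0.0))  # independent baseline
print(var_of_product(0.5))  # positively correlated: much larger variance
```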

To me, the main value of the CE model was in the sensitivity analysis - working through it really helped me think about what "effective lobbying" would have to be able to do, and where the utility would lie in doing so. I think if it doesn't serve this purpose for the reader, then I agree this document would have been better off without the model altogether.

Thanks for your thoughts on money in politics. Vis (1) I have to think more about this, but I do definitely view the topic a little differently. For instance, it's not obvious to me that economic arguments and political representation do the necessary work of regulatory capture. Boeing is in Washington and Northrop Grumman is in Virginia. It seems clear that the representatives of the relevant districts are prepared to argue for earmarks that will benefit their constituents... but these companies are still in direct competition, and it seems like there's still strategic benefit to each in getting the rest of Congress on their side. I might misunderstand: maybe we're reaching the limits of asynchronous discussion on this topic.

Vis (2), the "inside view" I was talking about was actually yours, as someone who thinks about this professionally- so thank you for your thoughts!

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-11T16:54:38.889Z · score: 1 (1 votes) · EA · GW

I'm replying again here to note that I've struck the salience point from my conclusions. I've noted why up top. I now have a lot of uncertainty about whether this is the case or not, and don't stand by my suggestion that salience is a good guide to resource allocation.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-09T18:06:44.756Z · score: 2 (2 votes) · EA · GW

Thanks for your response!

With respect to your first point, I'm considering striking this conclusion upon reflection - see my discussion with @jackva elsewhere in this thread. In any case, my confidence level here is certainly too high given the evidence, and I really appreciate your close attention to this.

With respect to your second point, I don't mean to imply that the lack of organized opposition is the only thing that justifies lobbying expenditure, and think my wording is sloppy here as well. I used "lack of an organized opposition" to refer broadly to oppositions that are simply doing less of the (ostensibly) effective things — lower "organizational strength" as in Caldeira and Wright (1998), number of groups, as in Wright (1990), or simply lower relative expenditure, as in Ludema, Mayda, and Mishra (2018).

The evidence in Baumgartner et al that you reference about the apparent association between lack of countermobilization and success is also related to @jackva's concern about my underemphasis on potential lobbying equilibria here. On the one hand, I think this is clearly evidence in favor of the hypothesis that there is some efficiency in the market for lobbying: perhaps most lobbyists have a good idea of which efforts succeed, and don't bother to countermobilize against less sophisticated opposition. On the other hand, lobbying is a sequential game, and, since the base rate for policy enactment is so low to start with, it makes sense that opposition wouldn't appear until there's a more significant threat.

EDIT: I've actually struck the first bit, with a note. I wanted to add one more thing, which is that I don't know how much you've adjusted your prior on lobbying, but I wouldn't say this has made me "optimistic" about lobbying. The core thing I've come away with is that lobbying for policy change is extraordinarily unlikely to succeed, but that marginal changes to increase the probability of success are (1) plausible, based on the research and (2) potentially cost-effective, based on the high value of some policies.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-08T04:00:06.705Z · score: 1 (1 votes) · EA · GW

I like this spreadsheet idea and think I may kick it off (if you haven't already done so!)

I took the project on because I got interested in this topic, went looking for this, couldn't find it, and decided to make it so that it might be useful to others. I wasn't feeling very useful in my day job, so it was easy to stay motivated to spend time on this for a while. I tend to be most interested in generalizable or flexible approaches to improving welfare across different domains, and this seemed like it might be one of those.

Some areas I'm thinking about exploring. These are pretty rough thoughts:

  • Some more exploration of strategies for ameliorating child abuse in light of the well-known ACEs study. GiveWell and RandomEA have both explored Nurse-Family Partnerships. This problem is just so huge in terms of people affected (and in terms of second-order effects) that I think it's worth exploring a lot more. I'm particularly interested in focusing on child sexual abuse.
  • Aggregating potentially cost-effective avenues to improve institutional performance. I'm curious about thinking at a higher level of abstraction than institutional decision-making. It seems worthwhile to put together the existing cross-disciplinary evidence on the question: what steps outside of those explicitly focusing on rationality and decision-making can companies/nonprofits/government agencies take to increase the probability that they make good decisions? A good example of one such step is the apparent evidence that intellectually diverse teams make better decisions.
  • Long-term cost-effectiveness of stress reduction for pregnant women (with potential effects on infant mortality, maternal health, and long-term outcomes like brain development and violence).
  • Review of recent innovations that seem like they might have potential for expediting scientific progress (like grant lotteries).
Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-08T02:59:51.920Z · score: 11 (4 votes) · EA · GW

Hello and thank you for your response!

Your criticism of the cost-effectiveness model is fair. Thematically, I guess it does contradict the spirit of my prior analysis in that it avoids the concerns of strategic choice. I was actively trying to be as general as possible, and actively trying to err on the side of greater uncertainty by not including any assumptions about correlatedness, though it occurs to me now that making such an assumption (e.g. a correlation between expenditure and likelihood of success) would actually have increased the variance of the final estimate, which would have been more in line with my goals. When I have time, I may comment here with an updated CEA.

I also agree that the only useful way to do this analysis is, as you've described, with a suite of models for different scenarios. I don't have a defense for not having done this beyond my own capacity constraints, though I hope it's more useful to have included the flawed model than not to have one at all (what do you think?).

I also think that the conclusion which, I believe, mostly draws from Baumgaertner " (80%) Well-resourced interest groups are no more or less likely to achieve policy success, in general, than their less well-resourced opponents." is quite surprising and I would be curious to find out why you think that / in how far you trust that conclusion.


Thanks for this, in particular. I think your surprise stems from a lack of clarity on my part. The reason I have high confidence in this conclusion is that it's a much weaker claim than it might seem. It does stem primarily from Baumgartner et al and from Burstein and Linton (2002). The claim here is that resource-rich groups are no more or less likely to get what they want, holding all else equal, including absolute expenditure and the spending differential between groups and their opponents.

There are three types of claim that are closely related:
1) Groups that spend more relative to their opposition on a given policy are likelier to win
2) Groups that spend more in absolute terms are likelier to win
3) Groups that have more money to spend are likelier to win

So I found fairly consistent evidence for (1), some evidence for (2), and no real evidence for (3). It's not obvious to me that (3) should be the case irrespective of (1): why would resource-rich groups succeed in lobbying if they deploy those resources poorly? It seems like the success of resource-rich groups is dependent upon (1), and that (3) should not be true in isolation, unmediated by (1). Although Baumgartner et al conduct an observational study, the size of their (to me, convincingly representative) sample suggests that if such an effect exists, it should be observable as a correlation in their analysis. The association they observe is pretty small.

I have to say, though, that in writing this comment, my confidence in this conclusion has eased up a bit, so I'm curious to hear your response. I also think that since Baumgartner et al do find a small effect, I probably overstate the case here.

Baumgartner et al offer a theoretical take on this: "...organizations rarely lobby alone. Citizen groups, like others, typically participate in policy debates alongside other actors of many types who share the same goals. For every citizen group opposing an action by a given industrial group, for example, there may also be an ally coming from a competing industry with which the group can join forces" (p.12). So it's important to recognize that the finding here is about individual parties, not "sides" or coalitions advocating a given policy.

Finally, I'm curious to hear your take on the two potential money-in-politics explanations you mentioned. I've never found (1) particularly convincing—it's not clear to me that firms and their employees have the same interests, or that (if they do) the marginal value of regulatory capture isn't still high. But I agree that I underemphasized (2) and think it would be useful to have in this thread the "inside view" on lobbying equilibria from someone who works in the field.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-07-31T19:27:54.309Z · score: 4 (2 votes) · EA · GW

Thanks for your response!

(1) I spent something like 100 hours on this over the course of several months. I think I could have cut this by something like 30-40% if I'd been a little bit more attentive to the scope of the research. I decided on the scope (assessing the effectiveness of national-level legislative lobbying in the U.S.) at the beginning of the project, but I repeatedly wound up off track, pursuing lines of research outside of what I'd decided to focus on. I also spent a good chunk of time on the GitHub repo with the setup for analyzing lobbying data, which wasn't directly related to the lit review but which I felt served the goal of presenting this as a foundation for further research.

If I had 40 more hours, I'd intentionally pursue an expanded scope. In particular, I'd want to fully review the research on lobbying of (a) regulatory agencies and (b) state and local governments. I explicitly excluded studies along those lines, some of which were very interesting.

(2) Thanks for asking for clarification on this. Baumgartner et al mean that it takes a long time for policy change to be observed on any given issue. After starting to pursue a policy goal, lobbyists are more likely to see success after four years than after two.

Baumgartner et al include a chapter that is mostly critical of the incrementalist idea of policy change, which they trace to Charles Lindblom's 1959 article The Science of "Muddling Through". Incrementalism is tied to Herbert Simon's idea of "bounded rationality." Broadly, the incrementalist idea is that policymakers face a broad universe of possible policy options, and in order to reduce the landscape to a manageable set, they choose from only the most available options, e.g. those closest to the status quo: "incremental" changes.

Frank Baumgartner, together with Bryan Jones, is now well-known for the theory of "punctuated equilibrium." This is a partial alternative to incrementalism which uses the analogy of friction to understand policy change. Basically: the pressure builds on an issue over a period of time, during which no change occurs. Once the pressure is overwhelming, policy shifts in a major way.

I say that punctuated equilibrium is a "partial" alternative because Baumgartner and Jones actually collected data that seems to demonstrate that policy change follows a steeply peaked, fat-tailed distribution. Their overall takeaway is that very small changes are overwhelmingly common, but moderate changes are relatively uncommon, and very large changes are surprisingly common. To come back to your question, Baumgartner et al might say that although most policy change is incremental—like year-to-year changes in agency budgets—meaningful policy change happens in a big way, all of a sudden.

(3) I agree with you. I think some of my suggested policies are not likely to be those most effectively advocated for, and I included them just to give a flavor of the types of things we might care about lobbying for. Coming up with more practicable ideas is, I think, a much bigger, much longer-term project.

I also think that although lobbying for the status quo is more effective all other things being equal, it may not be the best use of EA resources to focus exclusively on that side of things. That's because (per the counteractive lobbying theory) on many issues there are latent interests that will arise to lobby against harmful proposals. It's hard to identify beforehand which proposals will stimulate this opposition, so there's a lot of prior uncertainty as to whether funding opposition to policy change is marginally useful in expectation.

(4) There are a lot of takes on the Tullock paradox, but I'll present two broad possible explanations.

  • Explanation A: Lobbying is basically ineffective, and the reason we don't see more lobbying is that most organizations recognize its ineffectiveness.
  • Explanation B: Lobbying is highly effective, and the reason we don't see more lobbying is that relatively small expenditures can exert enormous amounts of leverage.

Given the evidence here, I'm starting to be a lot more inclined toward Explanation B. I think it's demonstrably not the case, as you have noted with respect to the Clean Air Task Force, that organizations that lobby are wasting their money. For both altruistic and self-interested interest groups, the rewards to be captured are very large, and they make it worth the risk of wasting money. Alexander, Scholz, and Mazza (2009), for example, find a 22,000% return on investment.

If Explanation B holds, then the question is really just why the market for policy isn't efficient. Why hasn't the price of lobbying been bid up to the value of the rewards to be captured? I think it seems likely that this is down to multiple layers of information asymmetry (between legislators and their staffs, between these staffers and lobbyists, between lobbyists and their clients, etc.), which create multiple layers of uncertainty and drive the expected value of lobbying down from the standpoint of those in a position to purchase it.

I agree with you that a normal distribution is probably not the best choice to model the expected incremental change in probability. I felt like, given my CI for this figure and my sense that values closer to 0% and values closer to 5% were each less likely than values in the middle of that range, this served my purposes here, but please take my code and modify as you see fit!
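For anyone who wants to experiment, here's a minimal sketch of the kind of substitution I'd consider, e.g. a Beta distribution scaled onto [0%, 5%] instead of a normal; the shape parameters are placeholders, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Roughly what the posted CEA does: a normal centered in the 0-5% range.
normal_draws = rng.normal(loc=0.025, scale=0.01, size=100_000)

# Alternative: Beta(2, 2) scaled onto [0, 0.05]. All mass stays in range,
# and values near 0% and 5% are still less likely than the middle.
beta_draws = 0.05 * rng.beta(2.0, 2.0, size=100_000)

print((normal_draws < 0).mean())   # the normal puts some mass below zero
print(beta_draws.min(), beta_draws.max())  # the beta cannot leave [0, 0.05]
```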

Perhaps we want to start with a low prior chance of policy success, and then update way up or down based on which policy we're working on. Do you think we'd be able to identify highly-likely policies in practice?

I don't know. I think it's worth investigating. It seems like, given an already-existing basket of policies we'd be interested in advocating for, we can make lobbying more cost-effective just by allocating more resources to (e.g.) issues that are less salient to the public.

I have a sense that lobbyists do, in fact, do something like what you're describing, and that this is part of the resolution to the Tullock paradox. Money spent on lobbying is not spent all at once: lobbyists can make an effort, check their results, report to their clients, and identify whether or not they're likely to meet with success in continued expenditure. If lobbying expenditure on a given topic seems unlikely to make a difference, then it can just stop. I wasn't able to find anything on how this process actually works, so the next step in this research is to actually talk to some lobbyists.

(5)

I think perhaps something that's missing here is a discussion of incentives within the civil service or bureaucracy

I agree with this too. I'd love for an EA with a public choice background to tackle this topic. I didn't consider it as part of my scope, but I do want to note something:

A policy proposal like taking ICBMs off hair-trigger alert just seems so obvious, so good, and so easy that I think there must be some illegible institutional factors within the decision-making structure stopping it from happening.

I think this is probably true in many if not most cases of yet-to-be-implemented policy changes that are obvious, good, and easy. It is probably true in this case. But I want to warn against concluding that, because some obvious, good, and easy policy change has not been implemented, that means that there is some illegible institutional factor that is stopping it from happening. It could just be that no one has been pushing for it. In EA terms, it's an important and tractable policy change that's neglected by the policy community. Given what I know about the policy community, it's not at all difficult for me to imagine that such policies exist.

Comment by mattlerner on Sample size and clustering advice needed · 2020-07-30T17:56:45.691Z · score: 4 (2 votes) · EA · GW

I refer you to Sindy's comment (she is actually an expert) but I want to note and verify that it sounds as if you may not actually be thinking of collecting individual-level data, and that you're thinking of making observations at the village level (e.g. what % of people in this village wear masks?). So it's not just the case that you wouldn't have enough clusters to make a statistical claim, but you may actually be talking about doing an experiment in which the units are villages... so n = 6 to 12. Then of course you'd have considerable error in the village-level estimate, and uncertainty about the representativeness about the sample within each village. I agree with Sindy that you probably don't want an RCT here.

Comment by mattlerner on Sample size and clustering advice needed · 2020-07-29T16:14:24.547Z · score: 10 (3 votes) · EA · GW

If you don't already have it, I would strongly recommend getting a copy of Gerber & Green's Field Experiments. I would also very strongly recommend that you (or EA Cameroon) engage an experimental methodology expert for this project, rather than pose the question on the forum (I am not such an expert).

It is very difficult to address all of these questions in a broad way, since the answers depend on:

  • The smallest effect size you would hope to observe
  • Your available resources
  • The population within each cluster
  • The total population
  • Your analysis methodology

I'm a little confused about the setup. You say that there are 6 groups— so how would it be possible to have "6 intervention + 3 non-intervention?" Sorry if I'm misunderstanding.

In general, and particularly in this context, it makes sense to split your clusters evenly between treatment and control. This is the setup that minimizes the standard error of the difference between groups. When the variance is larger, smaller effect sizes are difficult to detect. The smaller the number of clusters in your control group, for example, the larger the effect size that you would have to detect in order to make a statistically defensible claim.

With such a small number of clusters, effect sizes would have to be very large in order to be statistically distinguishable from zero. If indeed 50% of the population in these groups is already masked, 6 clusters may not be enough to see an effect.
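To make that concrete, here's a rough sketch of the minimum detectable effect when each village is treated as a single observation in a cluster-level comparison; all the inputs are illustrative:

```python
import numpy as np
from scipy import stats

def mde(k_treat, k_ctrl, sd_cluster, alpha=0.05, power=0.8):
    # Minimum detectable difference in cluster-level means for a
    # two-sample t-test with k_treat + k_ctrl clusters.
    df = k_treat + k_ctrl - 2
    t_alpha = stats.t.ppf(1.0 - alpha / 2.0, df)
    t_power = stats.t.ppf(power, df)
    se = sd_cluster * np.sqrt(1.0 / k_treat + 1.0 / k_ctrl)
    return (t_alpha + t_power) * se

# e.g. 3 treatment vs. 3 control villages, and a cluster-level standard
# deviation of 10 percentage points in mask-wearing rates (a guess):
print(mde(3, 3, 0.10))  # roughly a 30-percentage-point difference
```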

Can we get some clarification on some of your questions? Particularly:

How important, in terms of statistical power is to include all clusters

If you have only 6 to choose from, then the answer is very important. But I'm not sure this is the sense in which you mean this.

How many persons should be observed at each place?

My inclination here is to say "as many as possible." But this is constrained by your resources and your method of observation. Can you say more about the data collection plan?

Comment by mattlerner on Nathan Young's Shortform · 2020-07-23T14:42:06.207Z · score: 8 (7 votes) · EA · GW

I also thought this when I first read that sentence on the site, but I find it difficult (as I'm sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:

"Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That's pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?"

The problem IMHO is that without the contrast, the sentiment doesn't land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it's only in contrast with the way things are typically done that the EA argument is convincing.

Comment by mattlerner on The EA movement is neglecting physical goods · 2020-06-18T20:43:35.346Z · score: 3 (3 votes) · EA · GW

I don't work in physical goods (I'm a data scientist) but I am definitely interested in leveling up my skillset in this way. I'm probably only available for 3 to 4 hours a week to start, but that will probably change soon.

Thanks for making this post! This is an interesting observation.

Comment by mattlerner on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-12T06:54:34.343Z · score: 8 (5 votes) · EA · GW

Thank you for doing this work! I really admire the rigor of this process. I'm really curious to hear how this work is received by (1) other evaluation orgs and (2) mental health experts. Have you received any such feedback so far? Has it been easy to explain? Have you had to defend any particular aspect of it in conversations with outsiders?

I do have one piece of feedback. You have included a data visualization here that, if you'll forgive me for saying so, is trying to tell a story without seeming to care about the listener. There is simply too much going on in the viz for it to be useful.

I think a visualization can be extremely useful here in communicating various aspects of your process and its results, but cramming all of this information into a single pane makes the chart essentially unreadable; there are too many axes that the viewer needs to understand simultaneously.

I'm not sure exactly what you wanted to highlight in the visualization, but if you want to demonstrate the simple correlation between mechanical and intuitive estimates, a simple scatterplot will do, without the extra colors and shapes. On the other hand, if that extra information is substantive, it should really be in separate panes for the sake of comprehensibility. Here's a quick example with your data (direct link to a larger version here):

I don't think this is the best possible version of this chart (I'd guess it's too wide, and opinions differ as to whether all axes should start at 0), but it's an example of how you might tell multiple stories in a slightly more readable way. The linear trend is visible in each plot, it's easier to make out the screening sizes, and I've outlined the axes delineating the four quadrants of each pane in order to highlight the fact that mostly top-scoring programmes on both measures were included in Round 2.
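In case it's useful, here's a minimal matplotlib sketch of the faceted layout I mean; the data, scores, and facet variable below are placeholders rather than HLI's actual numbers:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder programme scores standing in for the real data.
mechanical = rng.uniform(0.0, 10.0, 30)
intuitive = mechanical + rng.normal(0.0, 1.5, 30)
screening = rng.choice([1, 2, 3], 30)  # hypothetical facet variable

fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for ax, s in zip(axes, [1, 2, 3]):
    mask = screening == s
    ax.scatter(mechanical[mask], intuitive[mask])
    ax.axvline(5.0, color="grey", linewidth=0.5)  # quadrant guides
    ax.axhline(5.0, color="grey", linewidth=0.5)
    ax.set_title(f"Screening size {s}")
    ax.set_xlabel("Mechanical score")
axes[0].set_ylabel("Intuitive score")
plt.tight_layout()
plt.show()
```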

Feel free to take this with as much salt as necessary. I'm working from my own experience, which is that communicating data has tended to take just as much work on the communication as it does on the data.

Comment by mattlerner on EA Forum Prize: Winners for April 2020 · 2020-06-08T23:09:27.587Z · score: 12 (8 votes) · EA · GW
while some users reported finding the Prize valuable or motivating, that number wasn’t quite as high as I had been hoping for

It seems like the instrumental thing here is whether users who won prizes found them motivating. Most users will not write prize-winning posts, but if the users who did were at least partially motivated by the prospect of winning one, then the world with the prize is almost certainly better than the counterfactual. More generally, if users who wrote original posts were likelier to endorse the prize than users in general, that is some indication that the prize is somewhat effective. Did you have enough data to determine whether either of these situations obtains?

Comment by mattlerner on I Want To Do Good - an EA puppet mini-musical! · 2020-05-21T16:25:35.642Z · score: 21 (13 votes) · EA · GW

I don't have anything to say except that I loved this, and I'm really happy somebody is starting to present a warmer and fuzzier side of EA.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-05-01T17:13:31.724Z · score: 2 (2 votes) · EA · GW

In general, I'm skeptical about software solutionism, but I wonder if there's a need/appetite for group decision-making tools. While it's unclear exactly what works for helping groups make decisions, it does seem like a structured format could provide value to lots of organizations. Moreover, tools like this could provide valuable information about what works (and doesn't).

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-22T18:21:37.428Z · score: 1 (1 votes) · EA · GW

Proportional representation

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-13T21:16:51.339Z · score: 7 (5 votes) · EA · GW

[Density plots: days from first confirmed case to first school closure and first workplace closure, by electoral system]


The usual caveats apply here: cross-country comparisons are often BS, correlation is not causation, I'm presenting smoothed densities instead of (jagged) histograms, etc, etc...

I've combined data on electoral system design and covid response to start thinking about the possible relationships between electoral system and crisis response. Here's some initial stuff: the gap, in days, between first confirmed cases and first school and workplace closures. Note that n = ~80 for these two datasets, pending some cleaning and hopefully a fuller merge between the different datasets.

To me, the potentially interesting thing here is the apparently lower variability of PR government responses. But I think there's a 75% chance that this is an illusion... there are many more PR governments than others in the dataset, and this may just be an instance of variability decreasing with sample size.

If there's an appetite here for more like this, I'll try and flesh out the analysis with some more instructive stuff, with the predictable criticisms either dismissed or validated.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-09T19:27:21.429Z · score: 1 (1 votes) · EA · GW

Or of course, restrict our sample to a smaller geographic region in the US with more prevalence.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-09T18:59:10.261Z · score: 4 (3 votes) · EA · GW

It seems like there's a significant need right now to identify what the plausible relationship is between mask-wearing and covid19 symptoms. The virus is now widespread enough that a very quick Mechanical Turk survey could provide useful information.

Collect the following:

• Age group (5 categories)

• Wear a mask in public 1 month ago? (y/n)

• If yes to above, type of mask? (bandana/N95+/surgical/cloth/other)

• Sick with covid19 symptoms in past month? (y/n)

• Know anyone in everyday life who tested positive for covid19 in past month? (y/n)

• Postal code (for pop. density info)

Based on figures from this Gallup piece, a back-of-the-envelope calculation says we could get usable results from surveying 20,000 Americans, but we could work with a much smaller sample if we survey in a country where the virus is more prevalent.
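Roughly how that back-of-the-envelope goes, with illustrative prevalence and effect numbers standing in for the Gallup figures:

```python
from scipy import stats

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    # Standard two-proportion sample size formula.
    z_a = stats.norm.ppf(1.0 - alpha / 2.0)
    z_b = stats.norm.ppf(power)
    var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Guess: 5% of non-wearers vs. 3.5% of wearers report symptoms.
n = n_per_group(0.05, 0.035)
# If only ~20% of respondents wear masks, scale up the total survey size:
print(n, n / 0.2)  # per-group n and a rough total, same order as 20,000
```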

Comment by mattlerner on What is the average EA salary? · 2020-04-05T19:14:46.168Z · score: 1 (1 votes) · EA · GW

I'd love to see some more information about the distribution (e.g. percentiles, change since previous years, breakdown by organization size/type or by role). Is it possible to provide that while maintaining anonymity?

Comment by mattlerner on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T20:04:54.804Z · score: 11 (8 votes) · EA · GW

This is a great post and I, like @rohinmshah, feel that simply the introduction of this general class of discussion is of value to the community.

With respect to expert surveys, I am somewhat surprised that there isn't someone in the EA community already pursuing this avenue in earnest. I think that it's firmly within the wheelhouse of the community's larger knowledge-building project to conduct something like the IGM experts panel across a variety of fields. I think, first, that this sort of thing is direly needed in the world at large and could have considerable direct positive effects, but secondly that it could have a number of virtues for the EA community:

  • Improve efficiency of additional research: Knowing what the expert consensus is on a given topic will save some nontrivial percentage of time when starting a literature review, and help researchers contextualize papers that they find over the course of the review. Expert consensus is a good starting place for a lit review, and surveys will save time and reduce uncertainty in that phase.
  • Let EAs know where we stand relative to the expert consensus: when we explore topics like growth as a cause area, we need to be able to (1) have a quick reference to the expert consensus at vital pivots in a conversation (e.g. do structural adjustments work?) and (2) identify clearly where EA views depart from the consensus.
  • Provide a basis for argument to policymakers and philanthropists: Appeals to authority are powerful persuasive mechanisms outside the EA community. Being able to fall back on expert consensus in any range of issues can be a powerful obstacle or motivator, depending on the issue. Here's an example: governments around the world continue to locally relitigate conversations about the degree to which electronic voting is safe, desirable, secure or feasible. Security researchers have a pretty solid consensus on these questions-- that consensus should be available to these governments and those of us who seek to influence them.
  • Demonstrate to those outside the community that EAs are directly linked to the mainstream research community: This is a legitimacy issue. Regardless of whether the EA community ends up being broader or narrower, we are often insisting to some degree on a new way of doing things, so we need to be able to demonstrate to newcomers and outsiders that we are not simply starting from scratch.
  • Establish continued relationships with experts across a variety of fields: Repeated deployment of these expert surveys affords opportunities for contact with experts who can be integrated into projects, sought for advice, or deployed (in the best case scenario) as voices on behalf of sensible policies or interventions.
  • Identify funding opportunities for further research or for novel epistemic avenues like the adversarial collaborations mentioned in the initial post: Expert surveys will reveal areas where there is no consensus. Although consensus can be and sometimes is wrong, areas where there is considerable disagreement seem like obvious avenues for further exploration. Where issues have a direct bearing on human wellbeing, uncovering a relative lack of conclusive research seems like a cause area in and of itself.
  • Finally, the question-finding and -constructing process is itself an important activity that requires expert input. Identifying the key questions to ask experts is important research in its own right, and can result in constructive engagements with experts and others.

Comment by mattlerner on Thoughts on electoral reform · 2020-02-20T21:31:21.154Z · score: 6 (5 votes) · EA · GW

I agree that EAs should continue investigating and possibly advocating different voting methods, and I strongly agree that electoral reform writ large should be part of the "EA portfolio."

I don't think EAs (qua EAs, as opposed to individuals concerned as a matter of principle with having their electoral preferences correctly represented) should advocate for alternative voting methods in isolation, even though essentially all of the options are conceptually superior to FPTP/plurality voting.

This is because a democratic system is not the same as a utility-maximizing one. The various criteria used to evaluate voting systems in social choice theory are, generally speaking, formal representations of widely shared intuitions about how individuals' preferences should be aggregated or, more loosely, about how democratic governments should function.

Obviously, the only preferences voting systems aggregate are those over the topic being voted on. But voters have preferences over lots of other areas as well, and the choice of voting system bears on only two of them: (a) their preferences over the choice in question and (b) their meta-preferences over how preferences are aggregated (e.g. how democratic their society is).

As others in this thread have pointed out, individuals' electoral preferences cannot be convincingly said to represent their preferences over all of the other areas their choice will influence.

So an individual gains utility from a switch of voting systems if and only if the utility gained through superior representation of their preferences exceeds the utility lost in other areas. I don't think this is a high bar to clear, but I do think that, beyond the contrast between broadly democratic and non-democratic systems, we have next to no good information about the relationship between electoral systems and non-electoral outcomes.

In the simplest terms possible: we know that some voting systems are better than others when it comes to meeting our intuitive conception of democratic government. But we're concerned about people's welfare beyond just having people's electoral preferences represented, and we don't know what the relationship between these things is.

It is totally possible that voting systems that violate the Condorcet criterion also dominate systems that meet the criterion with respect to social welfare. We simply don't know.
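
A toy simulation makes the point concrete. With voters drawn at random, the Condorcet winner (the candidate who beats everyone head-to-head) regularly differs from the candidate who maximizes total utility, because pairwise majorities ignore preference intensity. This shows only that the criteria can come apart, not that any real-world system is worse:

```python
import numpy as np

rng = np.random.default_rng(1)

def condorcet_winner(utils):
    """Candidate who beats every other head-to-head, or None if there isn't one."""
    n_voters, n_cands = utils.shape
    for c in range(n_cands):
        if all(c == d or (utils[:, c] > utils[:, d]).sum() > n_voters / 2
               for d in range(n_cands)):
            return c
    return None

mismatches, decided = 0, 0
for _ in range(2_000):
    utils = rng.normal(size=(101, 3))  # 101 voters, 3 candidates
    cw = condorcet_winner(utils)
    if cw is None:                     # skip elections with Condorcet cycles
        continue
    decided += 1
    if cw != utils.sum(axis=0).argmax():  # utilitarian-best candidate
        mismatches += 1

print(f"Condorcet winner != welfare maximizer in {mismatches}/{decided} elections")
```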

It's also not clear to what degree different voting systems induce a closer relationship between individuals' electoral preferences and their preferences over non-electoral topics, e.g. by incentivizing or disincentivizing voter education.

To reiterate, I strongly support the increased interest in approval voting and RCV that we're seeing, and I voted for RCV here in NYC. I want to see my own electoral preferences represented more accurately, and I don't think there is a big risk that (at least here) my other preferences will suffer. But as consequentialists, I think we are on very uncertain ground.

Comment by mattlerner on What posts you are planning on writing? · 2020-02-02T20:26:54.715Z · score: 3 (3 votes) · EA · GW

I'm doing a lit review on the effectiveness of lobbying and on some of the relevant theoretical background that I'm planning on posting when I'm done. I feel like this is potentially very relevant but I'm not sure if people will be interested.

Comment by mattlerner on Call for beta-testers for the EA Pen Pals Project! · 2020-01-27T18:44:25.429Z · score: 1 (1 votes) · EA · GW

Just want to follow up to acknowledge that I see that you're already conducting a survey and that I'm proposing you add a set of questions about personal beliefs/stances/positions.

Comment by mattlerner on Call for beta-testers for the EA Pen Pals Project! · 2020-01-27T18:40:36.533Z · score: 1 (1 votes) · EA · GW

This is a really cool project! Just want to plug this as a really good opportunity to rigorously study how EA ideas spread: a quick 5-minute pre- and post-survey asking participants Likert-style questions about their positions on various EA-relevant topics and perhaps their style of argument/conversation would be potentially high-value here.

Since assignment will be randomized, there's a real opportunity here to draw causal conclusions about how ideas spread, even if the external validity will be largely restricted to the EA population.
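
To sketch what the analysis could look like (file and column names hypothetical): because assignment is randomized, a simple baseline-adjusted regression recovers the causal effect, and adjusting for the pre-survey score buys extra precision over a raw post-survey comparison.

```python
# Hypothetical column names; `treated` is the randomized pen-pal assignment.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("penpal_survey.csv")  # treated (0/1), pre_score, post_score

model = smf.ols("post_score ~ treated + pre_score", data=df).fit()
print(model.params["treated"])  # estimated causal effect on post-survey score
```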

Comment by mattlerner on Growth and the case against randomista development · 2020-01-26T21:39:03.531Z · score: 1 (1 votes) · EA · GW

Thanks for your response! I still have some confusion, though it's somewhat tangential. In your CBA, you use an NPV figure of $3572bn as the output gain from growth, apparently derived from India's 1993 and 2002 growth episodes.

The CBA then computes the EV of the GDP increase as 0.5 * 0.1 * 3572 ≈ $178.6bn. You acknowledge elsewhere in your writeup that efforts to increase GDP entail some risk of harm (and likewise with the randomista approach), so my confusion lies with the omission of this possible harm from the EV calculation.

Even if the probability that a think tank induces a growth episode—e.g. the probability that a think tank influences economic policy in country X according to its own recommendations—is 10%, then there is still obviously a probability distribution over the possible influence that successfully implemented think tank recommendations would have. This should include possible harms and their attendant likelihoods, right?
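
To make this concrete, here's a stylized version of the calculation I have in mind. Everything besides the $3572bn figure and the post's 0.5 and 0.1 discounts is a placeholder, not an estimate:

```python
# Stylized EV with a harm term. Placeholder numbers, not estimates.
p_discount = 0.5    # the post's additional 50% discount
p_influence = 0.1   # think tank's recommendations get implemented
p_benefit = 0.8     # implemented advice works roughly as hoped (assumed)
p_harm = 0.2        # implemented advice backfires (assumed)
npv_gain = 3572     # $bn, from Pritchett via the original post
npv_harm = -500     # $bn, placeholder for a growth deceleration

ev = p_discount * p_influence * (p_benefit * npv_gain + p_harm * npv_harm)
print(f"EV with harm term: ${ev:.0f}bn "
      f"(vs ${p_discount * p_influence * npv_gain:.0f}bn without)")
```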

I recognize that the $3572bn figure comes directly from Pritchett as part of an assessment of the Indian experience, but it's not obvious to me that this number encapsulates the range of possibilities for a successful (in the sense of being implemented) intervention. I may be missing something, but it seems to me that a (perhaps only slightly) more rigorous CBA would itself have to include an expected value of success incorporating possible benefits and harms for both the growth and randomista approaches, along the lines of your spreadsheet row reading "NPV (@ 5%) of output loss from growth deceleration relative to counter-factual growth."

I understand that what you're envisioning is a sort of high-confidence approach to growth advocacy: target only countries where improvements are mostly obvious, and then only with the most robustly accepted recommendations. I still think there is a risk of harm, and that the CBA may not capture a meaningful qualitative difference between the growth and randomista approaches. In principle, at least, the use of localized, small-scale RCTs to test development programs before they are deployed limits large-scale harm and (in my view) pushes the mass of the distribution of possible outcomes largely above zero. No comparable safeguard against large harms exists -- or is even possible -- in the case of economy-wide growth recommendations. Pro-growth recommendations by economists have not been uniformly productive in the past and (I think) are unlikely to be so in the future.

I still favor the approach you suggest but, given the state of the field of growth economics -- and the failure of GDP per capita to capture many welfare-relevant variables, which you cite at the end of the writeup -- I'd be keen to see more highly quantified conversation around possible harms.


Comment by mattlerner on Growth and the case against randomista development · 2020-01-25T01:31:33.812Z · score: 9 (3 votes) · EA · GW

Thanks for writing this! I am coming somewhat late to the party, but I wanted to add my support for what you have both written here. I back the concerted research effort you propose and think it reasonably likely to have the benefits you describe.

I was digging through the Pritchett paper in hopes of doing my own analysis, and I do have a question: how did you calculate the median figure for Vietnam that you reference in section 4 ($6,914 GDP per capita)? I've been looking at the Pritchett paper and can't quite figure it out. It seems close to the median absolute growth in $PPP presented in Pritchett's Table 4, but I imagine that's not right, since Table 4 only lists the top 20 growth episodes from the full set of about 300. When I look at those figures in Appendix A, though, it seems like the median growth episode calculated using PRM (without reference to dollar size) is somewhere around Ecuador's negative growth in 1978, which doesn't seem like it would line up even with the conversion to $PPP.

EDIT:

I see that you've written that Vietnam/89 is the median growth episode "to be affected by a think tank," and a little research reveals that Vietnam began a concerted economic liberalization in 1986, so perhaps you have a secondary subset of growth episodes that you believe were affected by think tanks?

I can also sort of see a case for selecting the median from Table 4 of the top 20 but that seems strange since (a) the cutoff is arbitrary and (b) it doesn't factor in the risk of harm from a think tank-influenced growth episode.


Comment by mattlerner on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-16T19:02:20.275Z · score: 11 (3 votes) · EA · GW

Thanks for your response. I think I should make clear (as I really didn't do in my initial post) that I mean my comment more broadly: when EAs think about doing ballot initiatives, they should strongly consider doing public opinion polling. In a setting where an EA advocacy group is trying to select (a) which of X effective policies to advocate and (b) in which of Y locales to advocate it, it seems (to me, at least) that polling is cost-effective, since choosing among a potentially large set of X*Y independent options is a nontrivial problem that requires a rigorous approach.

In your setting, however (making the binary choice of whether or not to advocate for policy P in location L), I understand why you chose the strategy you did. Your point about the relative cost-effectiveness of talking to local politicians versus conducting an (arguably) expensive poll is well-taken. I don't have any idea how Swiss referenda work and I conclude from your comment that voters largely follow the lead of their representatives.

I'm not sure how you're thinking about future efforts along these lines, but if you're planning on selecting from a longer list of policies and cantons, I think cheap polling could rival your legislative strategy on cost-effectiveness, at least as a guide for initial research investment.


Comment by mattlerner on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T16:24:38.429Z · score: 4 (4 votes) · EA · GW

Fantastic work! In your post introducing this initiative you wrote that the base rate for passage of ballot initiatives was 11%. A conservative reading of the data here (taking the low value of $20m for development funding raised) seems to indicate a 100:1 return on investment. Applying the base rate, that implies roughly $10 in effective development aid for every $1 spent on advocacy, in expectation. If the development aid is effectively spent, money spent on an initiative like this might be ten times as effective in expectation as money donated directly to a top-rated charity. This assumes, of course, that the base rate is accurate.
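
In numbers, using only the figures above:

```python
realized_roi = 100  # the ~100:1 return implied by the low-end $20m figure
base_rate = 0.11    # prior probability a ballot initiative passes
print(f"~${base_rate * realized_roi:.0f} of aid per $1 of advocacy, ex ante")
```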

In that initial post, you had an exchange with Stefan Schubert about the relevance of your assumed base rate. You discussed the importance of polling at that point but it's not clear to me where you left off.

This success really seems to highlight the importance of public opinion polling here. The value of information in this domain is very high, since you're trying to identify the avenue which will provide the greatest leverage. Choosing the wrong avenue has no value, and potentially even minor reputational costs for your organization or for EA in general. Choosing the right avenue has huge upsides.

Public opinion polling seems crucial to this end. In this scenario, prior polling might have allowed you to identify a reasonable figure beforehand (avoiding the $87 million overreach). More importantly, though (if I understand the procedure correctly), it might have enabled you to avoid the counterproposal process and to pinpoint an optimal figure to ask for-- perhaps one higher than the one you ultimately got.

I don't want to diminish the achievement here, which I think is huge; I just want to point out that extremely useful information for this effort can be retrieved from the public at relatively low cost. In the future, this information can be used to reduce the uncertainty around efforts to fund ballot proposals and increase the expected value of these efforts by lowering the probability of failure in expectation.

Comment by mattlerner on Personal Data for analysing people's opinions on EA issues · 2020-01-12T15:06:05.958Z · score: 10 (7 votes) · EA · GW

I think that it’s unnecessary to go to such great (and risky) lengths to find out what the public believes with respect to issues relevant to EAs. A well-constructed survey conducted via Mechanical Turk, for example, would (in conjunction with a technique like multilevel regression and poststratification) yield very accurate estimates of public opinion at various arbitrary levels of geographic aggregation. I’d be supportive of this and would be interested in helping to design and/or fund such a survey.
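
To gesture at what I mean, here's a toy version of the poststratification step. A real MRP analysis would fit a multilevel model (e.g. in PyMC or Stan); a plain logistic regression stands in for it below, and all file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("mturk_survey.csv")  # age_group, state, agrees (0/1)
census = pd.read_csv("census_cells.csv")  # age_group, state, n_people

# Step 1 ("regression"): model opinion as a function of demographics.
model = smf.logit("agrees ~ C(age_group) + C(state)", data=survey).fit()

# Step 2 ("poststratification"): predict opinion in each demographic cell,
# then weight the predictions by how common each cell is in the census.
census["p_agree"] = model.predict(census)
state_estimates = (
    census.groupby("state")
          .apply(lambda g: (g.p_agree * g.n_people).sum() / g.n_people.sum())
)
print(state_estimates.sort_values().head())
```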

Comment by mattlerner on Has pledging 10% made meeting other financial goals substantially more difficult? · 2020-01-09T14:22:30.383Z · score: 5 (5 votes) · EA · GW

Since I started donating 10% (not very long ago), the only part of my discretionary spending that has taken a hit is the "dumb" stuff: nice new clothes, fancy meals, just overall waste. It turns out that stuff added up to 10%. But YMMV.

If you’re worried, and I think it’s reasonable to be, why don’t you start by pledging 1% and notching it up bit by bit? There’s no need to rush to take the 10% pledge. There is nothing special about that number and you need to figure out what works for you.

Comment by mattlerner on The Center for Election Science Year End EA Appeal · 2020-01-02T02:04:03.333Z · score: 2 (4 votes) · EA · GW

Given standard models of rational voter ignorance (and rational irrationality, etc.), this shouldn't be surprising. Oversimplifying for a moment, the electorate's middle are in all likelihood systematically mistaken about the sort of policies that would advance their interests; and when you pair these voters with political leaders who are incentivized to pander, we have a recipe for occasional disaster. I see no reason why this wouldn't occur in a system with approval voting in the same way that it occurs in our current system.

I can think of one reason: rational ignorance is partially a consequence of the voting procedure used. People have less of an incentive to be ignorant when their votes matter more, as they would with approval voting. I don't have a strong stance on this, but I think it's important to recognize that studies about voter ignorance are not yielding evidence of an immutable characteristic of citizens; the situation is actually heavily contingent.

In the first few pages of The Myth of the Rational Voter, Bryan Caplan implicitly makes the case that voter ignorance isn't a huge deal as long as errors are symmetric: ignorant voters on both sides of an issue cancel each other out, and the election is decided by the informed voters, who should be on the "right" side in expectation (the "miracle of aggregation"). Caplan's claim is that systematic bias across the population produces "wrong" answers.

My point in bringing this up is just that the existence of large numbers of ignorant voters doesn't have to be a major issue: large elections are decided by relatively small groups. Different voting procedures have very different ramifications for the composition of these small groups.
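
Caplan's symmetry point is easy to see in a toy Monte Carlo (all numbers made up): when ignorant voters err symmetrically, a small informed bloc decides the election; a one-point systematic bias swamps it.

```python
import numpy as np

rng = np.random.default_rng(2)

def right_side_wins(p_right, n_ignorant=100_000, n_informed=1_000, sims=2_000):
    """Share of simulated elections won by the 'right' candidate. Ignorant
    voters pick it with probability p_right; informed voters always do."""
    ignorant_right = rng.binomial(n_ignorant, p_right, size=sims)
    total_right = ignorant_right + n_informed
    return (total_right > (n_ignorant + n_informed) / 2).mean()

print("symmetric errors (p=0.50):", right_side_wins(0.50))        # ~1.0
print("small systematic bias (p=0.49):", right_side_wins(0.49))   # ~0.0
```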

Comment by mattlerner on Let’s Fund: annual review / fundraising / hiring / AMA · 2019-12-31T19:59:42.315Z · score: 16 (9 votes) · EA · GW

Thanks for the writeup!

If the recent Bill Gates documentary on Netflix is to be believed, Gates first became seriously aware of the problem of diarrhea in the developing world thanks to a 1998 column by Nicholas Kristof. It's hard to assess the counterfactual here (would Gates have encountered the issue in a different context? Would he have taken the steps he ultimately did after reading the Kristof piece?), but it seems plausible that Kristof's article constitutes a cost-effective intervention in its own right (albeit not a particularly targeted one).

I bring this up because I'm intrigued by the viral coverage of your clean energy research. It's not possible to quantify the impact of an article like this in any realistic way, but perhaps we can agree that any plausible distribution of beliefs about its value puts nearly all of its mass above zero.

Future Perfect being what it is, it's obviously the case that Vox constitutes an unusually receptive channel for EA-adjacent research. But I'm curious whether you consider the wide propagation of your research in the news media a "risky and very effective" project, and whether your research products have been intentionally structured toward this end. If you have some takeaways from your big success so far, it could be very helpful to post them here -- widely adopted tweaks that make research propagate more effectively through the media are marginal improvements with potentially very high value.