Informational Lobbying: Theory and Effectiveness 2020-07-30T22:02:15.200Z · score: 48 (19 votes)
Matt_Lerner's Shortform 2019-12-20T18:11:47.835Z · score: 2 (1 votes)
Kotlikoff et al., 'Making Carbon Taxation a Generational Win Win' 2019-11-24T16:27:50.351Z · score: 14 (7 votes)


Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-12T20:22:06.910Z · score: 1 (1 votes) · EA · GW

Points all well-taken. I'd love to share with FP's journal club, though I hasten to add that I'm still making edits and modifications based on your feedback, @smclare's, and others.

With respect to uncertainty in the CE calculation, my thinking was (am I making a dumb mistake here?) that because

E[XY] = E[X]E[Y] + Cov(X, Y), and Var(XY) ≈ E[Y]²Var(X) + E[X]²Var(Y) + 2E[X]E[Y]Cov(X, Y), the covariance term vanishes only when the variables are uncorrelated. So if covariance is nonzero (and positive), then (I think?) the variance of the product of two correlated random variables should be bigger than in the uncorrelated counterfactual.
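Here's a quick simulation sketch of what I mean (the numbers are purely illustrative, not from the actual CE model): two positively correlated random variables built from a shared component, compared against independent counterparts with the same marginal means and variances.

```python
import random
import statistics

rng = random.Random(0)
N = 100_000

def product_variance(correlated):
    """Variance of X*Y, where X and Y optionally share a common component."""
    products = []
    for _ in range(N):
        shared = rng.gauss(0, 1)
        # Means 10 and 5; Var(X) = Var(Y) = 2 either way; Cov = 1 if correlated.
        x = 10 + (shared if correlated else rng.gauss(0, 1)) + rng.gauss(0, 1)
        y = 5 + (shared if correlated else rng.gauss(0, 1)) + rng.gauss(0, 1)
        products.append(x * y)
    return statistics.pvariance(products)

var_correlated = product_variance(True)    # Cov(X, Y) = 1 by construction
var_independent = product_variance(False)  # Cov(X, Y) = 0, same marginals
print(var_correlated > var_independent)
```

With these toy parameters the exact values work out to roughly 355 vs. 254, so the correlated product's variance comes out visibly larger, which matches the intuition above.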

To me, the main value of the CE model was in the sensitivity analysis - working through it really helped me think about what "effective lobbying" would have to be able to do, and where the utility would lie in doing so. I think if it doesn't serve this purpose for the reader, then I agree this document would have been better off without the model altogether.

Thanks for your thoughts on money in politics. Vis-à-vis (1), I have to think more about this, but I do definitely view the topic a little differently. For instance, it's not obvious to me that economic arguments and political representation do the necessary work of regulatory capture. Boeing is in Washington and Northrop Grumman is in Virginia. It seems clear that the representatives of the relevant districts are prepared to argue for earmarks that will benefit their constituents... but these companies are still in direct competition, and it seems like there's still strategic benefit to each in getting the rest of Congress on their side. I might misunderstand; maybe we're reaching the limits of asynchronous discussion on this topic.

Vis-à-vis (2), the "inside view" I was talking about was actually yours, as someone who thinks about this professionally, so thank you for your thoughts!

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-11T16:54:38.889Z · score: 1 (1 votes) · EA · GW

I'm replying again here to note that I've struck the salience point from my conclusions. I've noted why up top. I now have a lot of uncertainty about whether this is the case or not, and don't stand by my suggestion that salience is a good guide to resource allocation.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-09T18:06:44.756Z · score: 1 (1 votes) · EA · GW

Thanks for your response!

With respect to your first point, I'm considering striking this conclusion upon reflection - see my discussion with @jackva elsewhere in this thread. In any case, my confidence level here is certainly too high given the evidence, and I really appreciate your close attention to this.

With respect to your second point, I don't mean to imply that the lack of organized opposition is the only thing that justifies lobbying expenditure, and think my wording is sloppy here as well. I used "lack of an organized opposition" to refer broadly to oppositions that are simply doing less of the (ostensibly) effective things — lower "organizational strength" as in Caldeira and Wright (1998), number of groups, as in Wright (1990), or simply lower relative expenditure, as in Ludema, Mayda, and Mishra (2018).

The evidence in Baumgartner et al that you reference about the apparent association between lack of countermobilization and success is also related to @jackva's concern about my underemphasis on potential lobbying equilibria here. On the one hand, I think this is clearly evidence in favor of the hypothesis that there is some efficiency in the market for lobbying: perhaps most lobbyists have a good idea of which efforts succeed, and don't bother to countermobilize against less sophisticated opposition. On the other hand, lobbying is a sequential game, and, since the base rate for policy enactment is so low to start with, it makes sense that opposition wouldn't appear until there's a more significant threat.

EDIT: I've actually struck the first bit, with a note. I wanted to add one more thing, which is that I don't know how much you've adjusted your prior on lobbying, but I wouldn't say this has made me "optimistic" about lobbying. The core thing I've come away with is that lobbying for policy change is extraordinarily unlikely to succeed, but that marginal changes to increase the probability of success are (1) plausible, based on the research and (2) potentially cost-effective, based on the high value of some policies.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-08T04:00:06.705Z · score: 1 (1 votes) · EA · GW

I like this spreadsheet idea and think I may kick it off (if you haven't already done so!)

I took the project on because I got interested in this topic, went looking for this, couldn't find it, and decided to make it so that it might be useful to others. I wasn't feeling very useful in my day job, so it was easy to stay motivated to spend time on this for a while. I tend to be most interested in generalizable or flexible approaches to improving welfare across different domains, and this seemed like it might be one of those.

Some areas I'm thinking about exploring. These are pretty rough thoughts:

  • Some more exploration of strategies for ameliorating child abuse in light of the well-known ACEs Study. GiveWell and RandomEA have both explored Nurse-Family Partnerships. This problem is just so huge in terms of people affected (and in terms of second-order effects) that I think it's worth exploring a lot more. I'm particularly interested in focusing on child sexual abuse.
  • Aggregating potentially cost-effective avenues to improve institutional performance. I'm curious about thinking at a higher level of abstraction than institutional decision-making. It seems worthwhile to put together the existing cross-disciplinary evidence on the question: what steps outside of those explicitly focusing on rationality and decision-making can companies/nonprofits/government agencies take to increase the probability that they make good decisions? A good example of one such step is in the apparent evidence that intellectually diverse teams make better decisions.
  • Long-term cost-effectiveness of stress reduction for pregnant women (with potential effects on infant mortality, maternal health, and long-term outcomes like brain development and violence).
  • Review of recent innovations that seem like they might have potential for expediting scientific progress (like grant lotteries).
Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-08-08T02:59:51.920Z · score: 7 (3 votes) · EA · GW

Hello and thank you for your response!

Your criticism of the cost-effectiveness model is fair. Thematically, I guess it does contradict the spirit of my prior analysis in that it avoids the concerns of strategic choice. I was trying to be as general as possible, and to err on the side of greater uncertainty by not including any assumptions about correlatedness, though it occurs to me now that making such an assumption (e.g. a correlation between expenditure and likelihood of success) would actually have increased the variance of the final estimate, which would have been more in line with my goals. When I have time, I may comment here with an updated CEA.

I also agree that the only useful way to do this analysis is, as you've described, with a suite of models for different scenarios. I don't have a defense for not having done this beyond my own capacity constraints, though I hope it's more useful to have included the flawed model than not to have one at all (what do you think?).

I also think that the conclusion which, I believe, mostly draws from Baumgartner ("(80%) Well-resourced interest groups are no more or less likely to achieve policy success, in general, than their less well-resourced opponents") is quite surprising and I would be curious to find out why you think that / in how far you trust that conclusion.

Thanks for this, in particular. I think your surprise stems from a lack of clarity on my part. The reason I have high confidence in this conclusion is that it's a much weaker claim than it might seem. It does stem primarily from Baumgartner et al and from Burstein and Linton (2002). The claim here is that resource-rich groups are no more or less likely to get what they want--holding all else equal, including absolute expenditure and the spending differential between groups and their opponents.

There are three types of claim that are closely related:
1) Groups that spend more relative to their opposition on a given policy are likelier to win
2) Groups that spend more in absolute terms are likelier to win
3) Groups that have more money to spend are likelier to win

I found fairly consistent evidence for (1), some evidence for (2), and no real evidence for (3). It's not obvious to me that (3) should be the case irrespective of (1): why would resource-rich groups succeed in lobbying if they deploy those resources poorly? It seems like the success of resource-rich groups is dependent upon (1), and that (3) should not hold in isolation, unmediated by (1). Although Baumgartner et al conduct an observational study, the size of their (to me, convincingly representative) sample suggests that if such an effect exists, it should be observable as a correlation in their analysis. The association they observe is pretty small.

I have to say, though, that in writing this comment, my confidence in this conclusion has eased up a bit, so I'm curious to hear your response. I also think that since Baumgartner et al do find a small effect, I probably overstate the case here.

Baumgartner et al offer a theoretical take on this: "...organizations rarely lobby alone. Citizen groups, like others, typically participate in policy debates alongside other actors of many types who share the same goals. For every citizen group opposing an action by a given industrial group, for example, there may also be an ally coming from a competing industry with which the group can join forces" (p.12). So it's important to recognize that the finding here is about individual parties, not "sides" or coalitions advocating a given policy.

Finally, I'm curious to hear your take on the two potential money-in-politics explanations you mentioned. I've never found (1) particularly convincing—it's not clear to me that firms and their employees have the same interests, or that (if they do) the marginal value of regulatory capture isn't still high. But I agree that I underemphasized (2) and think it would be useful to have in this thread the "inside view" on lobbying equilibria from someone who works in the field.

Comment by mattlerner on Informational Lobbying: Theory and Effectiveness · 2020-07-31T19:27:54.309Z · score: 4 (2 votes) · EA · GW

Thanks for your response!

(1) I spent something like 100 hours on this over the course of several months. I think I could have cut this by something like 30-40% if I'd been a little bit more attentive to the scope of the research. I decided on the scope (assessing the effectiveness of national-level legislative lobbying in the U.S.) at the beginning of the project, but I repeatedly wound up off track, pursuing lines of research outside of what I'd decided to focus on. I also spent a good chunk of time on the GitHub repo with the setup for analyzing lobbying data, which wasn't directly related to the lit review but which I felt served the goal of presenting this as a foundation for further research.

If I had 40 more hours, I'd intentionally pursue an expanded scope. In particular, I'd want to fully review the research on lobbying of (a) regulatory agencies and (b) state and local governments. I explicitly excluded studies along those lines, some of which were very interesting.

(2) Thanks for asking for clarification on this. Baumgartner et al mean that it takes a long time for policy change to be observed on any given issue. After starting to pursue a policy goal, lobbyists are more likely to see success after four years than after two.

Baumgartner et al include a chapter that is mostly critical of the incrementalist idea of policy change, which they trace to Charles Lindblom's 1959 article The Science of "Muddling Through". Incrementalism is tied to Herbert Simon's idea of "bounded rationality." Broadly, the incrementalist idea is that policymakers face a broad universe of possible policy options, and in order to reduce the landscape to a manageable set, they choose from only the most available options, e.g. those closest to the status quo: "incremental" changes.

Frank Baumgartner and Bryan Jones are now well known for their theory of "punctuated equilibrium." This is a partial alternative to incrementalism which uses the analogy of friction to understand policy change. Basically: pressure builds on an issue over a period of time, during which no change occurs. Once the pressure is overwhelming, policy shifts in a major way.

I say that punctuated equilibrium is a "partial" alternative because Baumgartner and Jones actually collected data that seems to demonstrate that policy change follows a steeply peaked, fat-tailed distribution. Their overall takeaway is that very small changes are overwhelmingly common, moderate changes are relatively uncommon, and very large changes are surprisingly common. To come back to your question, Baumgartner et al might say that although most policy change is incremental—like year-to-year changes in agency budgets—meaningful policy change happens in a big way, all of a sudden.
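A toy simulation makes that shape concrete (the parameters here are made up, not from Baumgartner and Jones's data): mixing mostly-tiny changes with occasional large jumps produces exactly this steeply peaked, fat-tailed pattern.

```python
import random
import statistics

rng = random.Random(42)
N = 100_000

# "Friction" toy model: most periods see only tiny drift, but pent-up
# pressure occasionally (5% of periods) releases as a large jump.
changes = [rng.gauss(0, 0.5) if rng.random() > 0.05 else rng.gauss(0, 10)
           for _ in range(N)]

mean = statistics.fmean(changes)
sd = statistics.pstdev(changes)
# Excess kurtosis: 0 for a normal distribution, strongly positive for a
# peaked, fat-tailed one like this mixture.
kurtosis = statistics.fmean(((c - mean) / sd) ** 4 for c in changes) - 3
print(round(kurtosis, 1))
```

The excess kurtosis comes out far above zero: almost all mass near the peak, with heavy tails, and relatively little in between.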

(3) I agree with you. I think some of my suggested policies are not likely to be those most effectively advocated for, and I included them just to give a flavor of the types of things we might care about lobbying for. Coming up with more practicable ideas is, I think, a much bigger, much longer-term project.

I also think that although lobbying for the status quo is more effective all other things being equal, it may not be the best use of EA resources to focus exclusively on that side of things. That's because (per the counteractive lobbying theory) on many issues there are latent interests that will arise to lobby against harmful proposals. It's hard to identify beforehand which proposals will stimulate this opposition, so there's a lot of prior uncertainty as to whether funding opposition to policy change is marginally useful in expectation.

(4) There are a lot of takes on the Tullock paradox, but I'll present two broad possible explanations.

  • Explanation A: Lobbying is basically ineffective, and the reason we don't see more lobbying is that most organizations recognize its ineffectiveness.
  • Explanation B: Lobbying is highly effective, and the reason we don't see more lobbying is that relatively small expenditures can exert enormous amounts of leverage.

Given the evidence here, I'm starting to be a lot more inclined toward Explanation B. I think it's demonstrably not the case, as you have noted with respect to the Clean Air Task Force, that organizations that lobby are wasting their money. For both altruistic and self-interested interest groups, the rewards to be captured are very large, and they make it worth the risk of wasting money. Alexander, Scholz, and Mazza (2009), for example, find a 22,000% return on investment.

If Explanation B holds, then the question is really just why the market for policy isn't efficient. Why hasn't the price of lobbying been bid up to the value of the rewards to be captured? I think it seems likely that this is down to multiple layers of information asymmetry (between legislators and their staffs, between these staffers and lobbyists, between lobbyists and their clients, etc.), which create multiple layers of uncertainty and drive the expected value of lobbying down from the standpoint of those in a position to purchase it.
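As a back-of-the-envelope illustration of that last point (all numbers below are hypothetical): stacking even a few layers of uncertainty multiplies down the buyer's expected value of a very large prize.

```python
# Toy illustration with made-up figures: a large policy prize discounted
# through several layers of uncertainty, one per principal-agent link
# (legislator/staff, staff/lobbyist, lobbyist/client, etc.).
prize = 100_000_000            # value of the policy to the client ($)
cost = 1_000_000               # proposed lobbying expenditure ($)
layers = [0.3, 0.5, 0.4, 0.5]  # subjective P(success) at each layer

p_total = 1.0
for p in layers:
    p_total *= p               # layers compound multiplicatively

expected_value = p_total * prize
print(round(p_total, 2), expected_value > cost)
```

Four moderately pessimistic layers shrink the compound probability to 3%, so a nominally enormous prize starts to look only marginally worth purchasing, which is one way the price of lobbying could stay far below the value of the rewards.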

I agree with you that a normal distribution is probably not the best choice to model the expected incremental change in probability. I felt like, given my CI for this figure and my sense that values closer to 0% and values closer to 5% were each less likely than values in the middle of that range, this served my purposes here - but please take my code and modify as you see fit!
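For what it's worth, here's one sketch of an alternative (the parameters are hypothetical, not the ones in my model): a scaled Beta keeps the hump in the middle of the range while respecting the 0%–5% bounds, which a normal distribution can't do.

```python
import random

rng = random.Random(1)
N = 10_000

# Beta(2, 2) is hump-shaped with its mode at the center of (0, 1);
# scaling by 0.05 maps it onto the 0%-5% range for the incremental
# change in probability of success, with no mass below 0 or above 5%.
samples = [0.05 * rng.betavariate(2, 2) for _ in range(N)]

print(min(samples) >= 0.0 and max(samples) <= 0.05)
```

Unlike a normal centered in the interval, every draw respects the bounds by construction, so there's no need to truncate or discard samples.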

Perhaps we want to start with a low prior chance of policy success, and then update way up or down based on which policy we're working on. Do you think we'd be able to identify highly-likely policies in practice?

I don't know. I think it's worth investigating. It seems like, given an already-existing basket of policies we'd be interested in advocating for, we can make lobbying more cost-effective just by allocating more resources to (e.g.) issues that are less salient to the public.

I have a sense that lobbyists do, in fact, do something like what you're describing, and that this is part of the resolution to the Tullock paradox. Money spent on lobbying is not spent all at once: lobbyists can make an effort, check their results, report to their clients, and identify whether or not they're likely to meet with success in continued expenditure. If lobbying expenditure on a given topic seems unlikely to make a difference, then it can just stop. I wasn't able to find anything on how this process actually works, so the next step in this research is to actually talk to some lobbyists.


I think perhaps something that's missing here is a discussion of incentives within the civil service or bureaucracy

I agree with this too. I'd love for an EA with a public choice background to tackle this topic. I didn't consider it as part of my scope, but I do want to note something:

A policy proposal like taking ICBMs off hair-trigger alert just seems so obvious, so good, and so easy that I think there must be some illegible institutional factors within the decision-making structure stopping it from happening.

I think this is probably true in many if not most cases of yet-to-be-implemented policy changes that are obvious, good, and easy. It is probably true in this case. But I want to warn against concluding that, because some obvious, good, and easy policy change has not been implemented, there must be some illegible institutional factor stopping it from happening. It could just be that no one has been pushing for it. In EA terms, it's an important and tractable policy change that's neglected by the policy community. Given what I know about the policy community, it's not at all difficult for me to imagine that such policies exist.

Comment by mattlerner on Sample size and clustering advice needed · 2020-07-30T17:56:45.691Z · score: 4 (2 votes) · EA · GW

I refer you to Sindy's comment (she is actually an expert), but I want to note and verify that it sounds as if you may not actually be thinking of collecting individual-level data, and that you're thinking of making observations at the village level (e.g. what % of people in this village wear masks?). So it's not just the case that you wouldn't have enough clusters to make a statistical claim; you may actually be talking about doing an experiment in which the units are villages... so n = 6 to 12. Then of course you'd have considerable error in the village-level estimate, and uncertainty about the representativeness of the sample within each village. I agree with Sindy that you probably don't want an RCT here.

Comment by mattlerner on Sample size and clustering advice needed · 2020-07-29T16:14:24.547Z · score: 10 (3 votes) · EA · GW

If you don't already have it, I would strongly recommend getting a copy of Gerber & Green's Field Experiments. I would also very strongly recommend that you (or EA Cameroon) engage an experimental methodology expert for this project, rather than pose the question on the forum (I am not such an expert).

It is very difficult to address all of these questions in a broad way, since the answers depend on:

  • The smallest effect size you would hope to observe
  • Your available resources
  • The population within each cluster
  • The total population
  • Your analysis methodology

I'm a little confused about the setup. You say that there are 6 groups, so how would it be possible to have "6 intervention + 3 non-intervention"? Sorry if I'm misunderstanding.

In general, and particularly in this context, it makes sense to split your clusters evenly between treatment and control. This is the setup that minimizes the standard error of the difference between groups. When the variance is larger, smaller effect sizes are difficult to detect. The smaller the number of clusters in your control group, for example, the larger the effect size that you would have to detect in order to make a statistically defensible claim.

With such a small number of clusters, effect sizes would have to be very large in order to be statistically distinguishable from zero. If indeed 50% of the population in these groups is already masked, 6 clusters may not be enough to see an effect.
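To make the allocation point concrete, here's a quick sketch (assuming equal cluster-level outcome variance, which is a simplification): the standard error of the difference in means for every possible split of 6 clusters.

```python
import math

sigma2 = 1.0   # assumed (equal) variance of cluster-level outcomes
total = 6      # total number of clusters available

# SE of the difference in means: sqrt(sigma^2/n_treat + sigma^2/n_ctrl).
for n_treat in range(1, total):
    n_ctrl = total - n_treat
    se = math.sqrt(sigma2 / n_treat + sigma2 / n_ctrl)
    print(n_treat, n_ctrl, round(se, 3))
```

The 3/3 split gives the smallest standard error (about 0.816 versus 1.095 for a 1/5 split), so lopsided designs need correspondingly larger effects to detect.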

Can we get some clarification on some of your questions? Particularly:

How important, in terms of statistical power is to include all clusters

If you have only 6 to choose from, then the answer is very important. But I'm not sure this is the sense in which you mean this.

How many persons should be observed at each place?

My inclination here is to say "as many as possible." But this is constrained by your resources and your method of observation. Can you say more about the data collection plan?

Comment by mattlerner on Nathan Young's Shortform · 2020-07-23T14:42:06.207Z · score: 8 (7 votes) · EA · GW

I also thought this when I first read that sentence on the site, but I find it difficult (as I'm sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:

"Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That's pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?"

The problem IMHO is that without the contrast, the sentiment doesn't land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it's only in contrast with the way things are typically done that the EA argument is convincing.

Comment by mattlerner on The EA movement is neglecting physical goods · 2020-06-18T20:43:35.346Z · score: 3 (3 votes) · EA · GW

I don't work in physical goods (I'm a data scientist) but I am definitely interested in leveling up my skillset in this way. I'm probably only available for 3 to 4 hours a week to start, but that will probably change soon.

Thanks for making this post! This is an interesting observation.

Comment by mattlerner on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-12T06:54:34.343Z · score: 8 (5 votes) · EA · GW

Thank you for doing this work! I really admire the rigor of this process. I'm really curious to hear how this work is received by (1) other evaluation orgs and (2) mental health experts. Have you received any such feedback so far? Has it been easy to explain? Have you had to defend any particular aspect of it in conversations with outsiders?

I do have one piece of feedback. You have included a data visualization here that, if you'll forgive me for saying so, is trying to tell a story without seeming to care about the listener. There is simply too much going on in the viz for it to be useful.

I think a visualization can be extremely useful here in communicating various aspects of your process and its results, but cramming all of this information into a single pane makes the chart essentially unreadable; there are too many axes that the viewer needs to understand simultaneously.

I'm not sure exactly what you wanted to highlight in the visualization, but if you want to demonstrate the simple correlation between mechanical and intuitive estimates, a simple scatterplot will do, without the extra colors and shapes. On the other hand, if that extra information is substantive, it should really be in separate panes for the sake of comprehensibility. Here's a quick example with your data (direct link to a larger version here):

I don't think this is the best possible version of this chart (I'd guess it's too wide, and opinions differ as to whether all axes should start at 0), but it's an example of how you might tell multiple stories in a slightly more readable way. The linear trend is visible in each plot, it's easier to make out the screening sizes, and I've outlined the axes delineating the four quadrants of each pane in order to highlight the fact that mostly top-scoring programmes on both measures were included in Round 2.

Feel free to take this with as much salt as necessary. I'm working from my own experience, which is that communicating data has tended to take just as much work on the communication as it does on the data.

Comment by mattlerner on EA Forum Prize: Winners for April 2020 · 2020-06-08T23:09:27.587Z · score: 12 (8 votes) · EA · GW
while some users reported finding the Prize valuable or motivating, that number wasn’t quite as high as I had been hoping for

It seems like the instrumental thing here is whether users who won prizes found them motivating. Most users will not write prize-winning posts, but if the users who did were at least partially motivated by the prospect of winning one, then the world with the prize is almost certainly better than the counterfactual. More generally, if users who wrote original posts were likelier to endorse the prize than users in general, that is some indication that the prize is somewhat effective. Did you have enough data to determine whether either of these situations obtains?

Comment by mattlerner on I Want To Do Good - an EA puppet mini-musical! · 2020-05-21T16:25:35.642Z · score: 20 (12 votes) · EA · GW

I don't have anything to say except that I loved this, and I'm really happy somebody is starting to present a warmer and fuzzier side of EA.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-05-01T17:13:31.724Z · score: 2 (2 votes) · EA · GW

In general, I'm skeptical about software solutionism, but I wonder if there's a need/appetite for group decision-making tools. While it's unclear exactly what works for helping groups make decisions, it does seem like a structured format could provide value to lots of organizations. Moreover, tools like this could provide valuable information about what works (and doesn't).

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-22T18:21:37.428Z · score: 1 (1 votes) · EA · GW

Proportional representation

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-13T21:16:51.339Z · score: 7 (5 votes) · EA · GW

School closures

Workplace closures

The usual caveats apply here: cross-country comparisons are often BS, correlation is not causation, I'm presenting smoothed densities instead of (jagged) histograms, etc, etc...

I've combined data on electoral system design and covid response to start thinking about the possible relationships between electoral system and crisis response. Here's some initial stuff: the gap, in days, between first confirmed cases and first school and workplace closures. Note that n ≈ 80 for these two datasets, pending some cleaning and hopefully a fuller merge between the different datasets.

To me, the potentially interesting thing here is the apparently lower variability of PR government responses. But I think there's a 75% chance that this is an illusion... there are many more PR governments than others in the dataset, and this may just be an instance of variability decreasing with sample size.
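Here's a quick sketch of the illusion I have in mind (simulated data, not the covid dataset): spread estimates from small groups are much noisier than from large ones, even when the true variability is identical.

```python
import random
import statistics

rng = random.Random(7)

def sample_sd(n):
    # Sample standard deviation of n draws from the same N(0, 1) population.
    return statistics.stdev([rng.gauss(0, 1) for _ in range(n)])

# Pretend (hypothetically) the non-PR group has 15 countries, PR has 60.
small_sds = [sample_sd(15) for _ in range(1000)]
large_sds = [sample_sd(60) for _ in range(1000)]

# The small group's estimated spread bounces around far more across draws.
print(statistics.pstdev(small_sds) > statistics.pstdev(large_sds))
```

So a difference in apparent variability between groups of very different sizes is weak evidence on its own; the small group's spread estimate could easily land high or low by chance.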

If there's an appetite here for more like this, I'll try and flesh out the analysis with some more instructive stuff, with the predictable criticisms either dismissed or validated.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-09T19:27:21.429Z · score: 1 (1 votes) · EA · GW

Or of course, restrict our sample to a smaller geographic region in the US with more prevalence.

Comment by mattlerner on Matt_Lerner's Shortform · 2020-04-09T18:59:10.261Z · score: 4 (3 votes) · EA · GW

It seems like there's a significant need right now to identify what the plausible relationship is between mask-wearing and covid19 symptoms. The virus is now widespread enough that a very quick Mechanical Turk survey could provide useful information.

Collect the following:

• Age group (5 categories)

• Wear a mask in public 1 month ago? (y/n)

• If yes to above, type of mask? (bandana/N95+/surgical/cloth/other)

• Sick with covid19 symptoms in past month? (y/n)

• Know anyone in everyday life who tested positive for covid19 in past month? (y/n)

• Postal code (for pop. density info)

Based on figures from this Gallup piece, a back-of-the-envelope says we could get usable results from surveying 20,000 Americans -- but we could work with a much smaller sample if we survey in a country where the virus is more prevalent.
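For transparency, here's roughly the kind of arithmetic I have in mind (the symptom rates below are hypothetical placeholders, not the Gallup figures): the standard two-proportion sample-size formula at 80% power and alpha = 0.05.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Classic two-proportion sample-size formula (alpha = 0.05, 80% power)."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / (p1 - p2) ** 2)

# Hypothetical symptom rates: 5% among mask wearers vs. 7% among non-wearers.
print(n_per_group(0.05, 0.07))
```

With these made-up rates you'd need a couple thousand respondents per group; with rarer outcomes or smaller differences between groups, the required n grows quickly, which is how you end up at figures like 20,000.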

Comment by mattlerner on What is the average EA salary? · 2020-04-05T19:14:46.168Z · score: 1 (1 votes) · EA · GW

I'd love to see some more information about the distribution (e.g. percentiles, change since previous years, breakdown by organization size/type or by role). Is it possible to provide that while maintaining anonymity?

Comment by mattlerner on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T20:04:54.804Z · score: 11 (8 votes) · EA · GW

This is a great post and I, like @rohinmshah, feel that simply the introduction of this general class of discussion is of value to the community.

With respect to expert surveys, I am somewhat surprised that there isn't someone in the EA community already pursuing this avenue in earnest. I think that it's firmly within the wheelhouse of the community's larger knowledge-building project to conduct something like the IGM experts panel across a variety of fields. I think, first, that this sort of thing is direly needed in the world at large and could have considerable direct positive effects, but secondly that it could have a number of virtues for the EA community:

  • Improve efficiency of additional research: Knowing what the expert consensus is on a given topic will save some nontrivial percentage of time when starting a literature review, and help researchers contextualize papers that they find over the course of the review. Expert consensus is a good starting place for a lit review, and surveys will save time and reduce uncertainty in that phase.
  • Let EAs know where we stand relative to the expert consensus: when we explore topics like growth as a cause area, we need to be able to (1) have a quick reference to the expert consensus at vital pivots in a conversation (e.g. do structural adjustments work?) and (2) identify with certainty where EA views might depart from the consensus.
  • Provide a basis for argument to policymakers and philanthropists: Appeals to authority are powerful persuasive mechanisms outside the EA community. Being able to fall back on expert consensus in any range of issues can be a powerful obstacle or motivator, depending on the issue. Here's an example: governments around the world continue to locally relitigate conversations about the degree to which electronic voting is safe, desirable, secure or feasible. Security researchers have a pretty solid consensus on these questions-- that consensus should be available to these governments and those of us who seek to influence them.
  • Demonstrate to those outside the community that EAs are directly linked to the mainstream research community: This is a legitimacy issue: regardless of whether the EA community ends up being broader or narrower, we are often insisting to some degree on a new way of doing things: we need to be able to demonstrate to newcomers and outsiders that we are not simply starting from scratch.
  • Establish continued relationships with experts across a variety of fields: Repeated deployment of these expert surveys affords opportunities for contact with experts who can be integrated into projects, sought for advice, or deployed (in the best case scenario) as voices on behalf of sensible policies or interventions.
  • Identify funding opportunities for further research or for novel epistemic avenues like the adversarial collaborations mentioned in the initial post: Expert surveys will reveal areas where there is no consensus. Although consensus can be and sometimes is wrong, areas where there is considerable disagreement seem like obvious avenues for further exploration. Where issues have a direct bearing on human wellbeing, uncovering a relative lack of conclusive research seems like a cause area in and of itself.
  • Finally, the question-finding and -constructing process is itself an important activity that requires expert input. Identifying the key questions to ask experts is itself very important research, and can result in constructive engagements with experts and others.
Comment by mattlerner on Thoughts on electoral reform · 2020-02-20T21:31:21.154Z · score: 6 (5 votes) · EA · GW

I agree that EAs should continue investigating and possibly advocating different voting methods, and I strongly agree that electoral reform writ large should be part of the "EA portfolio."

I don't think EAs (qua EAs, as opposed to as individuals concerned as a matter of principle with having their electoral preferences correctly represented) should advocate for different voting methods in isolation, even though essentially all options are conceptually superior to FPTP/plurality voting.

This is because a democratic system is not the same as a utility-maximizing one. The various criteria used to evaluate voting systems in social choice theory are, generally speaking, formal representations of widely-shared intuitions about how individuals' preferences should be aggregated or, more loosely, how democratic governments should function.

Obviously, the only preferences voting systems aggregate are those over the topic being voted on. But voters have preferences over lots of other areas as well, and the choice of voting system relates only to two of them: (a) their preferences over the choice in question and (b) their meta-preferences over how preferences are aggregated (e.g. how democratic their society is).

As others in this thread have pointed out, individuals' electoral preferences cannot be convincingly said to represent their preferences over all of the other areas their choice will influence.

So an individual gains utility from a switch to a new voting system if and only if the utility gained by its superior representation of their preferences exceeds the utility lost in other areas as a result of switching. I don't think this is a high bar to clear, but I do think that, beyond the contrast between broadly democratic and non-democratic systems, we have next to no good information about the relationship between electoral systems and non-electoral outcomes.

In the simplest terms possible: we know that some voting systems are better than others when it comes to meeting our intuitive conception of democratic government. But we're concerned about people's welfare beyond just having people's electoral preferences represented, and we don't know what the relationship between these things is.

It is totally possible that voting systems that violate the Condorcet criterion also dominate systems that meet the criterion with respect to social welfare. We simply don't know.

It's also not clear to what degree different voting systems induce a closer relationship between individuals' electoral preferences and their preferences over non-electoral topics, e.g. by incentivizing or disincentivizing voter education.

To reiterate, I strongly support the increased interest in approval voting and RCV that we're seeing, and I voted for it here in NYC. I want to see my own electoral preferences represented more accurately and I don't think there is a big risk that (at least here) my other preferences will suffer. But as consequentialists I think we are on very uncertain ground.

Comment by mattlerner on What posts you are planning on writing? · 2020-02-02T20:26:54.715Z · score: 3 (3 votes) · EA · GW

I'm doing a lit review on the effectiveness of lobbying and on some of the relevant theoretical background that I'm planning on posting when I'm done. I feel like this is potentially very relevant but I'm not sure if people will be interested.

Comment by mattlerner on Call for beta-testers for the EA Pen Pals Project! · 2020-01-27T18:44:25.429Z · score: 1 (1 votes) · EA · GW

Just want to follow up to acknowledge that I see that you're already conducting a survey and that I'm proposing you add a set of questions about personal beliefs/stances/positions.

Comment by mattlerner on Call for beta-testers for the EA Pen Pals Project! · 2020-01-27T18:40:36.533Z · score: 1 (1 votes) · EA · GW

This is a really cool project! Just want to plug this as a really good opportunity to rigorously study how EA ideas spread: a quick 5-minute pre- and post-survey asking participants Likert-style questions about their positions on various EA-relevant topics and perhaps their style of argument/conversation would be potentially high-value here.

Since assignment will be randomized, there's a real opportunity here to draw causal conclusions about how ideas spread, even if the external validity will be largely restricted to the EA population.

Comment by mattlerner on Growth and the case against randomista development · 2020-01-26T21:39:03.531Z · score: 1 (1 votes) · EA · GW

Thanks for your response! I still have some confusion, but this is somewhat tangentially related. In your CBA, you use an NPV figure of $3,572bn as the output gain from growth. This is apparently derived from India's 1993 and 2002 growth episodes.

The CBA therefore computes the EV of the GDP increase as 0.5 × 0.1 × $3,572bn ≈ $178.6bn. You acknowledge elsewhere in your writeup that efforts to increase GDP entail some risk of harm (and likewise with the randomista approach), so my confusion lies with the elision of this possible harm from the EV calculation.

Even if the probability that a think tank induces a growth episode—e.g. the probability that a think tank influences economic policy in country X according to its own recommendations—is 10%, then there is still obviously a probability distribution over the possible influence that successfully implemented think tank recommendations would have. This should include possible harms and their attendant likelihoods, right?
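To make the point concrete, here is a toy expected-value calculation in which a harm scenario sits alongside the headline gain. All probabilities and the smaller outcome figures are hypothetical illustrations; only the $3,572bn figure comes from the post under discussion:

```python
# Toy EV calculation: once we admit a distribution over outcomes of a
# successfully implemented recommendation, the EV should include downside
# scenarios, not just the headline gain. All numbers except 3572 are made up.

p_influence = 0.10  # P(think tank successfully influences policy)

# Conditional on influence, a hypothetical distribution over NPV changes ($bn):
outcomes = [
    (0.6, 3572.0),   # growth episode comparable to the Pritchett figure
    (0.3, 500.0),    # smaller positive effect
    (0.1, -1000.0),  # harmful policy outcome
]

ev_conditional = sum(p * v for p, v in outcomes)
ev_total = p_influence * ev_conditional

# Compare with the harm-free calculation in the original CBA:
ev_no_harm = 0.5 * 0.1 * 3572.0

print(f"EV conditional on influence: ${ev_conditional:.1f}bn")
print(f"Unconditional EV: ${ev_total:.2f}bn")
print(f"Harm-free EV (as in the post): ${ev_no_harm:.1f}bn")
```

Depending on the weights chosen, the harm-adjusted EV can land above or below the harm-free figure; the point is only that the calculation should be exposed to that possibility.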

I recognize that the $3,572bn figure comes directly from Pritchett as part of an assessment of the Indian experience, but it's not obvious to me that the number encapsulates the range of possibilities for a successful (in the sense of being implemented) intervention. I may be missing something, but it seems to me that a (perhaps only slightly) more rigorous CBA would have to itself include an expected value of success that incorporates possible benefits and harms for both Growth and Randomista approaches in the line of your spreadsheet model reading "NPV (@ 5%) of output loss from growth deceleration relative to counter-factual growth."

I understand that what you're envisioning is a sort of high-confidence approach to growth advocacy: target only countries where improvements are mostly obvious, and then only with the most robustly accepted recommendations. I still think there is a risk of harm and that the CBA may not capture a meaningful qualitative difference between the growth and randomista approaches. In principle, at least, the use of localized, small-scale RCTs to test development programs before they are deployed avoids large-scale harm and (in my view) pushes the mass of the distribution of possible outcomes largely above 0. No such obstacle to large harms exists, or indeed is even possible, in the case of growth recommendations. Pro-growth recommendations by economists have not been uniformly productive in the past and (I think) are unlikely to be so in the future.

I still favor this approach you suggest but, given the state of the field of growth economics—and the failure of GDP/capita to capture many welfare-relevant variables that you cite at the end of the writeup—I'd be keen to see more highly quantified conversation around possible harms.

Comment by mattlerner on Growth and the case against randomista development · 2020-01-25T01:31:33.812Z · score: 9 (3 votes) · EA · GW

Thanks for writing this! I am coming somewhat late to the party, but I wanted to add my support for what you have both written here. I back the concerted research effort you propose and believe it somewhat likely that it will have the benefits you suggest are probable.

I was digging through the Pritchett paper in hopes of doing my own analysis, and I do have a question: how did you calculate the median figure for Vietnam that you reference in section 4 ($6,914 GDP per capita)? I've been looking at the Pritchett paper and I can't quite figure it out. It seems close to the median absolute growth in $PPP presented in Pritchett's Table 4, but I imagine that's not right since Table 4 only lists the top 20 growth episodes from the full set of about 300. When I look at those figures in Appendix A, though, it seems like the median growth episode calculated using PRM (without reference to dollar size) is somewhere around Ecuador's negative growth in 1978, which doesn't seem like it would line up even with the conversion to $PPP.


I see that you've written that Vietnam/89 is the median growth episode "to be affected by a think tank," and a little research reveals that Vietnam began a concerted economic liberalization in 1986, so perhaps you have a secondary subset of growth episodes that you believe were affected by think tanks?

I can also sort of see a case for selecting the median from Table 4 of the top 20 but that seems strange since (a) the cutoff is arbitrary and (b) it doesn't factor in the risk of harm from a think tank-influenced growth episode.

Comment by mattlerner on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-16T19:02:20.275Z · score: 11 (3 votes) · EA · GW

Thanks for your response. I think I should make clear (as I really didn't do in my initial post) that I mean my comment more broadly: when EAs think about doing ballot initiatives, they should strongly consider doing public opinion polling. In a setting where an EA advocacy group is trying to select (a) which of X effective policies to advocate and (b) in which of Y locales to advocate it, it seems (to me, at least) that polling is cost-effective, since choosing among X×Y independent options (a potentially large number) is a nontrivial problem that requires a rigorous approach.

In your setting, however (making the binary choice of whether or not to advocate for policy P in location L), I understand why you chose the strategy you did. Your point about the relative cost-effectiveness of talking to local politicians versus conducting an (arguably) expensive poll is well-taken. I don't have any idea how Swiss referenda work and I conclude from your comment that voters largely follow the lead of their representatives.

I'm not sure how you're thinking about future efforts along these lines, but if you're planning on selecting from a longer list of policies and cantons, I think polling—in a cheap way—could challenge your legislative strategy for cost-effectiveness, at least as a guide for initial research investment.

Comment by mattlerner on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T16:24:38.429Z · score: 4 (4 votes) · EA · GW

Fantastic work! In your post introducing this initiative you wrote that the base rate for passage of ballot initiatives was 11%. A conservative reading of the data here (taking the low value of $20m for development funding raised) seems to indicate a 100:1 return on investment. Taking the base rate into account, this implies roughly $10 in effective development aid for every $1 spent on advocacy (in expectation). If the development aid is effectively spent, the implication here is that money spent on an initiative like this might be ten times as effective in expectation as money donated directly to a top-rated charity. This assumes, of course, that the base rate is accurate.
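The back-of-envelope arithmetic here can be spelled out explicitly. The figures are the ones cited in the comment (the ~$10 figure is this expectation, rounded down):

```python
# Expected return on ballot-initiative advocacy, using the comment's figures:
# a ~100:1 realized return conditional on passage, and an 11% base rate
# of initiative passage taken as the prior probability of success.

realized_roi = 100   # dollars of development aid per advocacy dollar, if passed
base_rate = 0.11     # prior probability an initiative passes

expected_roi = realized_roi * base_rate  # ~11:1 in expectation
print(f"Expected aid per $1 of advocacy: ${expected_roi:.0f}")
```

This treats the 11% base rate as exchangeable with this particular initiative, which is exactly the assumption the comment flags at the end.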

In that initial post, you had an exchange with Stefan Schubert about the relevance of your assumed base rate. You discussed the importance of polling at that point but it's not clear to me where you left off.

This success really seems to highlight the importance of public opinion polling here. The value of information in this domain is very high, since you're trying to identify the avenue which will provide the greatest leverage. Choosing the wrong avenue has no value, and potentially even minor reputational costs for your organization or for EA in general. Choosing the right avenue has huge upsides.

Public opinion polling seems crucial to this end. In this scenario, prior polling might have allowed you to identify a reasonable figure beforehand (avoiding the $87 million overreach). More importantly, though (if I understand the procedure correctly), it might have enabled you to avoid the counterproposal process and to pinpoint an optimal figure to ask for-- perhaps one higher than the one you ultimately got.

I don't want to diminish the achievement here, which I think is huge; I just want to point out that extremely useful information for this effort can be retrieved from the public at relatively low cost. In the future, this information can be used to reduce the uncertainty around efforts to fund ballot proposals and increase the expected value of these efforts by lowering the probability of failure in expectation.

Comment by mattlerner on Personal Data for analysing people's opinions on EA issues · 2020-01-12T15:06:05.958Z · score: 10 (7 votes) · EA · GW

I think that it’s unnecessary to go to such great (and risky) lengths to find out what the public believes with respect to issues relevant to EAs. A well-constructed survey conducted via Mechanical Turk, for example, would (in conjunction with a technique like multilevel regression and poststratification) yield very accurate estimates of public opinion at various arbitrary levels of geographic aggregation. I’d be supportive of this and would be interested in helping to design and/or fund such a survey.
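For readers unfamiliar with MRP, the poststratification step works roughly as follows. This is a minimal sketch with made-up numbers: a real analysis would first fit a multilevel regression to survey responses to produce the per-cell opinion estimates, which are simply assumed here.

```python
# Poststratification: reweight per-cell opinion estimates by each cell's
# share of the target population (from census data) to get an estimate for
# an arbitrary geographic unit. All numbers below are hypothetical.

# Estimated support for some EA-relevant position, by (age group, education),
# as a multilevel model might produce:
cell_estimates = {
    ("18-34", "no degree"): 0.42,
    ("18-34", "degree"):    0.61,
    ("35+",   "no degree"): 0.30,
    ("35+",   "degree"):    0.48,
}

# Census share of each cell in the target population (e.g. one state):
cell_weights = {
    ("18-34", "no degree"): 0.20,
    ("18-34", "degree"):    0.15,
    ("35+",   "no degree"): 0.40,
    ("35+",   "degree"):    0.25,
}

# Weight each cell's estimated opinion by its population share.
estimate = sum(cell_estimates[c] * cell_weights[c] for c in cell_estimates)
print(f"Poststratified support: {estimate:.4f}")
```

The leverage of the method is that the survey sample (e.g. from Mechanical Turk) need not be representative, so long as the model estimates cell-level opinion reasonably well and the census weights describe the target population.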

Comment by mattlerner on Has pledging 10% made meeting other financial goals substantially more difficult? · 2020-01-09T14:22:30.383Z · score: 5 (5 votes) · EA · GW

Since I started donating 10% (not very long ago), the only part of my discretionary spending that has taken a hit are my “dumb” expenses: nice new clothes, fancy meals, just overall waste. It turns out that stuff added up to 10%. But YMMV.

If you’re worried, and I think it’s reasonable to be, why don’t you start by pledging 1% and notching it up bit by bit? There’s no need to rush to take the 10% pledge. There is nothing special about that number and you need to figure out what works for you.

Comment by mattlerner on The Center for Election Science Year End EA Appeal · 2020-01-02T02:04:03.333Z · score: 2 (4 votes) · EA · GW
Given standard models of rational voter ignorance (and rational irrationality, etc.), this shouldn’t be surprising. Oversimplifying for a moment, the electorate’s middle are in all likelihood systematically mistaken about the sort of policies that would advance their interests; and when you pair these voters with political leaders who are incentivized to pander, we have a recipe for occasional disaster. I see no reason why this wouldn’t occur in a system with approval voting in the same way that it occurs in our current system.

I can think of one reason: rational ignorance is partially a consequence of the voting procedure used. People have less of an incentive to be ignorant when their votes matter more, as they would with approval voting. I don't have a strong stance on this, but I think it's important to recognize that studies about voter ignorance are not yielding evidence of an immutable characteristic of citizens; the situation is actually heavily contingent.

In the first few pages of The Myth of the Rational Voter, Bryan Caplan makes (implicitly) the case that voter ignorance isn't a huge deal as long as errors are symmetric: ignorant voters on both sides of an issue will cancel each other out, and the election will be decided by informed voters who should be on the "right" side, in expectation. Caplan claims that systematic bias across the population results in "wrong" answers.

My point in bringing this up is just that the existence of large numbers of ignorant voters doesn't have to be a major issue: large elections are decided by relatively small groups. Different voting procedures have very different ramifications for the composition of these small groups.

Comment by mattlerner on Let’s Fund: annual review / fundraising / hiring / AMA · 2019-12-31T19:59:42.315Z · score: 16 (9 votes) · EA · GW

Thanks for the writeup!

If the recent Bill Gates documentary on Netflix is to be believed, then Gates first became seriously aware of the problem of diarrhea in the developing world thanks to a 1998 column by Nicholas Kristof. It's hard to assess the counterfactual here (would Gates have encountered the issue in a different context? Would he have taken the steps he ultimately did after reading the Kristof piece?) but it seems plausible that Kristof's article constitutes a cost-effective intervention in its own right (if a not particularly targeted one).

I bring this up because I'm intrigued by the viral coverage of your clean energy research. It's not possible to quantify the impact of an article like this in any realistic way, but perhaps we can agree that a plausible distribution of beliefs about its value is close to strictly positive.

Future Perfect being what it is, it's obviously the case that Vox constitutes an unusually receptive channel for EA-adjacent research. But I'm curious if you consider the wide propagation of your research in the news media a "risky and very effective" project, and if your research products have been intentionally structured toward this end. If you have some takeaways from your big success so far, it could be very helpful to post them here- widely taken-up tweaks to make research propagate more effectively through the media are marginal improvements with potentially very high value.

Comment by mattlerner on Matt_Lerner's Shortform · 2019-12-30T03:44:32.153Z · score: 1 (1 votes) · EA · GW

Thanks for your thoughts. I wasn't thinking about the submerged part of the EA iceberg (e.g. GWWC membership), and I do feel somewhat less confident in my initial thoughts.

Still, I wonder if you'd countenance a broader version of my initial point- that there is a way of thinking that is not itself explicitly quantitative, but that is nonetheless very common among quantitative types. I'm tempted to call this 'rationality,' but it's not obvious to me that this thinking style is as all-encompassing as what LW-ers, for example, mean when they talk about rationality.

The examples you give of commonsensical versions of expected value and probability are what I'm thinking about here- perhaps the intuitive, informal versions of these concepts are soft prerequisites. This thinking style is not restricted to the formally trained, but it is more common among them (because it's trained into them). So in my (revised) telling, the thinking style is a prerequisite and explicitly quantitative types are overrepresented in EA simply because they're more likely to have been exposed to these concepts in either a formal or informal setting.

The reason I think this might be important is that I occasionally have conversations in which these concepts—in the informal sense—seem unfamiliar. "Do what has the best chance of working out" is, in my experience, a surprisingly rare way of conducting everyday business in the world, and some people seem to find it strange and new to think in that fashion. The possible takeaway is that some basic informal groundwork might need to be done to maximize the efficacy of different EA messages.

Comment by mattlerner on Matt_Lerner's Shortform · 2019-12-20T18:11:47.969Z · score: 3 (3 votes) · EA · GW

The EA movement is disproportionately composed of highly logical, analytically minded individuals, often with explicitly quantitative backgrounds. The intuitive-seeming folk explanation for this phenomenon is that that EA, with its focus on rigor and quantification, appeals to people with a certain mindset, and that the relative lack of diversity of thinking styles in the movement is a function of personality type.

I want to reframe this in a way that I think makes a little more sense: the case for an EA perspective is really only made in an analytic, quantitative way. In this sense, having a quantitative mindset is actually a soft prerequisite for "getting" EA, and therefore for getting involved.

I don't mean to say that only quantitative people can understand the movement, or that there's something intellectually very special about EAs.

Rather- very few people would disagree that charity should be effective. Even non-utilitarians readily agree that in most contexts we should help as many people as we can. But the essential concepts for understanding the EA perspective are highly unfamiliar to most people.

  • Expected value
  • Cost-benefit analysis
  • Probability
  • An awareness of the abilities and limitations of social science

You don't need to be an expert in any of these areas to "get" EA. You just need to be vaguely comfortable with them in the way that people who have studied microeconomics or analytic philosophy or mathematics are, and most other people aren't.

This may be a distinction without a difference, but I want to raise the perspective that the composition of the EA movement is less about personality types and more about intellectual preparation.

Comment by mattlerner on Community vs Network · 2019-12-20T16:44:57.658Z · score: 6 (4 votes) · EA · GW

This part of the discussion really rang true to me, and I want to hear more serious discussion on this topic. To many people outside the community it's not at all clear what AI research, animal welfare, and global poverty have in common. Whatever corner of the movement they encounter first will guide their perception of EA; this obviously affects their likelihood of participation and the chances of their giving to an effective cause.

We all mostly recognize that EA is a question and not an answer, but the question that ties these topics together itself requires substantial context and explanation for the uninitiated (people who are relatively unused to thinking in a certain way). In addition, entertaining counterintuitive notions is a central part of lots of EA discourse, but many people simply do not accept counterintuitive conclusions as a matter of habit and worldview.

The way the movement is structured now, I fear that large swaths of the population are basically excluded by these obstacles. I think we have a tendency to write these people off. But in the "network" sense, many of these people probably have a lot to contribute in the way of skills, money, and ideas. There's a lot of value—real value of the kind we like to quantify when we think about big cause areas—lost in failing to include them.

I recognize that EA movement building is an accepted cause area. But I'd like to see our conception of that cause area broaden by a lot— even the EA label is enough to turn people off, and strategies for communication of the EA message to the wider world have severely lagged the professionalization of discourse within the "community."

Comment by mattlerner on What content do people recommend looking at when trying to find jobs? · 2019-12-19T17:34:28.222Z · score: 4 (3 votes) · EA · GW

I want to caveat the following suggestions with the information that although I have achieved a high degree of success when it comes to getting first-round interviews (>40% response rate), my track record of actually getting the jobs I want is not particularly good. So take these bullet points as tried-and-true advice on how to get the interview, not on how to get the job (that part is up to you).

  • Think hard about the UI of your CV.
    • Your résumé should look really, really good! If you know InDesign, use it. If you don't, learn it. People who are hiring genuinely and sincerely try not to care about things like this. They still do. It can make a difference.
    • Tailor your resume to fit the jobs you're applying to. Do this for every job. This may mean moving your education to the top, highlighting your skills, or front-loading certain accomplishments. Think of your resume as a prior distribution that you're going to hand to a hiring manager who's trying to estimate your potential fit for the job: you don't want to supply every hiring manager with the same prior, since they're estimating fits for different jobs. You want to maximize the likelihood that you'll be considered a fit for any given job. Not tailoring your resume is not "more honest" than modifying it to fit the job— it's just providing hiring managers with an uninformative prior.
  • Organize and file your past applications
    • Make a subfolder for each application you do containing the resume and cover letter you used for that application. You should be adjusting your resume and cover letter for each job, but as you apply for more jobs, you'll be able to simply adjust the materials from similar previous applications. This will reduce friction for you and make you more productive in your application process.
  • Cold-emailing never hurt anyone
    • If you're interested in working somewhere, email them, even if there's not a job posted. Don't send them your resume at first. Just say you're interested, give a sentence or two of background, and ask if there's some way you can get involved. In the worst case, your email will disappear into the void. Often, though, your email will be treated as a serious indicator of genuine interest when a job is posted. This puts you in a very good position. In the best case, someone will actually set an interview with you (this has happened to me more than once).
  • Always, always, always follow up after your initial application
    • This is a no-brainer. It takes ten seconds, demonstrates interest, and brings you to the top of the pile for disorganized hiring managers.
    • Two anecdotes: (1) I wouldn't have gotten my first job ever if I hadn't followed up after submitting my application: their CRM had lost my resume, and they wouldn't even have known about me if I hadn't emailed. (2) I recently went through a process where the hiring manager only moved me forward after follow-ups after every stage of the process. I imagine this is a way of weeding out less interested candidates.
  • Be disciplined about your job search
    • When you're looking for a job, you'll feel like you have to be constantly looking. This is a mistake and will drain your energy. You'll have a few sites to check once a day. Check them. Then Google around if you have any ideas for finding new jobs to apply to. Don't do this for more than half an hour a day- you'll hit negative returns in terms of both your state of mind and opportunity cost. You can instead use that time to...
  • Make things that show you can do the job
    • This obviously can't work everywhere, but for some jobs it will go a long way. I think this has become standard advice in tech and EA, but it's worth repeating: it takes a big investment to work for free on a project that perhaps no one will see, but the expected return is much higher than that on throwing a CV into the void.
  • Networking is overrated
    • Perhaps the only potentially controversial item on this list. People will tell you "it's who you know." I think this is true in a limited way: people who you've worked with in the past and who are familiar with you in a professional context will, indeed, recommend you, meet with you, think of you for future positions, and occasionally go out on a limb for you. People you have just met will not, generally speaking, do this. They may or may not "pass your resume along" after you grab a coffee with them, but the fact that you barely know them will register with whoever they pass your resume to.
Comment by mattlerner on The Center for Election Science Year End EA Appeal · 2019-12-19T16:43:02.083Z · score: 2 (2 votes) · EA · GW

I'll post a summary lit review here on the forum when I'm done with my research. Spoiler alert: political scientists don't have a great idea of how/why/whether lobbying works and research on its effectiveness is almost strictly limited to trade policy and large publicly traded firms. So you get expressions of effects like "$140 in additional shareholder value for every $1 spent on lobbying." Interesting, but not particularly generalizable.

It seems like CES's strategy so far has been to start small, which makes obvious sense. I'm curious to know when/if you make the decision to withdraw from a local advocacy effort that seems like it's not paying off. It's not obvious to me that public support is monotonically increasing in dollars spent on advocacy— what's your stopping rule?

Comment by mattlerner on The Center for Election Science Year End EA Appeal · 2019-12-18T17:37:44.670Z · score: 3 (3 votes) · EA · GW

Hey Aaron! Thanks for posting this. I am likely going to include CES in my giving this year as a result of some of the points you've made here.

I've been researching lobbying recently and I'm curious about this passage:

We will lobby legislators. Because of approval voting’s simplicity, there are opportunities for lobbying elected officials. Normally, this isn’t an option because of the conflict of interest with those elected. But the opportunity presents itself when the party in power suffers because of vote splitting yet wants to avoid implementing a complex method. There are places where RCV is stalled out where we have opportunities. These are typically higher risk but very high reward since they don’t require the same resources as a campaign. Our estimate is that they can be one sixth the expected cost per citizen compared to ballot measures when factoring in their relative probability of success. This also requires funding for a 501(c)4 to do this effectively at scale.

I'm not particularly skeptical about this one-sixth estimate, but I haven't been able to find anything like it in my lit review! Do you have some background on this research?

Comment by mattlerner on 21 Recent Publications on Existential Risk (Sep 2019 update) · 2019-11-05T20:18:23.466Z · score: 9 (5 votes) · EA · GW


Curious to know- how many of these papers was TERRA already aware of before they were uncovered by the algorithm?

Comment by mattlerner on Deliberation May Improve Decision-Making · 2019-11-05T14:37:51.396Z · score: 1 (1 votes) · EA · GW

I've always wondered about the "first N Google results" strategy. Even in the absence of a file-drawer effect, isn't this more likely to turn up papers making positive claims (on the assumption that e.g. rejections of the null are more likely to be cited than inconclusive results)?

Comment by mattlerner on Deliberation May Improve Decision-Making · 2019-11-05T02:02:35.532Z · score: 19 (7 votes) · EA · GW

Thank you so much for writing this. This is one of my central areas of interest, and I've been puzzled by the comparative lack of resources expended by the EA community on institutional decision-making given the apparently high degree of importance accorded to it by many of us.

This is a great guide. I agree that the central question here is whether or not deliberative democracy leads to better outcomes. If it does, or even if it probably does, it seems that it's easily one of the highest-value potential cause areas, since the levers that influence many other cause areas are within reach of democratic polities.

With that in mind, it seems clear to me that the primary way in which deliberation is EA-relevant is as a large-scale decision-making mechanism. So relatively small-scale uses seem less important to us, and information about those successes may not be very useful, since instituting these mechanisms at a large scale is likely to present problems that differ in kind, not just in degree. I'd love to hear your thoughts on that.

I have a few other thoughts about this review, and I'd like to hear your responses if you have the time.

• Basically all of the cross-country comparisons in this review suffer from reverse causation. Countries that have lots of deliberation and good outcomes don't necessarily have the former causing the latter; the former could rather be just another instance of the latter. As enthused as I am about deliberative democracy, this scenario seems just as likely as the causal one. Is there any reason to view these correlations as suggestive of a causal effect?

• It seems like this review contains a relative paucity of research supporting the null hypothesis that deliberation does not improve decision making (or, for that matter, the alternative hypothesis that it actually worsens decision making). Were you unable to find studies taking this position? If not, how worried are you about the file-drawer effect here?

• Based on your reading of all this evidence, I'd love to hear your subjective first impressions: what do you personally feel is the "best bet" for enacting deliberative democracy on a large scale somewhere besides China? How far do you think this could feasibly go, and how long would you expect such a change to take? Very wide confidence bands on these estimates are fine, of course.

Comment by mattlerner on We should choose between moral theories based on the scale of the problem · 2019-11-04T17:17:25.200Z · score: 3 (3 votes) · EA · GW

I think this is a great and really sensible way to think about things. It's really natural, and the physics analogy provides some intuition behind why that is. A question: have you thought about how this way of thinking is in some sense "baked into" certain moral frameworks? I'm thinking specifically here of rule utilitarianism: rules can apply at different scales. It seems to me that at the personal level, rule utilitarianism is basically instantiated as virtue ethics.

Comment by mattlerner on Notes on 'Atomic Obsession' (2009) · 2019-10-28T16:01:36.018Z · score: 5 (4 votes) · EA · GW

I haven't read this book and I'm also not an expert, so my confidence on this comment is low.


Although nuclear weapons seem to have at best a quite limited substantive impact on actual historical events, they have had a tremendous influence on our agonies and obsessions, inspiring desperate rhetoric, extravagant theorizing, wasteful expenditure, and frenetic diplomatic posturing

Not only have nuclear weapons failed to be of much value in military conflicts, they also do not seem to have helped a nuclear country to swing its weight or “dominate” an area

Wars are not caused by weapons or arms races, and the quest to control nuclear weapons has mostly been an exercise in irrelevance

As a relative layman, I find claims like these puzzling. This is primarily because the "agonies and obsessions ... desperate rhetoric, extravagant theorizing, wasteful expenditure, and frenetic diplomatic posturing" that Mueller apparently dismisses drove the course of history for the half-century following the Second World War.

It's hard to imagine that the Cold War would have occurred at all in the absence of nuclear weapons. While it's true that the first nukes didn't pose much more serious a threat than a large-scale firebombing, it was barely more than a decade after the war that much more destructive weapons were being built. A successful conventional Soviet assault on the U.S. mainland was, as far as I know, never a serious possibility. It seems clear that the terror of that period was driven by the nuclear threat, and that the nuclear threat drove U.S. and Soviet strategic posture, which also influenced foreign aid, trade policy, etc. Even if their danger is exaggerated, perception of their danger (in my view an unavoidable perception--even the Joint Chiefs were prepared to nuke Cuba during the missile crisis despite knowing that the strategic situation had not appreciably changed) had serious effects.

Also, and again, not an expert (and I'd like to know if Mueller addresses this specific case) but of course Israel has been a nuclear power since as early as 1979. Before that date, Israel fought three major wars and dozens of smaller engagements with its neighbors. Since then, virtually all of Israel's military conflicts have been essentially counterinsurgency or against state proxies such as Hezbollah. It's often argued that Israel's status as a nuclear power has driven Iran's efforts in that arena, which has also influenced Saudi belligerence; this conflict has affected oil prices, domestic politics in both countries, the ongoing war in Yemen, etc. This is kind of a long DAG, but I feel like there are other examples like this, and I find it sort of hard to accept the position that the simple existence of nuclear weapons hasn't been immensely consequential.

Comment by mattlerner on What are your top papers of the 2010s? · 2019-10-23T01:07:24.738Z · score: 5 (5 votes) · EA · GW

I nominate Raj Chetty's Who Becomes an Inventor in America? The Importance of Exposure to Innovation, which builds on his and his collaborators' impressive other work using administrative data to estimate intergenerational economic mobility.

Chetty's recent work is methodologically ahead of the curve, and I hope to see many more economists using large-scale administrative data to address the big questions. But the paper I've nominated--the "Lost Einsteins" paper--is exceptionally interesting, and I think that within a few years it will start to be seen as really important.

This is, first, because it very palpably demonstrates that concerns about inequality and economic efficiency and long-run growth are inextricably linked. If you accept endogenous growth theory as a plausible account, then the Lost Einsteins paper suggests (actually, states explicitly) that various kinds of inequality can slow innovation and therefore growth.

Second, I think that this is a fairly EA-relevant paper. It's clear that individual inventors or small groups of innovators (Haber/Bosch, Borlaug, Tesla, Robert Noyce) can alter the course of history in a meaningful way. It's impossible to estimate the lost social value of the lost Einsteins, but I think it's plausible to suggest that it could be significant.

Comment by mattlerner on The Germy Paradox – Filters: A taboo · 2019-10-19T19:18:42.957Z · score: 6 (3 votes) · EA · GW

I've been following this series and I'm really enjoying it. I'm curious if you've thought about Fermi-like paradoxes in a general way and if you have any thoughts on extending your analysis here to other domains. You are probably familiar with Sandberg et al.'s proposed resolution of the Fermi paradox, but your framing of the issue has got me thinking about other similar (though perhaps less mystifying) paradoxes out there. The lenses you apply here (e.g. humaneness/treachery) seem like they could apply equally in other domains. A couple of other examples:

• It seems like far-right terrorism in the U.S. is relatively rare despite the (again, relative) prevalence of militant views and easy access to firearms

• I often wonder why bookstores don't burn down more often, since arsonists and pyromaniacs exist (and arson is fairly common) and bookstores are among the easiest pickings.

Comment by mattlerner on Effective Altruism and International Trade · 2019-10-16T04:05:30.173Z · score: 15 (8 votes) · EA · GW

Thanks for writing this! I take the broader point and I think you provide good reasons to think that international trade deserves more attention as an effective intervention.

I may be missing something, but I'm really not sure what to make of that $200k number. It seems low intuitively, but a little examination makes it seem even stranger. In 2018, about $3.5 billion was spent on lobbying. In the 115th congress, 2017-2019, 443 bills were passed, as in, actually became law. So it seems reasonable to say that about 200 bills became law in 2018. That's almost twenty million dollars per bill. And that's in a weird idealized scenario where spending on lobbying gets the bill passed and where all lobbying money is being spent on lobbying-for (not lobbying-against) and where the money is evenly divided across bills.

We have no idea what the distribution of effectiveness looks like, and I totally buy the idea that some bills can be passed with only $200k in lobbying funds, but that would be true at the tails of the distribution, not in expectation.
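For what it's worth, the back-of-envelope arithmetic above can be made explicit. These are the rough figures from the comment, not measured quantities, and the calculation inherits all of the idealized assumptions stated above:

```python
# Rough inputs from the comment above; both are approximate public figures.
total_lobbying_spend_2018 = 3.5e9  # ~$3.5B spent on U.S. lobbying in 2018
bills_enacted_2018 = 200           # ~half of the 443 bills enacted by the 115th Congress

# Idealized average: assumes all spending is pro-bill, all of it is causal,
# and the money is spread evenly across enacted bills.
cost_per_bill = total_lobbying_spend_2018 / bills_enacted_2018
print(f"${cost_per_bill:,.0f} per enacted bill")  # $17,500,000 per enacted bill
```

Under those (very generous) assumptions, the naive average is about $17.5M per bill, nearly two orders of magnitude above the $200k figure.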

Comment by mattlerner on Reality is often underpowered · 2019-10-15T03:17:37.123Z · score: 6 (5 votes) · EA · GW

Thanks for responding. I've now reread your post (twice) and I feel comfortable in saying that I twisted myself up reading it the first time around. I don't think my comment is directly relevant to the point you're making, and I've retracted it. The point is well-taken, and I think it holds up.

Comment by mattlerner on The Future of Earning to Give · 2019-10-14T16:04:40.404Z · score: 6 (4 votes) · EA · GW

I imagine that there is a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that.

I think that for some of us this is a basic assumption. I can only speak to this personally, so please ignore me if this isn't a common sentiment.

First, direct roles are (in principle) high-leverage positions. If you work, for example, as a grantmaker at an EA org, a 1% increase in your productivity or aptitude could translate into tens of thousands of dollars more in funds for effective causes. In many ETG positions, a 1% increase in productivity is unlikely to result in any measurable impact on your earnings, and even an earnings impact proportional to the productivity gain would be negligible in absolute terms. So I tend to feel like, all other things being equal, my value is higher in a direct role.
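To make the leverage claim concrete, here is a toy calculation. Every number is hypothetical and chosen only to illustrate the asymmetry:

```python
# Hypothetical figures purely to illustrate the leverage argument above.
grants_influenced = 5_000_000  # dollars/year a grantmaker helps allocate
productivity_gain = 0.01       # a 1% improvement in judgment or productivity

# In a direct role, the gain applies to the whole sum being allocated.
direct_role_value = grants_influenced * productivity_gain   # $50,000

# In an ETG role, even if the gain were fully reflected in earnings
# (it usually isn't), it applies only to one salary.
etg_salary = 200_000
etg_value = etg_salary * productivity_gain                  # $2,000

print(direct_role_value, etg_value)
```

The absolute numbers are made up, but the ratio is the point: the same marginal improvement is worth far more when applied to a high-leverage allocation decision than to one person's earnings.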

But I don't think all other things are even equal. There seems to be an assumption underlying the ETG conversation that most EA-capable people are also capable of performing comparably well in ETG roles. In a movement with many STEM-oriented individuals, this may be a statistical truth, but it's not clear to me that it's necessarily true. Though it's obviously important to be intelligent, analytical, rational, etc. in many high-impact EA roles, the skills required to get and keep a job as, say, a senior software engineer, are highly specific. They require a significant investment of time and energy to acquire, and the highest-earning positions are as competitive as (or more competitive than) top EA jobs. For EAs without STEM backgrounds, this is a very long road, and being very smart isn't necessarily enough to make it all the way.

Some EAs seem capable of making these investments solely for the sake of ETG and the opportunity for an intellectual challenge. Others find it difficult to stay motivated to make these investments when we feel we have already made significant personal investments in building skills that would be uniquely useful in a direct role and might not have the same utility in an ETG role. Familiarity with the development literature, for example, is relatively hard-won and not particularly well-compensated outside EA.

I recognize that there's a sort of collective action problem here: there simply cannot be a direct EA role for every philosophy MA or social scientist. But I wanted to argue here that the apparent EA preference for direct roles makes some good amount of sense.

I myself have split the difference, working as a data scientist at a socially-minded organization that I hope to make more "EA-aware" and giving away a fixed percentage of my earnings. I make less than I would in a more competitive role, but I believe there is some possibility of making a positive impact through the work itself. This is my way of dealing with career uncertainty and I'm curious to hear everyone's thoughts on it.

Comment by mattlerner on A Path Forward this Century · 2019-10-13T19:54:50.459Z · score: 6 (4 votes) · EA · GW

Hey Wyatt, this is impressive! Your writing is very clear and the document overall is very digestible (I mean that as a genuine compliment). "Life stewardship" seems a reasonable enough lens with which to view these issues. I know you're still writing, so this may be premature, but I think it's probably possible to significantly pare down this document without sacrificing meaning, perhaps by more than half.

It might help us to know who the target audience is for this work. I think EAs will find these concepts familiar and may appreciate your framing; your thoughts may or may not resonate/convince. There is probably also some segment of the general public that will find this interesting.

As a work of political philosophy, I think the book is a little bit hamstrung by a lack of engagement with other work in the field. Without speaking to your specific arguments, I feel confident in saying that this will probably create some resistance among readers who have a serious interest in philosophy. Political and moral philosophers have, of course, been struggling with some of these issues for centuries, and I think it's vital to build on, respond to, rebut, and otherwise integrate the large body of existing literature that you're making a good-faith effort to contribute to.

Comment by mattlerner on Reality is often underpowered · 2019-10-10T14:55:12.463Z · score: 4 (3 votes) · EA · GW

Some very interesting thoughts here. I think your final points are excellent, particularly #2. It does seem that experts in some fields have a hard-won humility about the ability of data to answer the central questions in their fields, and that perhaps we should use this as a sort of prior guideline for distributing future research resources.

I just want to note that I think the focus on sample size here is somewhat misplaced. N = 200 is by no means a crazily small sample size for an RCT, particularly when units are villages, administrative units, etc. As you note, suitably large effect sizes are reliably statistically distinguishable from zero in this context. This is true even with considerably smaller samples, even N = 20. Randomizations of even small samples are relatively unlikely to be badly unbalanced on confounders, and the p-values yielded by now-common methods like randomization inference directly account for the chance of such imbalance. To me (and I mean this exclusively in the context of rigorously designed and executed RCTs), this concern can be addressed by greater attention to the actual size of the resulting p-values: our threshold for accepting the non-null finding of a high-variance, small-sample RCT should perhaps be a much lower value.

It is true that when there is high variance across units, statistically significant effects are necessarily large; this can obviously lead to some misleading results. Your point is well-taken in this context: if, for example, there are only 20 administrative units in country X, and we are able to randomize some educational intervention across units that could plausibly increase graduation rates only by 1%, but the variance in graduation rates across units is 5%, well, we're unlikely to find anything useful. But it remains statistically possible to do so given a strong enough effect!
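As a concrete illustration of the randomization-inference point, here is a minimal sketch in Python. The data are simulated, and the effect size (2 standard deviations), sample size (N = 20, 10 per arm), and number of permutations are chosen purely for illustration:

```python
import random

def permutation_p_value(treated_outcomes, control_outcomes,
                        n_perms=5000, seed=0):
    """Two-sided randomization-inference p-value for a difference in means.

    Under the sharp null (no effect for any unit), every re-labeling of
    treatment is equally likely, so the p-value is simply the share of
    shuffled assignments whose estimate is at least as extreme as the
    observed one.
    """
    rng = random.Random(seed)
    pooled = list(treated_outcomes) + list(control_outcomes)
    n_t = len(treated_outcomes)
    observed = (sum(treated_outcomes) / n_t
                - sum(control_outcomes) / len(control_outcomes))
    extreme = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perms

# N = 20 "villages", 10 per arm, with a large (2 SD) simulated effect.
sim = random.Random(42)
control = [sim.gauss(0, 1) for _ in range(10)]
treated = [sim.gauss(2, 1) for _ in range(10)]
print(permutation_p_value(treated, control))
```

With an effect this large relative to the outcome variance, the permutation p-value comes out very small even at N = 20; with a 1% effect against 5% cross-unit variance, as in the graduation-rate example above, it would not.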