Comments

Comment by mattlerner on Effective Altruism and International Trade · 2019-10-16T04:05:30.173Z · score: 8 (4 votes) · EA · GW

Thanks for writing this! I take the broader point and I think you provide good reasons to think that international trade deserves more attention as an effective intervention.

I may be missing something, but I'm really not sure what to make of that $200k number. It seems low intuitively, but a little examination makes it seem even stranger. In 2018, about $3.5 billion was spent on lobbying. In the 115th congress, 2017-2019, 443 bills were passed, as in, actually became law. So it seems reasonable to say that about 200 bills became law in 2018. That's almost twenty million dollars per bill. And that's in a weird idealized scenario where spending on lobbying gets the bill passed and where all lobbying money is being spent on lobbying-for (not lobbying-against) and where the money is evenly divided across bills.
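The arithmetic here is easy to sanity-check. A quick sketch, using the comment's own round figure of about 200 enacted bills per year (all numbers are the ones cited above, and the even-split assumption is the idealization described):

```python
# Back-of-envelope check of the per-bill lobbying figure.
total_lobbying_2018 = 3.5e9   # total US lobbying spend in 2018, as cited
bills_per_year = 200          # rough annual figure from the 115th Congress's 443 bills

# Idealized scenario: all lobbying money evenly split across enacted bills,
# all of it spent lobbying *for* passage, and lobbying is what passes bills.
spend_per_bill = total_lobbying_2018 / bills_per_year
print(f"${spend_per_bill / 1e6:.1f}M per bill")
```

This yields $17.5M per bill, which is where "almost twenty million dollars per bill" comes from.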

We have no idea what the distribution of effectiveness looks like, and I totally buy the idea that some bills can be passed with only $200k in lobbying funds, but that would be true at the tails of the distribution, not in expectation.

Comment by mattlerner on Reality is often underpowered · 2019-10-15T03:17:37.123Z · score: 2 (2 votes) · EA · GW

Thanks for responding. I've now reread your post (twice) and I feel comfortable in saying that I twisted myself up reading it the first time around. I don't think my comment is directly relevant to the point you're making, and I've retracted it. The point is well-taken, and I think it holds up.

Comment by mattlerner on The Future of Earning to Give · 2019-10-14T16:04:40.404Z · score: 6 (4 votes) · EA · GW

> I imagine that there is a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that.

I think that for some of us this is a basic assumption. I can only speak to this personally, so please ignore me if this isn't a common sentiment.

First, direct roles are (in principle) high-leverage positions. If you work, for example, as a grantmaker at an EA org, a 1% increase in your productivity or aptitude could translate into tens of thousands of dollars more in funds for effective causes. In many ETG positions, a 1% increase in productivity is unlikely to result in any measurable impact on your earnings, and even an earnings impact proportional to the productivity gain would be negligible in absolute terms. So I tend to feel like, all other things being equal, my value is higher in a direct role.

But I don't think all other things are even equal. There seems to be an assumption underlying the ETG conversation that most EA-capable people are also capable of performing comparably well in ETG roles. In a movement with many STEM-oriented individuals, this may be a statistical truth, but it's not clear to me that it's necessarily true. Though it's obviously important to be intelligent, analytical, rational, etc. in many high-impact EA roles, the skills required to get and keep a job as, say, a senior software engineer, are highly specific. They require a significant investment of time and energy to acquire, and the highest-earning positions are as competitive as (or more competitive than) top EA jobs. For EAs without STEM backgrounds, this is a very long road, and being very smart isn't necessarily enough to make it all the way.

Some EAs seem capable of making these investments solely for the sake of ETG and the opportunity for an intellectual challenge. Others find it difficult to stay motivated to make these investments when they feel they have already made significant personal investments in building skills that would be uniquely useful in a direct role and might not have the same utility in an ETG role. Familiarity with the development literature, for example, is relatively hard-won and not particularly well-compensated outside EA.

I recognize that there's a sort of collective action problem here: there simply cannot be a direct EA role for every philosophy MA or social scientist. But I wanted to argue here that the apparent EA preference for direct roles makes a good deal of sense.

I myself have split the difference, working as a data scientist at a socially-minded organization that I hope to make more "EA-aware" and giving away a fixed percentage of my earnings. I make less than I would in a more competitive role, but I believe there is some possibility of making a positive impact through the work itself. This is my way of dealing with career uncertainty and I'm curious to hear everyone's thoughts on it.

Comment by mattlerner on A Path Forward this Century · 2019-10-13T19:54:50.459Z · score: 4 (3 votes) · EA · GW

Hey Wyatt, this is impressive! Your writing is very clear and the document overall is very digestible (I mean that as a genuine compliment). "Life stewardship" seems a reasonable enough lens with which to view these issues. I know you're still writing, so this may be premature, but I think it's probably possible to significantly pare down this document without sacrificing meaning, perhaps by more than half.

It might help us to know who the target audience is for this work. I think EAs will find these concepts familiar and may appreciate your framing; your thoughts may or may not resonate/convince. There is probably also some segment of the general public that will find this interesting.

As a work of political philosophy, I think the book is a little bit hamstrung by a lack of engagement with other work in the field. Without speaking to your specific arguments, I feel confident in saying that this will probably create some resistance among readers who have a serious interest in philosophy. Political and moral philosophers have, of course, been struggling with some of these issues for centuries, and I think it's vital to build on, respond to, rebut, and otherwise integrate the large body of existing literature that you're making a good-faith effort to contribute to.

Comment by mattlerner on Reality is often underpowered · 2019-10-10T14:55:12.463Z · score: 4 (3 votes) · EA · GW

Some very interesting thoughts here. I think your final points are excellent, particularly #2. It does seem that experts in some fields have a hard-won humility about the ability of data to answer the central questions in their fields, and that perhaps we should use this as a sort of prior guideline for distributing future research resources.

I just want to note that I think the focus on sample size here is somewhat misplaced. N = 200 is by no means a crazily small sample size for an RCT, particularly when the units are villages, administrative units, etc. As you note, suitably large effect sizes are reliably statistically distinguishable from zero in this context. This is true even with considerably smaller samples, even N = 20. Even randomizations of small samples are relatively unlikely to be badly unbalanced on confounders, and the p-values yielded by now-common methods like randomization inference account for exactly this possibility. To me (and I mean this exclusively in the context of rigorously designed and executed RCTs) this concern can be addressed by greater attention to the actual size of the resulting p-values: our threshold for accepting the non-null finding of a high-variance, small-sample RCT should perhaps be a much lower value.

It is true that when there is high variance across units, statistically significant effects are necessarily large; this can obviously lead to some misleading results. Your point is well-taken in this context: if, for example, there are only 20 administrative units in country X, and we are able to randomize some educational intervention across units that could plausibly increase graduation rates only by 1%, but the variance in graduation rates across units is 5%, well, we're unlikely to find anything useful. But it remains statistically possible to do so given a strong enough effect!
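To make the "strong enough effect" point concrete, here is a minimal randomization-inference sketch with only 20 units. All numbers are invented for illustration: a 10-point treatment effect against 5-point cross-unit variance, i.e. a much stronger effect than the 1% scenario above, which is why it is detectable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical graduation rates for 20 administrative units: 10 treated,
# 10 control, with a true effect (0.10) large relative to the cross-unit
# standard deviation (0.05). Purely illustrative data.
control = rng.normal(0.70, 0.05, size=10)
treated = rng.normal(0.80, 0.05, size=10)

outcomes = np.concatenate([treated, control])
observed = treated.mean() - control.mean()

# Randomization inference: re-assign treatment labels at random many times
# and count how often a difference at least as large arises by chance.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(outcomes)
    diff = perm[:10].mean() - perm[10:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed effect: {observed:.3f}, randomization p-value: {p_value:.4f}")
```

With an effect this large relative to the variance, the randomization p-value comes out very small despite N = 20; shrink the true effect to 0.01 and it becomes indistinguishable from noise, exactly as the scenario above suggests.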

Comment by mattlerner on [WIP] Summary Review of ITN Critiques · 2019-10-09T21:39:32.624Z · score: 6 (4 votes) · EA · GW

Thanks for writing this. I want to emphasize a point you make implicitly here, which is that it's not always clear when ITN is being used as an informal heuristic and when it's being used for actual or abstract calculation. I think arguments made previously by Rob Wiblin and John Halstead about the conceptual and practical difficulties of this approach make it clear that it is not a suitable method for rigorously ranking causes.

Still, I think it remains a valuable heuristic and a guide for more exhaustive calculations. Though neglectedness may be the wobbliest of the three criteria, it's a (generally) good approximation of the potential for additional value when in-depth information on possible marginal returns to a candidate cause area is not immediately available.