Posts

HStencil's Shortform 2020-06-29T18:46:29.009Z
What would "doing enough" to safeguard the long-term future look like? 2020-04-22T21:47:03.389Z
Did Fortify Health receive $1 million from EA Funds? 2019-11-26T18:41:41.684Z
Credit Cards for EA Giving 2019-11-11T21:35:15.271Z

Comments

Comment by HStencil on Open Philanthropy’s Early-Career Funding for Individuals Interested in Improving the Long-Term Future - New Application Round · 2021-09-24T23:05:32.165Z · EA · GW

Great, thanks so much!

Comment by HStencil on Open Philanthropy’s Early-Career Funding for Individuals Interested in Improving the Long-Term Future - New Application Round · 2021-09-09T20:02:58.219Z · EA · GW

Does Open Phil have any plans to re-open applications for early-career funding for work on biosecurity, as well (sometime in the next 12 months, say)?

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T00:19:30.926Z · EA · GW

Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that, from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causal factors are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:32:06.271Z · EA · GW

Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:02:48.318Z · EA · GW

I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T16:59:35.545Z · EA · GW

https://kalshi.com/

Comment by HStencil on MichaelA's Shortform · 2021-05-25T18:39:00.618Z · EA · GW

Yeah, I think it’s very plausible that career RAs could yield meaningful productivity gains in organizations that differ structurally from “traditional” academic research groups, including, importantly, many EA research institutions. I think this depends a lot on the kinds of research that these organizations are conducting (in particular, the methods being employed and the intended audiences of published work), how the senior researchers’ jobs are designed, what the talent pipeline looks like, etc., but it’s certainly at least plausible that this could be the case.

On the parallels/overlap between what makes for a good RA and what makes for a good research manager, my view is actually probably weaker than I may have suggested in my initial comment. The reason why RAs are sometimes promoted into research management positions, as I understand it, is that effective research management is believed to require an understanding of what the research process, workflow, etc. look like in the relevant discipline and academic setting, and RAs are typically the only people without PhDs who have that context-specific understanding. Plus, they’ll also have relevant domain knowledge about the substance of the research, which is quite useful in a research manager, too. I think these are pretty much all of the reasons why RAs may make for good research managers. I don’t really think it’s a matter of skills or of mindset anywhere near as much as it’s about knowledge (both tacit and not). In fact, I think one difficulty with promoting RAs to research management roles is that often, being a successful RA seems to select for traits associated with not having good management skills (e.g., being happy spending one’s days reading academic papers alone with very limited opportunities for interpersonal contact). This is why I limited my original comment on this to RAs who can effectively manage people, who, as I suggested, I think are probably a small minority. Because good research managers are so rare, though, and because research is so management-constrained without them, if someone is such an RA and they have the opportunity, I would think that moving into research management could be quite an impactful path for them. 

Comment by HStencil on MichaelA's Shortform · 2021-05-24T04:54:44.037Z · EA · GW

I actually think full-time RA roles are very commonly (probably more often than not?) publicly advertised. Some fields even have centralized job boards that aggregate RA roles across the discipline, and on top of that, there are a growing number of formalized predoctoral RA programs at major research universities in the U.S. I am actually currently working as an RA in an academic research group that has had roles posted on the 80,000 Hours job board. While I think it is common for students to approach professors in their academic program and request RA work, my sense is that non-students seeking full-time RA positions very rarely have success cold-emailing professors and asking if they need any help. Most professors do not have both ongoing need for an (additional) RA and the funding to hire one (whereas in the case of their own students, universities often have special funding set aside for students’ research training, and professors face an expectation that they help interested students to develop as researchers).

Separately, regarding the second bullet point, I think it is extremely common for even full-time RAs to only periodically be meaningfully useful and to spend the rest of their time working on relatively low-priority “back burner” projects. In general, my sense is that work for academic RAs often comes in waves; some weeks, your PI will hand you loads of things to do, and you’ll be working late, but some weeks, there will be very little for you to do at all. In many cases, I think RAs are hired at least to some extent for the value of having them effectively on call.

Comment by HStencil on MichaelA's Shortform · 2021-05-24T04:53:53.733Z · EA · GW

For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.

On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, think that it makes a lot of sense for some RAs to go into careers in research management. Though most RAs probably lack the requisite management aptitude, the ones who can effectively manage people, I think, can substantially increase the productivity of mid-to-large academic labs/research groups by working in management roles (I know J-PAL has employed former RAs in this capacity). A lot of academic research is severely management-constrained, in large part because management duties are often foisted upon PIs (and no one goes into academia because they want to be a manager, nor do PIs typically receive any management training, so the people responsible for management often enough lack both relevant interest and relevant skill). Moreover, productivity losses to bad management often go unrecognized because how well their research group is being managed is, like, literally at the very bottom of most PIs’ lists of things to think about (not just because they’re not interested in it, but also because they’re often very busy and have many different things competing for their attention). Finally, one consequence of this is that bad RAs (at least in the social sciences) can unproductively consume a research group’s resources for extended periods of time without anyone taking much notice. On the other hand, even if the group tries to avoid this by employing a more active management approach, a bad RA can meaningfully impede the group’s productivity by requiring more of their supervisor’s time to manage them than they save through their work. My sense is that fear of this situation pervades RA hiring processes in many corners of academia.

Comment by HStencil on The despair of normative realism bot · 2021-01-04T04:41:21.680Z · EA · GW

This discussion reminds me of a comment R.M. Hare made in his 1957 essay “Nothing Matters”:

Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'? How, therefore, can we torment ourselves with doubts about which of them our own world resembles?

In another interesting parallel, in the same essay, Hare uses the term “play-acting” to describe those who claim that nothing matters.

Comment by HStencil on Donation multiplier · 2020-12-31T15:51:30.811Z · EA · GW

While not exactly the same, EA researchers are already doing something quite similar: https://givingmultiplier.org/.

Comment by HStencil on Careers Questions Open Thread · 2020-12-27T01:17:24.028Z · EA · GW

That makes perfect sense! I agree that CE probably isn't the best fit for people most interested in doing EA work to mitigate existential risks. Feel free to shoot me a DM if you'd ever like to talk any of this through at greater length, but otherwise, it seems to me like you're approaching these decisions in a very sensible way.

Comment by HStencil on Careers Questions Open Thread · 2020-12-26T22:36:30.846Z · EA · GW

Happy to help! Another thing that strikes me is that in my experience (which is in the U.S.), running an academic research team at a university (i.e., being the principal investigator on the team's grants) seems to have a lot in common with running a startup (you have a lot of autonomy/flexibility in how you spend your time; your efficacy is largely determined by how good you are at coordinating other people's efforts and setting their priorities for them; you spend a lot of time coordinating with external stakeholders and pitching your value-add; you have authority over your organization's general direction; etc.). This seems relevant because I think a lot of the top university economics research groups in the U.S. have a pretty substantial impact on policy (e.g., consider Opportunity Insights), and the same may well be true in the U.K. It seems to me that other avenues toward impacting policy (e.g., working in the government or for major, established advocacy organizations) are considerably less entrepreneurial in nature. Of course, you could also found your own advocacy organization to push for policy change, but 1) I think it's generally easier to get funding for research than for work along these lines (especially as a newcomer), in part because the advocacy space is already so crowded, and 2) founding an advocacy organization seems like the kind of thing one might do through Charity Entrepreneurship, which you seem less excited about. If you're mainly attracted to entrepreneurship by tight feedback loops, however, academia is probably the wrong way to go, as it definitely does not have those.

Comment by HStencil on Careers Questions Open Thread · 2020-12-26T17:39:09.231Z · EA · GW

It sounds, based on your description, like a fairly straightforward step would be for you to try to set up calls with 1) someone on the Charity Entrepreneurship leadership team, and 2) some of the founders of their incubated charities. This would help you to evaluate whether it would be a good idea for you to apply to the CE program at some point, as well as to refine your sense of which aspects of entrepreneurship you’re particularly suited to (so that if entrepreneurship doesn’t work out—maybe you discover other aspects of it that seem less appealing—you’ll be able to look for the bits you care for in positions with more established organizations). If you came out of those calls convinced that you might want to apply to Charity Entrepreneurship down the road, it seems to me that a logical next step would be to start reading up on potential causes and interventions that you might want your charity to pursue. You could also, I’m sure, do volunteer work for existing, newly launched CE charities, where, given that most of them have only two staff, you’d probably be given a fair amount of responsibility and would be able to develop useful insights into the entrepreneurial process. For you, the value of information from doing that seems like it might be quite high.

Comment by HStencil on Careers Questions Open Thread · 2020-12-23T05:56:24.676Z · EA · GW

That seems like a sound line of reasoning to me — best of luck with the rest of your degree!

Comment by HStencil on Careers Questions Open Thread · 2020-12-21T23:35:34.915Z · EA · GW

I think this is a really hard question, and the right answer to it likely depends to a very significant degree on precisely what you’re likely to want to do professionally in the near and medium-term. I recently graduated from a top U.S. university, and my sense is that the two most significant benefits I reaped from where I went to school were:

  1. Having that name brand on my resume definitely opened doors for me when applying for jobs during my senior year. I’m actually fairly confident that I would not have gotten my first job out of college had I gone to a less prestigious school, though I think this only really applies to positions at a fairly narrow set of financial services firms and consulting firms, as well as in certain corners of academic research.
  2. I think I personally benefited from a significant peer effect. My specific social circle pushed me to challenge myself academically more than I likely otherwise would have (in ways that probably hurt my GPA but served me well all things considered). That said, I know that the academic research on peer effects in education is mixed to say the least, so I’d be hesitant to extrapolate much from my own experience.

I’m not sure how to weigh the importance of the first of those considerations. On the one hand, your first job is just that: your first job. It doesn’t necessarily mean anything about where you’ll end up at age 35. On the other hand, I do feel like I have observed this phenomenon of smart people graduating from relatively unknown universities and really struggling to find interesting work during their first several years out of college and then eventually resigning themselves to getting a master’s degree from a more well-known school (sometimes in a field where the educational benefit of the degree is relatively low) just so that they can get in the door to interview for jobs in their field of choice. This obviously comes at a significant cost, both in terms of time and—often but not always—in terms of money. That said, in some fields, you just do need a master’s to get in the door for a lot of roles, no matter where you went to undergrad or what you did while you were there, and maybe that’s all that’s really behind this.

Another thing potentially worth noting is that, in my experience, it seems as if U.S. research universities are most usefully divisible into three categories with respect to their undergraduate job placement: universities that “high-prestige” employers are unlikely to have heard of, universities that “high-prestige” employers are likely to have heard of and have vaguely positive associations with, and finally, the set of Harvard, Princeton, Yale, MIT, and Stanford (these are distinguished not only by their name brands but also by the extent of their funding and support for undergraduate research and internships, the robustness of their undergraduate advising, and other more “experiential” factors). There are certainly exceptions to this breakdown (the financial services and consulting firms mentioned above definitely differentiate between Penn and Michigan), but by and large, my sense has been that controlling for “ability,” the difference in early-career outcomes between a Harvard graduate and a Penn graduate is significantly larger than the difference in early-career outcomes between a Penn graduate and a Michigan graduate (note: the specific schools chosen as examples within each cohort here are completely arbitrary). Accordingly, I don’t think that very many people generally have a strong professional reason to transfer from UCLA to Brown or from the University of Virginia to Dartmouth, etc. However, I buy that those at lesser-known schools may, in many circumstances, have a strong professional reason to transfer to their flagship state school.

Other good reasons to transfer, I think, include transferring for the purpose of getting to a particular city where you know you want to work when you graduate, with an eye toward spending a portion of the remainder of your time in college networking or interning in your field of choice. In particular, I think that if you want to work in U.S. (national) policy after graduation, transferring to a school in the Washington, DC Metropolitan Area can be hugely beneficial. The same goes for financial services in the New York City Metropolitan Area, entertainment in Los Angeles, and (perhaps, though I am less sure about this) tech in the San Francisco Bay Area. In your case, it might be worthwhile to submit a transfer application to Georgetown with the aim of trying to forge some connections at the Center for Security and Emerging Technology (or perhaps the Center for Global Health Science and Security if you are interested in biosecurity policy), both of which are housed there. One other very strong reason to transfer, it seems to me, would be if you wanted to work on AI, but your current school didn’t have a computer science department, like a local state school near where I grew up. I assume from your post that that isn’t your situation, though.

Finally, I wouldn’t underestimate the importance of mental health considerations, to the extent that those may be at all relevant to your choice. Mental health during college can have a huge impact on GPA, and while where you go to undergrad will only really be a factor in determining your grad school prospects for a relatively narrow set of programs (mainly, I think, via the way it affects the kinds of research jobs you can get during and post-college), GPA is a huge determinant of grad school admissions across basically every field, so that is important to bear in mind. The transfer experience, from what I have heard, is not always easy, especially, I imagine, in academic environments that are already very high-pressure.

If you’d like to talk through this at greater length, feel free to DM me. To the extent that my perspective might be useful, I’d be more than happy to offer it, and if you’d just like someone to bounce ideas off of, I’d be happy to fill that role, as well.

Comment by HStencil on Wholehearted choices and "morality as taxes" · 2020-12-21T21:43:55.113Z · EA · GW

I really like this. To me, it emphasizes that moral reason is a species of practical reason more generally and that the way moral reasons make themselves heard to us is through the generic architecture of practical reasoning. More precisely: Acting in a manner consistent with one's moral duties is not about setting one's preferences aside and living a life of self-denial; it's about being sufficiently attentive to one's moral world that one's preferences naturally evolve in response to sound moral reasons, such that satisfying those preferences and fulfilling one's duties are one and the same.

Comment by HStencil on Incompatibility of moral realism and time discounting · 2020-12-12T23:57:41.856Z · EA · GW

This is a fascinating argument — thank you for sharing it! I think it's particularly interesting to consider it in the context of metaethical theories that don't fall neatly within the realist paradigm but share some of its features, like R.M. Hare's universal prescriptivism (see Freedom and Reason [1963] and Moral Thinking [1981]). However, I also think this probably shouldn't lead most discounting realists to abandon their moral view. My biggest issue with the argument is that I suspect (though I am still thinking this through) that there exist parallel arguments of this form that would purport to disprove all of philosophical realism (i.e. including realism about empirical descriptions of the natural world). I think statements rejecting philosophical realism are pretty epistemically fraught (maybe impossible to believe with justification), which leaves me suspicious of your argument. (It's worth noting here that special relativity itself is an empirical description of the natural world.)

I have a feeling that the right way of thinking about this is that the rise of relativistic physics changed the conventional meaning of a "fact" into something like: a true statement whose truth cannot depend upon the person thinking it within a particular inertial frame of reference. Otherwise, I think we would be forced to admit that there are no facts about the order in which events occur in time, and that seems quite obviously inconsistent with the ordinary language meanings of several common concepts to me. I know that relativity teaches that statements about time and duration are not objective descriptions of reality but are instead indexical reports of "where the speaker is" relative to a particular object, similar to "Derek Parfit's cat is to my left," but (for basically Wittgensteinian reasons) I do not think that this is actually what these statements mean.

Ultimately, if you're someone who, like me, believes that a correct analysis of the question, "What is the right thing to do?" must start with a correct analysis of the logical properties of the concepts invoked in that sentence (see R.M. Hare, especially Sorting Out Ethics [1997]), and you believe that those logical properties are determined by the way in which those concepts are used (see Wittgenstein's Philosophical Investigations [1953]), then I think this argument is mainly good evidence that the proper understanding of what moral realism means today is the following: "Moral realism holds that moral statements are facts, and the truth of a fact must be universal within the inertial frame of reference in which that fact exists; that is, that truth cannot depend upon the person thinking the fact within that inertial frame of reference."

Comment by HStencil on Careers Questions Open Thread · 2020-12-11T20:49:34.007Z · EA · GW

Glad to hear it helped! Of course, usual caveats apply about the possibility that your field is quite different from mine, so I wouldn't stop looking for advice here, but hopefully, this gives you a decent starting point!

Comment by HStencil on Careers Questions Open Thread · 2020-12-11T20:43:12.890Z · EA · GW

Regarding the data-driven policy path, my sense is that unfortunately, most policy work in the U.S. today is not that data-driven, though there's no doubt that that's in part attributable to human capital constraints. Two exceptions do come to mind, though:

  1. Macroeconomic stabilization policy (which is one of Open Philanthropy's priority areas) definitely fits the bill. Much of the work on this in the U.S. occurs in the research and statistics and forecasting groups of various branches of the Federal Reserve System (especially New York, the Board of Governors in D.C., Boston, Chicago, and San Francisco). These groups employ mathematical tools like DSGE and HANK models to predict the effects of various (mainly but not exclusively monetary) policy regimes on the macroeconomy. Staff economists working on this modeling regularly produce research that makes it onto the desks of members of the Federal Open Market Committee and even gets cited in Committee meetings (where U.S. monetary policy is determined). To succeed on this path in the long term, you would need to get a PhD in economics, which probably has many of the same downsides as a PhD in computer science/AI, but the path might have other advantages, depending on your personal interests, skills, values, motivations, etc. One thing I would note is that it is probably easier to get into econ PhD programs with a math-CS bachelor’s than you would think (though still very competitive, etc.). The top U.S. economics programs expect an extensive background in pure math (real analysis, abstract algebra, etc.), which is more common among people who studied math in undergrad than among people who studied economics alone. A good friend of mine actually just started her PhD in economics at MIT after getting her bachelor’s in math and computer science and doing two years of research at the Fed. This is not a particularly unusual path. If you're interested and have any questions about it, feel free to dm me.
  2. At least until the gutting of the CDC under our current presidential administration, it employed research teams full of specialists in the epidemiology of infectious disease who made use of fairly sophisticated mathematical models in their work. I would consider this work to be highly quantitative/data-driven, and it's obviously pertinent to the mitigation of biorisks. To do it long-term, you would need a PhD in epidemiology (ideally) or a related field (biostatistics, computational biology, health data science, public health, etc.). These programs are also definitely easier to get into with your background than you would expect. They need people with strong technical skills, and no one leaves undergrad with a bachelor's in epidemiology. You would probably have to get some relevant domain experience before applying to an epi PhD program, though, likely either by working on the research staff at someplace like the Harvard Center for Communicable Disease Dynamics or by getting an MS in epidemiology first (you would have no trouble gaining admission to one of those programs with your background). One big advantage of epidemiology relative to macroeconomics and AI is that (my sense is) it's a much less competitive field (or at least it certainly was pre-pandemic), which probably has lots of benefits in terms of odds of success, risk of burnout, etc. Once again, feel free to dm me if this sounds interesting to you and you have any questions; I know people who have gone this route, as well.

Comment by HStencil on Careers Questions Open Thread · 2020-12-10T19:24:26.818Z · EA · GW

I think a lot of the day-to-day feelings of fulfillment in high-impact jobs come from either: 1) being part of a workplace community of people who really believe in the value of the work, or 2) seeing first-hand the way in which your work directly helped someone. I don't really think the feelings of fulfillment typically come from the particular functional category of your role or the set of tasks that you perform during the workday, so I wonder how informative your experiments with data science, for instance, would be with respect to the question of identifying the thing that you feel you "must do," as you put it. If I had to guess, I'd speculate that the feeling you're looking for will be more specific to a particular organization or organizational mission than to the role you'd be filling for organizations generally.

Comment by HStencil on Careers Questions Open Thread · 2020-12-10T17:57:20.779Z · EA · GW

If you're committed to using data science to address public policy questions in the U.S. (either in government or a think tank-type organization), I suspect you'd be best-served by a program like one of these:

https://mccourt.georgetown.edu/master-of-science-in-data-science-for-public-policy/

https://harris.uchicago.edu/academics/degrees/ms-computational-analysis-public-policy-mscapp

https://macss.uchicago.edu/  

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T19:01:22.519Z · EA · GW

This is all fantastic information to have — thank you so much for explaining it! I'm really glad to have improved my understanding of this.

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T18:56:54.514Z · EA · GW

Yes, that argument for veg*anism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values weigh out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step,” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. I think the main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were:

  1. TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
  2. There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than organizations directly fighting, say, the global malaria burden. [I have updated somewhat against this view on account of Jon’s comments.]
  3. If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it most likely would get filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
  4. There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for achieving aims unrelated to having a positive impact. For instance, I think Peter Singer’s work likely is highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able—if he wished—to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence inside of TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully, it gets the point across.

Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T15:34:23.638Z · EA · GW

I don’t know anything about the norms and expectations in CS, but in my field (a quantitative social science), it is basically impossible to get into PhD programs without research experience of some kind, and you would likely be advised, first and foremost, to seek a master’s as preparation, and if it went well, apply to PhD programs thereafter. The master’s programs that would be recommended would be designed for people interested in transitioning from industry to academia, and someone like you would probably have a good shot at getting in. They can be expensive, though. If you wanted to avoid that, you would need to come up with some other way of demonstrating research acumen. This could mean transitioning into an academic research staff role somewhere, which (in my field, though maybe not yours) would help your odds of admission a ton. It could also mean reconnecting with an old college professor about your interests and aspirations and seeing if they’d be willing to work on a paper with you (I know someone who did this successfully; the professor likely agreed to it because she judged that my friend’s work had a high chance of being published). Finally, you could just try to write a publishable research paper on your own. In my field, this seems to me like it would be very hard to do, especially without prior research experience, but even if it didn’t turn into a publication, if the paper were solid, you could submit it as a supplemental writing sample with your applications, and it would likely help to compensate for weaknesses in your research background (for what it’s worth on this point, a friend-of-a-friend of mine was a music conservatory student who was admitted to a philosophy doctoral program after self-studying philosophy entirely on her own).

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T06:51:48.943Z · EA · GW

I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus. You’re right that under some circumstances, a risk-neutral expected value calculus will favor small donors donating to “step-functional” charities that can’t scale their operations smoothly per marginal dollar donated, but my argument was that in the specific case of multiplier charities, the odds of a small-dollar donation being counterfactually responsible for moving the organization up an impact step are particularly infinitesimal (or at least that this is the most reasonable thing for small-dollar donors without inside information to believe). The fact that impact in this context is step-functional is a part of the explanation for the argument, not the conclusion of the argument.

With respect to the question of “relative step-functionality,” though, it’s also not clear to me why, compared to a multiplier charity, one would think that giving to GiveWell’s Maximum Impact Fund would be any more step-functional on the margin. It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table. Moreover, I find this particularly confusing in the case of the Maximum Impact Fund, which allocates grants to target specific funding gaps, often corresponding to very well-defined organizational initiatives (e.g. expansions into new regions), the individual cost-effectiveness of which GiveWell has modeled. It’s obviously true that regardless of whether one gives to a multiplier charity or to the Maximum Impact Fund, there is some chance that one’s donations either A) languish unused in a bank account, or B) counterfactually cause something hugely impactful to happen, but given that in the case of GiveWell, we know the end recipients have a specific, highly cost-effective use already planned out for this particular chunk of money (and if they have extra, they can just put it toward… more nets), whereas in the multiplier charity case, we don’t have any reason to believe they could use these specific funds at all (not to mention productively), doesn’t it seem like the balance of expected values here favors going with GiveWell?

Finally, while it is obviously true that most nets don’t save lives, I fail to see how that bears on the question at hand. We both agree that this is reflected in GiveWell’s cost-effectiveness analysis, which we (presumably) both agree that we have strong reason to trust. We have no such independent cost-effectiveness analysis of any multiplier charity. And the fact that most nets don’t save lives certainly isn’t a reason why the impact of donations to the AMF would not rise by some smooth function of dollars donated. The only premise on which that argument depends is that if they don’t have anything else good to do with my money (which presumably, they do, having earned a grant from the Maximum Impact Fund), they can always just buy more nets. Given the current scale of global net distribution relative to the total malaria burden, it seems wildly unlikely that a much larger percentage of those nets would fail to save lives than was the case during previous net distributions.

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T02:58:43.430Z · EA · GW

It sounds like you're doing some awesome work, and these are great questions, but I very seriously doubt you will be able to get good answers to them from anyone without domain expertise in your field, so this may not be the best place to look. I personally have some very cursory exposure to biostatistics and health data science (definitely less than you), but I imagine I have significantly more familiarity with the area, especially in the U.S., than most people on the EA Forum, and I have zero clue about the answers to your questions.

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T02:35:21.453Z · EA · GW

I may be missing or misunderstanding something, but it seems like your worries/roadblocks about your option 1 all pertain specifically to the MBA/MPA component. If that is the case, and you think you really might want to work in tech, I'd encourage you to consider trying to transition directly to a tech company without first getting another degree. Anecdotally, my sense is that MBAs and MPAs are useful mainly for networking and allowing you to command a higher starting salary in many roles, not for what you learn during the degree (though this depends somewhat on your prior academic and professional background, as well as on the specific program you're enrolled in, of course). 

I imagine that the main reason you've been considering getting an MBA or an MPA is because you have a sense that you need it to make a significant career shift. I'm not so sure. I don't know how easy it is to spend two years as a software engineer at a tech company (instead of spending those two years in grad school) and then transition into a product management role, but I imagine that particularly at smaller or medium-sized tech companies, this must be a thing that happens. And even if I'm wrong about that, I know people who went straight from coding roles at professional services firms to product manager-track positions (e.g. product data analyst) at medium-sized tech companies (admittedly, outside of Silicon Valley). I imagine these people will become product managers faster this way than they would have if they'd gotten an MBA in the middle. Finally, regarding going into debt for an MPA, you should consider applying to Princeton's program; it's free to everyone who is admitted!

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-09T01:32:14.552Z · EA · GW

Thank you — please do!

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-08T23:21:47.076Z · EA · GW

I'm glad to hear you found my reasoning useful, and I appreciate your explanation of where you think it may go astray. I'm a fairly marginal actor in the grand scheme of the EA community and don't feel I am anywhere close to having a clear view on whether the returns to adding further vetting or oversight structures would outweigh the costs. Naïvely, it seems to me that some kinds of organizational transparency are pretty cheap. However, it occurs to me that even though I've spent a fair bit of time on the TLYCS website over the past several years and gave to your COVID-19 response fund back in the spring, I honestly have no recollection of the extent of your transparency in the status quo. In a similar vein, to put it more flippantly than you deserve, I don't think most people I know in the community (myself included) really understand what you do. I was even unaware of how high your estimated multiplier is (if you had asked me to guess prior to your comment, there's no way I would've gone higher than 4x), and now, I am quite curious about how you’re estimating that and what you think is driving such a high return. I expect this is probably my fault for not seriously investigating "multiplier charities" when deciding where to give and instead presuming that they likely aren't a good fit for small donors like me for the reasons I explained. However, I also think I am exactly the persuadable small donor who you would want to be touching with whatever outreach or marketing you're doing, so maybe there's room for improvement on your part there, as well.

For what it's worth, if you were going to invest in adding some kind of vetting or oversight structure, here are a few questions that—inspired by your comment—I would most want it to answer before making a determination about whether to give to TLYCS:

1. Why have TLYCS's expenses tripled since 2016? Other than the website overhaul and the book launch, what have you been spending on? Are you aiming to engage in similar (financial) growth again in the near term? If not, would you be if you had more support from small donors?

2. What do you mean by "communicate with more donors"? What does that involve? How costly is it on a per-donor basis? How scalable is it?

3. When you spend more money (beyond your basic operating expenses: salaries, office space if you have it, etc.), and that spending seems to be associated with an increase in donor interest in your recommended charities, what do you think generally explains that relationship, and how do you determine that such an increase in donor interest was counterfactually caused by the increase in spending?

4. More generally, and this may be an extremely dumb question/something you have explained at length elsewhere, how do you arrive at your "money moved" estimates, and how do you ensure that they are counterfactually valid?

5. Do you personally believe that TLYCS will hit diminishing marginal returns on investments in growing its base of donors to its recommended charities sometime in the near or intermediate term?

You obviously do not have to answer these questions here or at all. I wrote them out only to provide a sense of what information I feel I am missing.

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-08T22:47:09.610Z · EA · GW

I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.

Similarly, I disagree with your contention that morality isn't, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do,” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”

Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside. 

If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-08T18:25:16.461Z · EA · GW

I’m not sure, but it seemed to me that this was the view that you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?

Comment by HStencil on How have you become more (or less) engaged with EA in the last year? · 2020-12-08T15:47:19.577Z · EA · GW

I also think that asking people questions which suggestively indicate why you think a view they hold is wrong can be a good approach (e.g. “But don’t you think...?”).

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-08T15:21:43.623Z · EA · GW

I don’t think my reasoning falls neatly into any one of the categories you listed, so I’ll post it as its own comment. I don’t give to “multiplier” charities mainly because I think a huge percentage of the good that they do probably comes from running great websites, but the fixed costs that were necessary to get these websites built and online have already been paid, basically, and while I believe that initial investment probably had a large multiplier, I’m far less convinced that subsequent expenditures by these organizations (other than maintaining their websites) will have such a large multiplier (and big donors would happily step in—or the multiplier charities would tell us—if maintenance costs could not be met).

Furthermore, in the exceptional cases when subsequent expenditures would likely have large multipliers, my sense is that usually, those expenditures require atypically substantial amounts of funding, without which the investments in question cannot happen. I am not a large donor, and it just isn’t clear to me that if I give a few thousand dollars to a multiplier charity instead of to, say, the GiveWell Maximum Impact Fund, that few thousand dollars will enable anything particularly high-impact to occur that otherwise wouldn’t have. By my mental model—which may be mistaken—for each additional dollar I give to GiveWell’s Maximum Impact Fund, my impact rises by some smooth function that probably isn’t far off linear. In contrast, I think that the value of additional dollars given to a multiplier charity probably follows some kind of a step function. I understand that my donations might increase the probability of the multiplier organization being able to “go up a step” sooner, but I suspect that if the step were truly likely to have an extraordinarily high charitable return, large donors, like foundations or ultra-high net worth individuals, would fund it no matter what, and the fact that I’d chipped in a few thousand on the margin wouldn’t change their calculus on that one bit. I’m just not the limiting factor here.
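
To put rough numbers on this intuition, here's a toy sketch in Python. Every figure in it (`value_per_dollar`, `step_cost`, `step_value`, `p_step_otherwise_unfunded`) is an invented placeholder for illustration, not an estimate for any real charity:

```python
# Toy model only: every number below is an invented placeholder,
# not an estimate for any real charity.

def marginal_impact_smooth(donation, value_per_dollar=0.001):
    """Smooth charity: impact scales roughly linearly with dollars donated."""
    return donation * value_per_dollar

def expected_marginal_impact_step(donation, step_cost=500_000,
                                  step_value=1_000,
                                  p_step_otherwise_unfunded=0.01):
    """Step-functional charity: impact jumps only when an entire 'step'
    (e.g., a major expansion) gets funded. A small gift counts only if
    the step would otherwise go unfunded AND the gift is pivotal."""
    p_pivotal = donation / step_cost
    return p_step_otherwise_unfunded * p_pivotal * step_value

donation = 3_000  # "a few thousand dollars"
print(marginal_impact_smooth(donation))         # -> 3.0 impact units
print(expected_marginal_impact_step(donation))  # -> 0.06 impact units
```

On these made-up parameters, the smooth option dominates by a factor of 50; a sufficiently large step value or a high enough chance that large donors leave the step unfunded could flip the comparison, which is exactly the empirical question at issue.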

Finally, multiplier charities seem like sort of obvious breeding grounds for conflicts of interest in the community, and I’m quite wary about that because 1) I think the community has had a poor track record on managing conflicts of interest historically (though this has unquestionably improved), and 2) there is effectively no oversight of multiplier charities. They don’t have to go through anywhere close to the level of scrutiny or provide nearly as much transparency as GiveWell’s top charities, so I’m much more reluctant to take many of their claims to impact at face value.

Ultimately, I feel that my giving to multiplier charities would be troublingly analogous to the fact that around a quarter of foreign aid by OECD countries never leaves the donor country because it gets spent on consultants, auditors, and evaluators domestically. There is obviously a plausible case that the consulting, auditing, and evaluation in question increases the value of the foreign aid so much that it pays for itself, but doesn’t it seem more likely that these firms get retained for bad reasons (they lobby governments, have friends in high places, employ voters, tell unrepresentative horror stories about the misuse of aid, etc.) than for good reasons? I don’t mean to implicate multiplier charities in an unsavory comparison... but for the fact that the unsavory comparison is actually a meaningful reason why I don’t give to them. I just have no idea how I could tell with confidence that the way they would use my marginal dollars would actually beat other opportunities.

Comment by HStencil on My mistakes on the path to impact · 2020-12-08T05:49:58.670Z · EA · GW

This puts to words so many intuitions that have crept up on me—not necessarily so much about EA as about job-hunting as a young generalist with broadly altruistic goals—over the last few years. Like you, earlier this year, I transitioned from a research role I didn’t find fulfilling into a new position that I have greatly enjoyed after a multi-year search for a more impactful and rewarding job. During my search, I also made it fairly deep in the application process for research roles at GiveWell and Open Phil and got a lot of positive feedback but was ultimately unable to land either those positions or any others in EA (this is not a criticism of those hiring teams — they were absolutely wonderful every step of the way). Anyway, it’s great to hear from others who have had similar experiences, and it’s wonderful that you’re doing so well now. I think this post is fantastic, and I plan to send it to a number of undergrads I know who are about to start out their careers. Thank you so much for sharing it!

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-08T01:46:29.368Z · EA · GW

It’s not clear to me how one can believe 1) that there is nothing that ultimately explains what makes a person’s life go well for them, and 2) that we have an overriding moral reason to alleviate suffering. It would seem dangerously close to believing that we have an overriding moral reason to alleviate suffering in spite of the fact that it is not Bad for those who experience it. You might claim that suffering is instrumentally bad, that it makes it harder to achieve... whatever one wants to achieve, but presumably, if achieving whatever one wants to achieve is valuable, it is valuable because of the way in which it leads one’s life to “go well.” If that is the case, then you have a theory of well-being. If, on the other hand, achieving whatever one wants to achieve is not valuable in any absolute sense, then it is hard to say why it would be valuable at all, and you, again, would struggle to justify why suffering is a bad.

Comment by HStencil on What are the "PlayPumps" of Climate Change? · 2020-12-06T15:29:58.935Z · EA · GW

They explain why they offer offset recommendations (even though, like Founders Pledge, they believe CATF is likely more cost-effective) at some length in their launch post: https://forum.effectivealtruism.org/posts/xfN7AwkjYBpEbbz6x/re-launching-giving-green-an-ea-run-organization-directing

Comment by HStencil on Idea: Resumes for Politicians · 2020-12-04T04:43:52.099Z · EA · GW

A variety of different organizations have attempted projects like this in the past and have struggled to generate interest in participating among political candidates. For the most well-known, see: https://ballotpedia.org/Survey.

Comment by HStencil on Recommendations for prioritizing political engagement in the 2020 US elections · 2020-11-20T20:54:09.660Z · EA · GW

Looks like BlockPower is holding a hackathon tomorrow to help build out the platform they’re using for their GOTV efforts in the Georgia runoffs, if anyone’s interested!

Comment by HStencil on What are some EA projects for students to do? · 2020-11-18T04:05:52.745Z · EA · GW

When I was last on the job market, I spent a bunch of my free time trying to come up with well-justified cost-effectiveness estimates for a wide array of different interventions across several cause areas. I think something like this technically meets your three criteria, but I suspect it isn't quite consistent with the spirit of what you're looking for (i.e. projects that take longer than a week to do and will actually probably have some positive impact). Even though I don't think my CEAs did anything at all to improve the world themselves, I'd still recommend this to early-career EAs, if only because I thought it was a huge help when applying for jobs at GiveWell and Open Phil (which, for the sake of full disclosure, I was not ultimately offered, though I made it fairly far in the process). Even for people who have no interest in working somewhere like GiveWell or Open Phil, I think doing this trains a lot of important skills: conducting literature reviews, thinking about counterfactuals and measuring counterfactual impact, thinking in terms of DALYs or QALYs, some math, etc., and it just isn't that much of a commitment, either. I probably spent a few hours a day, most days, for up to a week on each estimate, but you could spend more or less as you saw fit. It has an appealing kind of flexibility. Finally, it's highly scalable: there's no shortage of things to estimate the cost-effectiveness of, so there's no reason why tons of people couldn't all reap the human capital and intellectual benefits of doing this. In the aggregate, I think that itself could have a pretty positive impact, and if someone were to find strong evidence that some previously overlooked intervention was actually competitive with, say, the AMF, that would be a pretty great thing for the EA community to know!
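
To give a flavor of the core arithmetic, here is a minimal, hypothetical sketch of the simplest kind of estimate. Every number and parameter name below is an illustrative placeholder I've made up for this comment, not a figure for any real intervention:

```python
# A toy cost-effectiveness estimate: dollars per DALY averted for a
# hypothetical preventive health intervention. All inputs below are
# illustrative placeholders, not estimates for any real program.

def cost_per_daly_averted(cost_per_person: float,
                          baseline_risk: float,
                          risk_reduction: float,
                          dalys_per_case: float,
                          counterfactual_coverage: float = 0.0) -> float:
    """Dollars per DALY averted, net of counterfactual coverage.

    counterfactual_coverage is the share of recipients who would have
    been reached anyway without this program, so their outcomes don't
    count toward the program's counterfactual impact.
    """
    cases_averted = baseline_risk * risk_reduction * (1 - counterfactual_coverage)
    dalys_averted = cases_averted * dalys_per_case
    return cost_per_person / dalys_averted


if __name__ == "__main__":
    # Placeholder inputs: $5 per person reached, 2% baseline disease
    # risk, 30% relative risk reduction, 10 DALYs per case, and 25% of
    # recipients who would have been covered anyway.
    estimate = cost_per_daly_averted(5.0, 0.02, 0.30, 10.0, 0.25)
    print(f"~${estimate:,.0f} per DALY averted")  # ~$111 in this toy case
```

Real CEAs layer far more structure on top of this (uncertainty ranges, multiple outcome types, leverage and funging adjustments), but the counterfactual logic at the core is the same.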

Comment by HStencil on Recommendations for prioritizing political engagement in the 2020 US elections · 2020-11-17T19:00:51.118Z · EA · GW

Thanks so much -- really appreciate your taking the time to look into this stuff!

Comment by HStencil on Recommendations for prioritizing political engagement in the 2020 US elections · 2020-11-10T18:35:31.961Z · EA · GW

I'm curious whether Landslide Coalition has given any thought to how one could most effectively make an impact on the Georgia Senate run-offs. Obviously, the stakes are much lower now that Trump has lost the presidency, but I think there's still a reasonable case to be made that helping Democrats win Senate control at this moment in history is hugely important from an EA perspective. For those who think that's a priority, I assume deep canvassing through People's Action remains among the best volunteer opportunities available, but do you know whether they could productively use additional funding right now? Or could Working America, if they're at all involved in the Georgia races? Are there any voter registration nonprofits that are running especially cost-effective programs? I'm sure someone must be focused on registering the ~23,000 teenagers who will become eligible to vote in Georgia between the 2020 general registration deadline and the January run-off registration deadline... But I assume the campaigns themselves are (or will soon be) absolutely flush with cash and probably are not the most impactful groups to fund on the margin. Any thoughts?

Comment by HStencil on What types of charity will be the most effective for creating a more equal society? · 2020-10-16T20:27:42.254Z · EA · GW

I'd also add the International Consortium of Investigative Journalists. They do fantastic work in a similar vein.

Comment by HStencil on Is shareholder activism worth it? · 2020-09-08T23:35:40.119Z · EA · GW

The levers of corporate governance are pretty limited. The corporate form is designed to limit the extent to which minority owners of common stock can intervene in corporate operations. As a result, most proxy proposals of social concern pertain to public disclosures (e.g. of environmental impact, of lobbying expenditures, of pay equity data, etc.) or to the appointment of sympathetic board members/removal of unsympathetic board members. These social-issue proposals are nowhere near a majority of all proxy proposals, but they're a sizable percentage of shareholder-submitted proposals (most of which do not pass). For more detail on this, see this comment.

Comment by HStencil on Is shareholder activism worth it? · 2020-09-08T20:27:20.136Z · EA · GW

Ah, sorry, I must have misread your original question. Here are my top-level takes on the question you did, in fact, ask:

1) I see some of these Calvert funds have done well over the past few years, but I'm sufficiently convinced by some form of the efficient markets hypothesis to be skeptical that their above-market returns will continue to exceed their fees over the medium to long term.

2) While I do think that shareholder activism through the proxy process can occasionally yield important, positive changes in the corporate world, the levers of corporate governance are limited enough that I very seriously doubt that the money you're spending on Calvert's fees is doing more good maintaining your investments in those funds than it would do if it were donated to, say, Malaria Consortium's SMC program.

3) It's important to remember the actual comparison here. It's not Calvert vs. an evil money manager; it's probably Calvert vs. BlackRock, which has been loudly pushing its portfolio companies to be more conscientious about their impact on the climate. On account of its scale, BlackRock's mutual fund and ETF offerings will be much, much cheaper than comparable funds offered by Calvert, and BlackRock also controls a far larger number of shareholder votes than Calvert does. Ultimately, you have to ask yourself: How often do BlackRock and Calvert vote in different directions on issues that I think are genuinely high-impact (assuming Calvert always casts a socially optimal vote, which I also doubt)? How often are Calvert's votes likely to decide those shareholder elections when BlackRock is voting the other way? And how often are my Calvert fund shares likely to make the difference in whether Calvert decides those shareholder elections favorably? If you were to try to model that, I suspect you'd find that investing through Calvert isn't much better for the world at all than investing through BlackRock, and to the extent that it has some extraordinarily modest advantage, it is likely inferior to the value of simply donating the fee money to a high-impact charity. Of course, if you think the Calvert funds will continue to beat the market over time, that would change the calculus, but like I said, I consider that unlikely.
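
If you did want to attempt that modeling, one rough way to decompose it is the product below; all three probabilities are unknowns you would have to estimate yourself:

```latex
% A rough decomposition of the chance that holding the Calvert fund
% (rather than a cheap index fund) changes a contested vote's outcome.
% Each probability is an unknown to be estimated, not a known quantity.
\[
\Pr(\text{my shares matter}) \approx
\Pr(\text{votes differ}) \times
\Pr(\text{Calvert pivotal} \mid \text{votes differ}) \times
\Pr(\text{my shares pivotal for Calvert}).
\]
% Each factor is plausibly small, so the product is likely tiny.
```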

Comment by HStencil on Is shareholder activism worth it? · 2020-09-05T16:59:07.231Z · EA · GW

I used to do some work in this space and may get around to writing a more in-depth response soon, but I'm pretty busy right now, so in case I don't, two things:

1) Even if you think shareholder activist strategies have outperformed the market historically, the activism space has become substantially more competitive in the last 3-4 years or so, and it has begun to face growing regulatory pressures, so I generally expect that any activism-related arbitrage opportunity that may still exist will shrink over time until it is no larger than the cost of mounting an activism campaign.

2) See here for some pertinent background on the corporate governance ecosystem.

Comment by HStencil on edoarad's Shortform · 2020-07-06T20:53:28.233Z · EA · GW

At least until quite recently, there was a fairly uniform consensus in mainstream Anglo-American economics that the convergence thesis was true. I think this was mainly because it was based on fundamental theoretical insights that were believed to be relatively unimpeachable, like the Solow Model and the Stolper-Samuelson Theorem.

The Solow Model uses a formal representation of the idea that capital can be put to better use (yielding a higher economic return) in places where it is more scarce to demonstrate that, all other things being equal, places further from a given steady-state output level will grow toward that level faster than places nearer to it. In other words, ceteris paribus, places where capital stock is lower will grow faster than places where capital stock is higher because adding a marginal unit of capital in a capital-poor economy will generate a greater return than adding a marginal unit of capital in a capital-rich economy, where all the high-yielding capital investment opportunities have already been funded. (Bear in mind, though, that “ceteris paribus” is doing a lot of work in that sentence. You might reasonably claim that the traditional Solow Model holds constant nearly everything we ought to care about in trying to explain development outcomes.) To the extent that it’s true, though, in a world with open cross-border capital flows, one would expect capital to flood from low-return investment opportunities in wealthier countries to high-return investment opportunities in poorer countries. Alas, the evidence that this is actually taking place on a large scale is mixed at best, and other factors excluded from the neoclassical theories of international trade and finance likely play a large role in determining the global allocation of capital.
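
For reference, here is a minimal statement of that convergence logic, assuming the standard Cobb-Douglas setup, with s the saving rate, n population growth, and δ depreciation:

```latex
% A minimal sketch of Solow convergence, assuming f(k) = k^alpha with
% 0 < alpha < 1, where k is capital per worker.
\[
\dot{k} = s f(k) - (n + \delta)k
\qquad\Longrightarrow\qquad
\frac{\dot{k}}{k} = s k^{\alpha - 1} - (n + \delta).
\]
% Because alpha < 1, the growth rate of capital per worker falls as k
% rises toward the steady state k^* (where s f(k^*) = (n + \delta)k^*),
% so, all else equal, capital-poor economies grow faster.
```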

The productivity term in the Solow Model also often comes up in discussions of convergence. This term, representing an economy's efficiency at deploying its factors of production to make things, is frequently treated, for the purpose of simplification, as a representation of an economy's level of technological advancement alone. Traditional growth economists tend to treat rates of technological advancement as largely exogenous (whether this assumption is realistic is the subject of considerable debate). However, separate models of global technological advancement are typically built around the idea that it's cheaper to copy a technology that was developed in another country and put it to use in one's domestic industries than it is to develop a wholly new technology from scratch, thereby advancing the technological frontier. As a result, economists often conclude that countries not yet at the technological frontier will enjoy faster productivity growth than countries that are at the technological frontier, in accordance with the convergence paradigm.
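
One common stylized way to formalize that catch-up intuition (λ here is a hypothetical adoption-speed parameter, not something from any particular paper):

```latex
% A stylized catch-up model: follower country i's technology A_i
% converges toward the frontier A_F at a rate proportional to its
% distance from the frontier.
\[
\dot{A}_i = \lambda \, (A_F - A_i)
\qquad\Longrightarrow\qquad
\frac{\dot{A}_i}{A_i} = \lambda \left( \frac{A_F}{A_i} - 1 \right).
\]
% The further inside the frontier a country sits (the larger A_F / A_i),
% the faster its productivity growth, consistent with convergence.
```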

The Stolper-Samuelson Theorem shows that when a national economy specializes in the production of a good in which it has a comparative advantage, and the relative price of that good then rises on global markets, the return on investment in the factor of production that most contributes to making that good will rise. For example, suppose a country has a comparative advantage in making blue jeans, it specializes in making blue jeans, and labor is the most important factor of production in making blue jeans; if the relative price of blue jeans on global markets rises, then the return on investment in labor in that country will rise. This is equivalent to saying that the marginal product of labor in that country will rise, and in a competitive labor market, the price of labor (the wage) should equal its marginal product, so producer wages should rise with, for instance, a relative increase in global demand for blue jeans (which would push up the price).
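
In its simplest competitive-market form (abstracting away from the theorem's full magnification effect), that last step is just:

```latex
% Under perfect competition, labor earns the value of its marginal
% product; with p the world price of jeans and MPL the marginal
% product of labor in jean production:
\[
w = p \cdot \mathrm{MPL}.
\]
% Holding MPL fixed, a rise in the relative price p passes through
% directly to the wage w.
```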

There is vigorous debate over the extent to which the Stolper-Samuelson Theorem is applicable to the world in which we live today. It requires making a number of assumptions in order for its conclusion to hold (constant returns to scale, perfect competition, an equal number of factors and products). One famous counterexample to Stolper-Samuelson was proposed by Raúl Prebisch and Hans Singer and was embraced by the anti-trade left of the postwar years. Prebisch and Singer propose that because complex manufactured goods (like computers) exhibit greater income elasticity of demand than simple commodities (like wheat or coffee), if a country specializes in exporting wheat (consistent with its comparative advantage) and relies on imports from foreign manufacturers to get computers, then as global incomes rise, it will suffer declining terms of trade (i.e. as time passes, each imported computer will cost more and more wheat). Today, the Prebisch-Singer Hypothesis, as it's called, has received some degree of very qualified acceptance by mainstream economists. Its fundamental proposal, that it doesn't always make sense to treat comparative advantages as destiny, is quite widely accepted, though more on the basis of Paul Krugman's work in New Trade Theory (demonstrating, e.g., that comparative advantages can arise from economies of scale in addition to initial factor endowments) than on the basis of Prebisch and Singer's work. However, the specifics of the hypothesis are regarded as an extremely special case, an exception to what is generally true of developing countries. There are two main reasons for this. The first is that many developing countries specialize in the extraction of metals and minerals that are necessary inputs in making complex manufactured goods, like copper and silicon. These commodities likely violate Prebisch-Singer's assumption that simple commodity goods necessarily exhibit lower income elasticity of demand than complex manufactured goods. The second reason is that many of the complex manufactured goods that the poorest countries import from wealthier countries actually probably increase those countries' productivity in producing basic commodities (consider, for instance, the way organizations like Precision Agriculture for Development deliver scientific agricultural guidance to farmers throughout South Asia and Sub-Saharan Africa via their cell phones).
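
The mechanics of the hypothesis reduce to the behavior of the terms of trade:

```latex
% Terms of trade: the price of a country's exports relative to the
% price of its imports.
\[
\mathrm{ToT} = \frac{P_{\text{exports}}}{P_{\text{imports}}}.
\]
% If rising world income raises demand for manufactured imports faster
% than for commodity exports (higher income elasticity of demand),
% P_imports rises relative to P_exports and ToT falls: each exported
% unit of wheat buys fewer imported computers over time.
```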

I’m not sure to what extent this theoretical background will be helpful to you as you think about convergence, but regarding the facts on the ground, with very few exceptions (like Botswana), almost all of the progress toward convergence in the last four decades has taken place in East Asia. While the “Asian Miracle” is very much real, it may itself prove to be a special case, specific to the region or the historical period in which it took place. As premature deindustrialization begins to take its toll on those countries that are not yet rich, there are, I think, a number of serious concerns about the continued viability of the export-led growth models that lifted countries like South Korea and Japan out of poverty. While the theoretical insights on which those models were based are robust, it remains to be seen to what extent they continue to apply in our 21st-century economy. Similarly, the traditional convergence thesis assumes increasing liberalization of international trade and capital flows, a premise that has grown increasingly untenable over the last five years.

Comment by HStencil on HStencil's Shortform · 2020-07-01T05:26:18.489Z · EA · GW

Thanks! I booked a slot on your Calendly -- looking forward to speaking Thursday (assuming that still works)!

Comment by HStencil on HStencil's Shortform · 2020-07-01T01:44:26.535Z · EA · GW

Thank you so much for putting so much thought into this and writing up all of that advice! Your uncertainties and hesitations about the stats itself are essentially the same as my own. Last night, I passed this around to a few people who know marginally more about stats than I do, and they suggested some further robustness checks that they thought would be appropriate. I spent a bunch of time today implementing those suggestions, identifying problems with my previous work, and re-doing that work differently. In the process, I think I significantly improved my understanding of the right (or at least a good) way to approach this analysis. I did, however, end up with a quite different (and less straightforward) set of conclusions than I had yesterday. I've updated the GitHub repository to reflect the current state of the project, and I will likely update the shortform post in a few minutes, too. Now that I think the analysis is in much better shape (and, frankly, that you've encouraged me), I am more seriously entertaining the idea of trying to get in touch with someone who might be able to explore it further. I think it would be fun to chat about this, so I'll probably book a time on your Calendly soon. Thanks again for all your help!

Comment by HStencil on HStencil's Shortform · 2020-06-30T03:39:05.201Z · EA · GW

Thanks so much! I'm thrilled to hear you liked it. To be honest, my main reservation about doing anything non-anonymous with it is that I'm acutely aware of the difficulty of doing statistical analysis well and, more importantly, of being able to tell when you haven't done statistical analysis well. I worry that my intro-y, undergrad coursework in stats didn't give me the tools necessary to be able to pick up on the ways in which this might be wrong. That's part of why I thought posting it here as a shortform would be a good first step. In that spirit, if anyone sees anything here that looks wrong to them, please do let me know!