HStencil's Shortform 2020-06-29T18:46:29.009Z
What would "doing enough" to safeguard the long-term future look like? 2020-04-22T21:47:03.389Z
Did Fortify Health receive $1 million from EA Funds? 2019-11-26T18:41:41.684Z
Credit Cards for EA Giving 2019-11-11T21:35:15.271Z


Comment by HStencil on Is there a news-tracker about GPT-4? Why has everything become so silent about it? · 2022-10-29T20:46:08.218Z · EA · GW

There's been some anticipatory buzz about it on Twitter. No clue how credible this is, but the claim seems to be that we should expect it to be unveiled in early 2023. Also consider these comments from Sam Altman last year.

Comment by HStencil on Any Legal Consultancies for Early EA Orgs? · 2022-08-14T04:04:21.583Z · EA · GW

I'm not sure how you'd reach the entity in question, but I noticed an FTX Future Fund regrant addressing this listed on the Fund's website.

Comment by HStencil on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T19:27:00.712Z · EA · GW

I don't have anything approaching a clear sense of how sensitive & specific the gaokao, IIT-JEEs, etc. are at detecting extraordinary intellect, but I will say that, yeah, I have not heard good things about the incentives that those exams create for students seeking admission to university. Even in places like France, where access to higher education is much less competitive (and much less high-stakes) than in China or India, the baccalauréat seems like it distorts students' incentives in pretty unproductive directions. Whether or not that makes it worse than the current system in the U.S., I don't know.

Comment by HStencil on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T14:13:55.819Z · EA · GW

This assumes that the task of differentiating Ivy Smart+ applicants from mere Ivy Smart applicants is an efficiently solvable screening problem. I think it very likely isn’t and that the costs (to both universities and their applicants) of reworking the application process so that it could reliably distinguish the 99.5th percentile (by intellect) of high schoolers applying to college nationally from the 99th percentile of that group would be unacceptably high. (Notably, the SAT/ACT can’t solve this problem — they’re noisy on the order of several percentiles.)
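To put a rough number on that last claim, here's a quick back-of-the-envelope simulation (my own sketch; the 0.9 test-retest reliability is an assumed ballpark figure, not something from the post):

```python
# How often would a standardized test mis-rank a 99.5th-percentile
# test-taker below a 99th-percentile one? Model: observed score =
# true ability + Gaussian noise, calibrated so that test-retest
# reliability is ~0.9 (an assumed ballpark, in ability-SD units).
import random

random.seed(0)
reliability = 0.9
noise_sd = (1 / reliability - 1) ** 0.5  # ~0.33 SD of noise per sitting

z_99, z_995 = 2.326, 2.576  # z-scores of the 99th and 99.5th percentiles

trials = 100_000
flips = sum(
    z_99 + random.gauss(0, noise_sd) > z_995 + random.gauss(0, noise_sd)
    for _ in range(trials)
)
print(f"ordering flipped in {flips / trials:.0%} of trials")
```

On this toy model, even a fairly reliable test swaps the two test-takers' ordering in a substantial minority of single sittings (around 30%), which is the sense in which one exam can't cleanly separate the 99th percentile from the 99.5th.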

Comment by HStencil on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T04:06:03.099Z · EA · GW

Like, I guess, if pressed, I’d concede that maybe the mean at Yale is a little higher than the mean at Georgetown, but I’d also assume this should be attributed almost entirely to a handful of outliers in the distant right tail of the distribution at Yale and that the rest of the two schools’ distributions overlap nearly in their entirety. [referring here to imaginary distributions of “true g,” not to distributions of standardized test scores]

Comment by HStencil on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T03:47:10.861Z · EA · GW

I’m in general perfectly willing to draw distinctions between different “tiers” of universities, but I have to say, as an ~Ivy League Person~, the notion that students at Georgetown (or Northwestern, UCLA, Johns Hopkins, Duke, WashU, UMich, UVA, etc.) might generally be of lower caliber than students on Ivy League campuses has literally never crossed my mind, nor is it one I would have guessed that more than a trivial number of other people in highly educated communities would endorse. I may be wrong, but I’ve really never thought I was particularly progressive in this respect. I’ve always understood the view supported by the data/graphs in this post to be the conventional one.

Comment by HStencil on Becoming an EA Architect: My First Month as an Independent Researcher · 2022-05-14T00:39:04.766Z · EA · GW

Glad to help!

Comment by HStencil on Becoming an EA Architect: My First Month as an Independent Researcher · 2022-05-13T22:59:12.164Z · EA · GW

This group isn’t exactly EA-aligned, but they’re working on questions that are very relevant to a number of the topics you raised, so you might want to give them a look.

Comment by HStencil on Where should I donate? · 2022-03-06T04:58:35.452Z · EA · GW

Hey, sorry, I totally forgot about this until I stumbled across this recent discussion on donating to help with the situation in Ukraine earlier this week. I've pasted a bibliography of relevant papers below.

Aker, Jenny C., Paul Collier, and Pedro C. Vicente. “Is Information Power? Using Mobile Phones and Free Newspapers during an Election in Mozambique.” The Review of Economics and Statistics 99, no. 2 (May 2017): 185–200.

Armand, Alex, Alexander Coutts, Pedro C. Vicente, and Inês Vilela. “Does Information Break the Political Resource Curse? Experimental Evidence from Mozambique.” American Economic Review 110, no. 11 (November 1, 2020): 3431–53.

Banerjee, Abhijit, Nils T. Enevoldsen, Rohini Pande, and Michael Walton. “Public Information Is an Incentive for Politicians: Experimental Evidence from Delhi Elections.” Working Paper. Working Paper Series. National Bureau of Economic Research, April 2020.

Besley, Timothy, and Robin Burgess. “The Political Economy of Government Responsiveness: Theory and Evidence from India.” The Quarterly Journal of Economics 117, no. 4 (November 1, 2002): 1415–51.

Bruns, Christian, and Oliver Himmler. “Newspaper Circulation and Local Government Efficiency.” Scandinavian Journal of Economics 113, no. 2 (June 2011): 470–92.

Casey, Katherine. “Crossing Party Lines: The Effects of Information on Redistributive Politics.” American Economic Review 105, no. 8 (August 1, 2015): 2410–48.

Conroy-Krutz, Jeffrey. “Media Exposure and Political Participation in a Transitional African Context.” World Development 110 (October 2018): 224–42.

Drago, Francesco, Tommaso Nannicini, and Francesco Sobbrio. “Meet the Press: How Voters and Politicians Respond to Newspaper Entry and Exit.” American Economic Journal: Applied Economics 6, no. 3 (July 1, 2014): 159–88.

Enikolopov, Ruben, Maria Petrova, and Konstantin Sonin. “Social Media and Corruption.” American Economic Journal: Applied Economics 10, no. 1 (January 1, 2018): 150–74.

Enikolopov, Ruben, Maria Petrova, and Ekaterina Zhuravskaya. “Media and Political Persuasion: Evidence from Russia.” American Economic Review 101, no. 7 (December 1, 2011): 3253–85.

Enríquez, José Ramón, Horacio Larreguy, John Marshall, and Alberto Simpser. “Online Political Information, Electoral Saturation, and Electoral Accountability in Mexico.” SSRN Electronic Journal, 2021.

Ferraz, Claudio, and Frederico Finan. “Exposing Corrupt Politicians: The Effects of Brazil’s Publicly Released Audits on Electoral Outcomes.” Quarterly Journal of Economics 123, no. 2 (May 2008): 703–45.

Gao, Pengjie, Chang Lee, and Dermot Murphy. “Financing Dies in Darkness? The Impact of Newspaper Closures on Public Finance.” Journal of Financial Economics 135, no. 2 (February 2020): 445–67.

Grácio, Matilde, and Pedro C. Vicente. “Information, Get-out-the-Vote Messages, and Peer Influence: Causal Effects on Political Behavior in Mozambique.” Journal of Development Economics 151 (June 2021): 102665.

Grossman, Guy, and Kristin Michelitch. “Information Dissemination, Competitive Pressure, and Politician Performance between Elections: A Field Experiment in Uganda.” American Political Science Review 112, no. 2 (May 2018): 280–301.

Larreguy, Horacio, and John Marshall. “The Incentives and Effects of Independent and Government-Controlled Media in the Developing World.” In The Oxford Handbook of Electoral Persuasion, edited by Elizabeth Suhay, Bernard Grofman, and Alexander H. Trechsel, 589–617. Oxford University Press, 2020.

Larreguy, Horacio, John Marshall, and James M. Snyder. “Publicising Malfeasance: When the Local Media Structure Facilitates Electoral Accountability in Mexico.” The Economic Journal 130, no. 631 (October 16, 2020): 2291–2327.

Moskowitz, Daniel J. “Local News, Information, and the Nationalization of U.S. Elections.” American Political Science Review 115, no. 1 (February 2021): 114–29.

Pande, Rohini. “Can Informed Voters Enforce Better Governance? Experiments in Low-Income Democracies.” Annual Review of Economics 3, no. 1 (September 1, 2011): 215–37.

Reinikka, Ritva, and Jakob Svensson. “Fighting Corruption to Improve Schooling: Evidence from a Newspaper Campaign in Uganda.” Journal of the European Economic Association 3, no. 2/3 (2005): 259–67.

Comment by HStencil on College Public Service Pipeline · 2022-01-19T02:18:35.867Z · EA · GW

[To clarify in case this was unclear: I am just a random outsider and have no association with this Amherst student group.]

I’m a bit skeptical that just trying to get more nonprofits to recruit on campus is a winning strategy here. Among other things, the vast majority of nonprofits don’t have dedicated recruiting staff, and the people responsible for hiring don’t have the time to travel to college campuses to recruit for entry-level positions. The same is going to be true of most public sector openings at the entry level, too. (I do think there are exceptions to this — you might be able to get some of the RA programs run by the Federal Reserve System to recruit on campus, which I think would be awesome.) Regardless of whether or not you get these organizations to come to your campus, though, I think you face the even more significant obstacle of many students believing it just isn’t a good career move to take a job in public service straight out of college. I’m not sure getting these jobs more visibility changes that. My sense in college was not that the people entering finance or consulting would instead have gone into government if they’d been aware of the availability of government jobs.

I think that to make a difference here you have to change the way people think about the opportunities presented by analyst programs at consulting and financial services firms. You have to show people that these aren’t the best things they could do out of college, given basically any set of public service career objectives.

As I see it, this pitch might look something like this (obviously, the details would vary based on the individual in question’s career goals, but for the sake of argument…): 

Let’s say you’re a senior in college, and you really, really want to work in the Executive Office of the President (of the U.S.) one day. There are basically three types of ways you could get there: 1) you could work in a Congressional office, in which case it would be reasonable to try to get a job on the Hill straight out of college; 2) you could enter the campaign world and try to attach yourself to a particularly promising candidate; or 3) you could attend a graduate program that would position you to enter top policy jobs. Options 2 and 3 could both plausibly include a stint in consulting right out of college, but I don’t think that’s likely to be the optimal path in either of those directions for most people making such a choice. In the case of option 2, if you want to work in communications in the EOP, you will probably want a comms job on a campaign, and if you’re about to graduate and don’t see a campaign you’re itching to join, then I think joining a political communications firm or the press office of an elected official at the state level or on the Hill would be your best bet. This template can also be applied to working in ops in the EOP -> ops on a campaign -> ops in state government or a Hill office (-> college student). If you’re considering option 2 but are hoping to do policy work in the EOP, then you’re probably going to want to do option 3 before doing option 2 anyway, so let’s jump there. 

To get a job in the EOP, there are basically three kinds of graduate school paths: 1) law school; 2) a top public policy (or similar) master’s program; 3) a PhD in a field in which the EOP needs people with subject-matter expertise (economics, certain scientific fields, etc.). If you want to go to law school, there is no marginal advantage to working in an elite management consulting or financial services job post-college relative to (more) attainable options in public service (to pick an arbitrary example, this), and there may be disadvantages (e.g., less time to study for the LSAT). If you want to get an MPP/MPA from Harvard, Princeton, etc., there is no marginal advantage to working in an elite management consulting or financial services job post-college relative to… basically any job in government — working on the Hill for a few years or for the municipal government of any major city would make an applicant more appealing to those programs. And if you want to get a PhD in nearly any field, working in academic research for 1-3 years will be wayyy more advantageous when you’re applying than spending two years at Bain or Goldman would be. In many fields, if you’re the sort who could get a job at Bain or Goldman, you could also get an RA job working for a world-renowned scholar, and that goes a very, very long way in PhD admissions. 

The rejoinder here, I imagine, is: “What if you don’t have any idea at all what you want to do? At the very least, an MBB consulting job won’t close any doors.” I definitely believed this argument when I was a senior in college, but in the years since I graduated, I’ve come to think that the high option value provided by elite analyst positions in finance and consulting is, to a large extent, limited to private sector roles and is thus of considerably lesser value to people aiming to have careers in the public interest. My sense is that hiring managers (outside of finance/consulting, and especially in public service) are almost never looking mainly for generalists who can legibly signal high intelligence. For the vast majority of positions, the marginal benefit of additional points on the SAT, or whatever, pales in comparison to the marginal benefit of relevant domain experience and motivation/commitment to the work of this position. As a result, a lot of government jobs strongly prefer people with a demonstrated history of working in public service. The thought is that committing to a career in public service is a costly signal, and people who are willing to pay the cost to send it are more likely to be motivated and less likely to jump ship to climb a professional ladder. On top of that, the competition for some of these positions is so steep that they don’t even have to compromise on other desirable attributes to get people who can demonstrate that commitment; they can just use it as a tie-breaker to distinguish otherwise identically qualified candidates at the top of their pool. So, more generally, I now think that to underspecialize is to lose options and good ones, too, as most really cool jobs require some degree of specialization. 

This obviously doesn’t answer the question of how to choose what to do after one graduates, which, admittedly, is genuinely hard. At a minimum, though, there’s some reassurance in the fact that the costs to making the wrong call aren’t that high. People in the first few years of their career typically enjoy a lot of latitude to pivot and try new things as long as they have a good attitude about it and aren’t in a rush to climb any ladders professionally. I basically did this and think it ended well. On that note, one thing that really helped me was having a lot of free time to reflect about my goals (and research how to best achieve them) in my first year out of college, when I was feeling very professionally unsatisfied. (I would hate to be very professionally unsatisfied and not have a lot of free time to figure out how to remedy the situation, which I imagine would have been the case if I’d been in consulting or finance.) I generally think that having free time to reflect and pursue independent projects in one’s first few years out of college is really underrated by most people, especially among EAs, for whom the returns to reflection are probably especially high. I’ll conclude just by saying: One of the best pieces of career advice I’ve ever gotten was to prioritize working in places where my incentives would be aligned with my values and where I would be surrounded by people who would support me in making professional decisions in a manner consistent with my values. I think people tend to underestimate the impact of their environment on the possibilities that they can imagine for themselves. I know I did for a very long time. 

Comment by HStencil on Concrete Biosecurity Projects (some of which could be big) · 2022-01-17T23:54:24.330Z · EA · GW

I would say the same.

Comment by HStencil on College Public Service Pipeline · 2022-01-15T03:04:12.094Z · EA · GW

"Public service" is obviously a huge and diverse category, but my strong impression is that many public interest jobs (including at the entry level) offer substantially better exit opportunities within public service than nearly any management consulting gig (and I think this is true to an even greater extent if the comparison is with entry-level roles at investment banks or hedge funds). The problem, I think, is that at least in the U.S., there are very few public interest jobs that are 1) entry-level, 2) open to generalists without prior experience in some very specific area, 3) substantive enough in their responsibilities to be comparable "learning opportunities" to the positions available in consulting and finance, and 4) at least in the vicinity of moderately high-impact work (very broadly construed). And the positions that do exist that meet these criteria are, of course, extraordinarily hard to get. Basically, I don't think the issue is actually that trying to enter public service straight out of college is a bad career move. I think it's often quite a good career move, and I definitely think more people should do it, but I think a big part of the reason more people don't is that it's a very risky thing to commit to as an undergraduate (compared to the options available in the private sector). Conditional on, e.g., actually managing to land a position doing the kind of work you want to do within an executive agency, though, I think the public servant is probably better-positioned for impact (including over a multi-decade time horizon) than the management consultant or the investment banker.

Comment by HStencil on Where should I donate? · 2021-11-24T17:22:33.305Z · EA · GW

Yeah, I’d be happy to, but I may not get around to it until next week, if that’s alright.

Comment by HStencil on Announcing our 2021 charity recommendations · 2021-11-24T16:07:47.205Z · EA · GW

New Harvest is also listed as a standout charity in spite of what I take to be an even narrower focus on cell-cultured product innovation than GFI's (GFI also supports plant-based meat substitutes). I too would love some clarity from ACE on this.

Comment by HStencil on Where should I donate? · 2021-11-22T21:51:29.666Z · EA · GW

In the vein of “democracy promotion” and “longer-term/less measurable global development interventions,” you might consider donating to the International Consortium of Investigative Journalists and/or Partnership for Transparency Fund. I know more about ICIJ than Partnership for Transparency, but both strike me as very strong organizations with impressive track records in fighting corruption in low- and middle-income countries. In addition to anecdotes of their achievements, there is also a growing body of evidence in economics showing that local investigative journalism can have really striking effects on various sorts of favorable political outcomes. Admittedly, most of this evidence, as far as I’m aware, is not from LMICs. Assuming it generalizes to that context, though (and I think there is good reason to believe it does), ICIJ in particular may be one of the few organizations out there with a reasonable prospect of cost-effectively improving the quality of institutions in LMICs, which (as others have noted elsewhere on this forum) is likely quite important for bringing about faster economic growth and other related positive development outcomes.

Comment by HStencil on There's a role for small EA donors in campaign finance · 2021-11-06T16:29:11.885Z · EA · GW

Answering the question of whether a candidate is “good” might well (at least on certain EA world views) be sufficient to answer the question of whether donating to the candidate would be (sufficiently) cost-effective (given evidence that 1) donations matter for getting elected, and 2) getting elected allows one to influence policy). Consider the case of a candidate running on a longtermist platform. My impression is that when longtermist grantmakers evaluate giving opportunities in existential risk mitigation, their decision process is much closer to “determine whether the opportunity in question has a reasonable chance of improving humanity’s longterm trajectory within a range of broadly acceptable costs” than to “conduct a thorough, systematic, GiveWell-style cost-effectiveness analysis.” I would think that roughly the same principles that apply to donations to organizations that lobby Congress for better biosecurity policy apply to donations to candidates for Congress who strongly favor better biosecurity policy. This seems to be the thinking behind OP’s post. The back-of-the-envelope intuition here is pretty straightforward; insisting on a GiveWell-style CEA in its place reads like an isolated demand for rigor.

Comment by HStencil on There's a role for small EA donors in campaign finance · 2021-11-06T03:29:30.955Z · EA · GW

If the concern is that donations don't have any impact on electoral outcomes, there is a good bit of high-quality social science research indicating that television advertising, at least, does, particularly (as OP notes) in down-ballot races. If the concern is that it nonetheless isn't worth its cost, that's plausible, but I don't think OP said anything to suggest strong grounds to believe campaign donations beat GiveWell's Maximum Impact Fund, nor (I assume) would most readers leap to that conclusion, given the unique depth and rigor of GiveWell's research process and the far greater difficulty of modeling cost-effectiveness in politics. The thrust of this post seems to be more that this is something worth considering, which seems like a fair assessment, particularly given the extent of preexisting EA activity in this area (and the reasonable argument that there are decreasing returns to scale).

Comment by HStencil on Open Philanthropy’s Early-Career Funding for Individuals Interested in Improving the Long-Term Future - New Application Round · 2021-09-24T23:05:32.165Z · EA · GW

Great, thanks so much!

Comment by HStencil on Open Philanthropy’s Early-Career Funding for Individuals Interested in Improving the Long-Term Future - New Application Round · 2021-09-09T20:02:58.219Z · EA · GW

Does Open Phil have any plans to re-open applications for early-career funding for work on biosecurity, as well (sometime in the next 12 months, say)?

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T00:19:30.926Z · EA · GW

Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causes are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:32:06.271Z · EA · GW

Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:02:48.318Z · EA · GW

I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.

Comment by HStencil on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T16:59:35.545Z · EA · GW

Comment by HStencil on MichaelA's Shortform · 2021-05-25T18:39:00.618Z · EA · GW

Yeah, I think it’s very plausible that career RAs could yield meaningful productivity gains in organizations that differ structurally from “traditional” academic research groups, including, importantly, many EA research institutions. I think this depends a lot on the kinds of research that these organizations are conducting (in particular, the methods being employed and the intended audiences of published work), how the senior researchers’ jobs are designed, what the talent pipeline looks like, etc., but it’s certainly at least plausible that this could be the case.

On the parallels/overlap between what makes for a good RA and what makes for a good research manager, my view is actually probably weaker than I may have suggested in my initial comment. The reason why RAs are sometimes promoted into research management positions, as I understand it, is that effective research management is believed to require an understanding of what the research process, workflow, etc. look like in the relevant discipline and academic setting, and RAs are typically the only people without PhDs who have that context-specific understanding. Plus, they’ll also have relevant domain knowledge about the substance of the research, which is quite useful in a research manager, too. I think these are pretty much all of the reasons why RAs may make for good research managers. I don’t really think it’s a matter of skills or of mindset anywhere near as much as it’s about knowledge (both tacit and not). In fact, I think one difficulty with promoting RAs to research management roles is that often, being a successful RA seems to select for traits associated with not having good management skills (e.g., being happy spending one’s days reading academic papers alone with very limited opportunities for interpersonal contact). This is why I limited my original comment on this to RAs who can effectively manage people, who, as I suggested, I think are probably a small minority. Because good research managers are so rare, though, and because research is so management-constrained without them, if someone is such an RA and they have the opportunity, I would think that moving into research management could be quite an impactful path for them. 

Comment by HStencil on MichaelA's Shortform · 2021-05-24T04:54:44.037Z · EA · GW

I actually think full-time RA roles are very commonly (probably more often than not?) publicly advertised. Some fields even have centralized job boards that aggregate RA roles across the discipline, and on top of that, there are a growing number of formalized predoctoral RA programs at major research universities in the U.S. I am actually currently working as an RA in an academic research group that has had roles posted on the 80,000 Hours job board. While I think it is common for students to approach professors in their academic program and request RA work, my sense is that non-students seeking full-time RA positions very rarely have success cold-emailing professors and asking if they need any help. Most professors do not have both ongoing need for an (additional) RA and the funding to hire one (whereas in the case of their own students, universities often have special funding set aside for students’ research training, and professors face an expectation that they help interested students to develop as researchers).

Separately, regarding the second bullet point, I think it is extremely common for even full-time RAs to only periodically be meaningfully useful and to spend the rest of their time working on relatively low-priority “back burner” projects. In general, my sense is that work for academic RAs often comes in waves; some weeks, your PI will hand you loads of things to do, and you’ll be working late, but some weeks, there will be very little for you to do at all. In many cases, I think RAs are hired at least to some extent for the value of having them effectively on call.

Comment by HStencil on MichaelA's Shortform · 2021-05-24T04:53:53.733Z · EA · GW

For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.

On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, think that it makes a lot of sense for some RAs to go into careers in research management. Though most RAs probably lack the requisite management aptitude, the ones who can effectively manage people, I think, can substantially increase the productivity of mid-to-large academic labs/research groups by working in management roles (I know J-PAL has employed former RAs in this capacity). A lot of academic research is severely management-constrained, in large part because management duties are often foisted upon PIs (and no one goes into academia because they want to be a manager, nor do PIs typically receive any management training, so the people responsible for management often enough lack both relevant interest and relevant skill). Moreover, productivity losses to bad management often go unrecognized because how well their research group is being managed is, like, literally at the very bottom of most PIs’ lists of things to think about (not just because they’re not interested in it, also because they’re often very busy and have many different things competing for their attention). Finally, one consequence of this is that bad RAs (at least in the social sciences) can unproductively consume a research group’s resources for extended periods of time without anyone taking much notice. On the other hand, even if the group tries to avoid this by employing a more active management approach, in that case a bad RA can meaningfully impede the group’s productivity by requiring more of their supervisor’s time to manage them than they save through their work. 
My sense is that fear of this situation pervades RA hiring processes in many corners of academia.

Comment by HStencil on The despair of normative realism bot · 2021-01-04T04:41:21.680Z · EA · GW

This discussion reminds me of a comment R.M. Hare made in his 1957 essay “Nothing Matters”:

Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'? How, therefore, can we torment ourselves with doubts about which of them our own world resembles?

In another interesting parallel, in the same essay, Hare uses the term “play-acting” to describe those who claim that nothing matters.

Comment by HStencil on Donation multiplier · 2020-12-31T15:51:30.811Z · EA · GW

While not exactly the same, EA researchers are already doing something quite similar:

Comment by HStencil on Careers Questions Open Thread · 2020-12-27T01:17:24.028Z · EA · GW

That makes perfect sense! I agree that CE probably isn't the best fit for people most interested in doing EA work to mitigate existential risks. Feel free to shoot me a DM if you'd ever like to talk any of this through at greater length, but otherwise, it seems to me like you're approaching these decisions in a very sensible way.

Comment by HStencil on Careers Questions Open Thread · 2020-12-26T22:36:30.846Z · EA · GW

Happy to help! Another thing that strikes me is that in my experience (which is in the U.S.), running an academic research team at a university (i.e., being the principal investigator on the team's grants) seems to have a lot in common with running a startup (you have a lot of autonomy/flexibility in how you spend your time; your efficacy is largely determined by how good you are at coordinating other people's efforts and setting their priorities for them; you spend a lot of time coordinating with external stakeholders and pitching your value-add; you have authority over your organization's general direction; etc.). This seems relevant because I think a lot of the top university economics research groups in the U.S. have a pretty substantial impact on policy (e.g., consider Opportunity Insights), and the same may well be true in the U.K. It seems to me that other avenues toward impacting policy (e.g., working in the government or for major, established advocacy organizations) are considerably less entrepreneurial in nature. Of course, you could also found your own advocacy organization to push for policy change, but 1) I think it's generally easier to get funding for research than for work along these lines (especially as a newcomer), in part because the advocacy space is already so crowded, and 2) founding an advocacy organization seems like the kind of thing one might do through Charity Entrepreneurship, which you seem less excited about. If you're mainly attracted to entrepreneurship by tight feedback loops, however, academia is probably the wrong way to go, as it definitely does not have those.

Comment by HStencil on Careers Questions Open Thread · 2020-12-26T17:39:09.231Z · EA · GW

It sounds, based on your description, like a fairly straightforward step would be for you to try to set up calls with 1) someone on the Charity Entrepreneurship leadership team, and 2) some of the founders of their incubated charities. This would help you to evaluate whether it would be a good idea for you to apply to the CE program at some point, as well as to refine your sense of which aspects of entrepreneurship you’re particularly suited to (so that if entrepreneurship doesn’t work out—maybe you discover other aspects of it that seem less appealing—you’ll be able to look for the bits you care for in positions with more established organizations). If you came out of those calls convinced that you might want to apply to Charity Entrepreneurship down the road, it seems to me that a logical next step would be to start reading up on potential causes and interventions that you might want your charity to pursue. You could also, I’m sure, do volunteer work for existing, newly launched CE charities, where, given that most of them only have two staff, you’d probably be given a fair amount of responsibility and would be able to develop useful insights into the entrepreneurial process. For you, the value of information from doing that seems like it might be quite high.

Comment by HStencil on Careers Questions Open Thread · 2020-12-23T05:56:24.676Z · EA · GW

That seems like a sound line of reasoning to me — best of luck with the rest of your degree!

Comment by HStencil on Careers Questions Open Thread · 2020-12-21T23:35:34.915Z · EA · GW

I think this is a really hard question, and the right answer to it likely depends to a very significant degree on precisely what you’re likely to want to do professionally in the near and medium-term. I recently graduated from a top U.S. university, and my sense is that the two most significant benefits I reaped from where I went to school were:

  1. Having that name brand on my resume definitely opened doors for me when applying for jobs during my senior year. I’m actually fairly confident that I would not have gotten my first job out of college had I gone to a less prestigious school, though I think this only really applies to positions at a fairly narrow set of financial services firms and consulting firms, as well as in certain corners of academic research.
  2. I think I personally benefited from a significant peer effect. My specific social circle pushed me to challenge myself academically more than I likely otherwise would have (in ways that probably hurt my GPA but served me well all things considered). That said, I know that the academic research on peer effects in education is mixed to say the least, so I’d be hesitant to extrapolate much from my own experience.

I’m not sure how to weigh the importance of the first of those considerations. On the one hand, your first job is just that: your first job. It doesn’t necessarily mean anything about where you’ll end up at age 35. On the other hand, I do feel like I have observed this phenomenon of smart people graduating from relatively unknown universities and really struggling to find interesting work during their first several years out of college and then eventually resigning themselves to getting a master’s degree from a more well-known school (sometimes in a field where the educational benefit of the degree is relatively low) just so that they can get in the door to interview for jobs in their field of choice. This obviously comes at a significant cost, both in terms of time and—often but not always—in terms of money. That said, in some fields, you just do need a master’s to get in the door for a lot of roles, no matter where you went to undergrad or what you did while you were there, and maybe that’s all that’s really behind this.

Another thing potentially worth noting is that, in my experience, it seems as if U.S. research universities are most usefully divisible into three categories with respect to their undergraduate job placement: universities that “high-prestige” employers are unlikely to have heard of, universities that “high-prestige” employers are likely to have heard of and have vaguely positive associations with, and finally, the set of Harvard, Princeton, Yale, MIT, and Stanford (these are distinguished not only by their name brands but also by the extent of their funding and support for undergraduate research and internships, the robustness of their undergraduate advising, and other more “experiential” factors). There are certainly exceptions to this breakdown (the financial services and consulting firms mentioned above definitely differentiate between Penn and Michigan), but by and large, my sense has been that controlling for “ability,” the difference in early-career outcomes between a Harvard graduate and a Penn graduate is significantly larger than the difference in early-career outcomes between a Penn graduate and a Michigan graduate (note: the specific schools chosen as examples within each cohort here are completely arbitrary). Accordingly, I don’t think that very many people generally have a strong professional reason to transfer from UCLA to Brown or from the University of Virginia to Dartmouth, etc. However, I buy that those at lesser-known schools may, in many circumstances, have a strong professional reason to transfer to their flagship state school.

Other good reasons to transfer, I think, include transferring for the purpose of getting to a particular city where you know you want to work when you graduate, with an eye toward spending a portion of the remainder of your time in college networking or interning in your field of choice. In particular, I think that if you want to work in U.S. (national) policy after graduation, transferring to a school in the Washington, DC Metropolitan Area can be hugely beneficial. The same goes for financial services in the New York City Metropolitan Area, entertainment in Los Angeles, and (perhaps, though I am less sure about this) tech in the San Francisco Bay Area. In your case, it might be worthwhile to submit a transfer application to Georgetown with the aim of trying to forge some connections at the Center for Security and Emerging Technology (or perhaps the Center for Global Health Science and Security if you are interested in biosecurity policy), both of which are housed there. One other very strong reason to transfer, it seems to me, would be if you wanted to work on AI, but your current school didn’t have a computer science department, like a local state school near where I grew up. I assume from your post that that isn’t your situation, though.

Finally, I wouldn’t underestimate the importance of mental health considerations, to the extent that those may be at all relevant to your choice. Mental health during college can have a huge impact on GPA, and while where you go to undergrad will only really be a factor in determining your grad school prospects for a relatively narrow set of programs (mainly, I think, via the way it affects the kinds of research jobs you can get during and post-college), GPA is a huge determinant of grad school admissions across basically every field, so that is important to bear in mind. The transfer experience, from what I have heard, is not always easy, especially, I imagine, in academic environments that are already very high-pressure.

If you’d like to talk through this at greater length, feel free to DM me. To the extent that my perspective might be useful, I’d be more than happy to offer it, and if you’d just like someone to bounce ideas off of, I’d be happy to fill that role, as well.

Comment by HStencil on Wholehearted choices and "morality as taxes" · 2020-12-21T21:43:55.113Z · EA · GW

I really like this. To me, it emphasizes that moral reason is a species of practical reason more generally and that the way moral reasons make themselves heard to us is through the generic architecture of practical reasoning. More precisely: Acting in a manner consistent with one's moral duties is not about setting one's preferences aside and living a life of self-denial; it's about being sufficiently attentive to one's moral world that one's preferences naturally evolve in response to sound moral reasons, such that satisfying those preferences and fulfilling one's duties are one and the same.

Comment by HStencil on Incompatibility of moral realism and time discounting · 2020-12-12T23:57:41.856Z · EA · GW

This is a fascinating argument — thank you for sharing it! I think it's particularly interesting to consider it in the context of metaethical theories that don't fall neatly within the realist paradigm but share some of its features, like R.M. Hare's universal prescriptivism (see Freedom and Reason [1963] and Moral Thinking [1981]). However, I also think this probably shouldn't lead most discounting realists to abandon their moral view. My biggest issue with the argument is that I suspect (though I am still thinking this through) that there exist parallel arguments of this form that would purport to disprove all of philosophical realism (i.e. including realism about empirical descriptions of the natural world). I think statements rejecting philosophical realism are pretty epistemically fraught (maybe impossible to believe with justification), which leaves me suspicious of your argument. (It's worth noting here that special relativity itself is an empirical description of the natural world.)

I have a feeling that the right way of thinking about this is that the rise of relativistic physics changed the conventional meaning of a "fact" into something like: a true statement whose truth cannot depend upon the person thinking it within a particular inertial frame of reference. Otherwise, I think we would be forced to admit that there are no facts about the order in which events occur in time, and that seems quite obviously inconsistent with the ordinary language meanings of several common concepts to me. I know that relativity teaches that statements about time and duration are not objective descriptions of reality but are instead indexical reports of "where the speaker is" relative to a particular object, similar to "Derek Parfit's cat is to my left," but (for basically Wittgensteinian reasons) I do not think that this is actually what these statements mean.

Ultimately, if you're someone who, like me, believes that a correct analysis of the question, "What is the right thing to do?" must start with a correct analysis of the logical properties of the concepts invoked in that sentence (see R.M. Hare, especially Sorting Out Ethics [1997]), and you believe that those logical properties are determined by the way in which those concepts are used (see Wittgenstein's Philosophical Investigations [1953]), then I think this argument is mainly good evidence that the proper understanding of what moral realism means today is the following: "Moral realism holds that moral statements are facts, and the truth of a fact must be universal within the inertial frame of reference in which that fact exists; that is, that truth cannot depend upon the person thinking the fact within that inertial frame of reference."

Comment by HStencil on Careers Questions Open Thread · 2020-12-11T20:49:34.007Z · EA · GW

Glad to hear it helped! Of course, usual caveats apply about the possibility that your field is quite different from mine, so I wouldn't stop looking for advice here, but hopefully, this gives you a decent starting point!

Comment by HStencil on Careers Questions Open Thread · 2020-12-11T20:43:12.890Z · EA · GW

Regarding the data-driven policy path, my sense is that unfortunately, most policy work in the U.S. today is not that data-driven, though there's no doubt that that's in part attributable to human capital constraints. Two exceptions do come to mind, though:

  1. Macroeconomic stabilization policy (which is one of Open Philanthropy's priority areas) definitely fits the bill. Much of the work on this in the U.S. occurs in the research and statistics and forecasting groups of various branches of the Federal Reserve System (especially New York, the Board of Governors in D.C., Boston, Chicago, and San Francisco). These groups employ mathematical tools like DSGE and HANK models to predict the effects of various (mainly but not exclusively monetary) policy regimes on the macroeconomy. Staff economists working on this modeling regularly produce research that makes it onto the desks of members of the Federal Open Market Committee and even gets cited in Committee meetings (where U.S. monetary policy is determined). To succeed on this path in the long term, you would need to get a PhD in economics, which probably has many of the same downsides as a PhD in computer science/AI, but the path might have other advantages, depending on your personal interests, skills, values, motivations, etc. One thing I would note is that it is probably easier to get into econ PhD programs with a math-CS bachelor’s than you would think (though still very competitive, etc.). The top U.S. economics programs expect an extensive background in pure math (real analysis, abstract algebra, etc.), which is more common among people who studied math in undergrad than among people who studied economics alone. A good friend of mine actually just started her PhD in economics at MIT after getting her bachelor’s in math and computer science and doing two years of research at the Fed. This is not a particularly unusual path. If you're interested and have any questions about it, feel free to dm me.
  2. At least until the gutting of the CDC under our current presidential administration, it employed research teams full of specialists in the epidemiology of infectious disease who made use of fairly sophisticated mathematical models in their work. I would consider this work to be highly quantitative/data-driven, and it's obviously pertinent to the mitigation of biorisks. To do it long-term, you would need a PhD in epidemiology (ideally) or a related field (biostatistics, computational biology, health data science, public health, etc.). These programs are also definitely easier to get into with your background than you would expect. They need people with strong technical skills, and no one leaves undergrad with a bachelor's in epidemiology. You would probably have to get some relevant domain experience before applying to an epi PhD program, though, likely either by working on the research staff at someplace like the Harvard Center for Communicable Disease Dynamics or by getting an MS in epidemiology first (you would have no trouble gaining admission to one of those programs with your background). One big advantage of epidemiology relative to macroeconomics and AI is that (my sense is) it's a much less competitive field (or at least it certainly was pre-pandemic), which probably has lots of benefits in terms of odds of success, risk of burnout, etc. Once again, feel free to dm me if this sounds interesting to you and you have any questions; I know people who have gone this route, as well.

Comment by HStencil on Careers Questions Open Thread · 2020-12-10T19:24:26.818Z · EA · GW

I think a lot of the day-to-day feelings of fulfillment in high-impact jobs come from either: 1) being part of a workplace community of people who really believe in the value of the work, or 2) seeing first-hand the way in which your work directly helped someone. I don't really think the feelings of fulfillment typically come from the particular functional category of your role or the set of tasks that you perform during the workday, so I wonder how informative your experiments with data science, for instance, would be with respect to the question of identifying the thing that you feel you "must do," as you put it. If I had to guess, I'd speculate that the feeling you're looking for will be more specific to a particular organization or organizational mission than to the role you'd be filling for organizations generally.

Comment by HStencil on Careers Questions Open Thread · 2020-12-10T17:57:20.779Z · EA · GW

If you're committed to using data science to address public policy questions in the U.S. (either in government or a think tank-type organization), I suspect you'd be best-served by a program like one of these:  

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T19:01:22.519Z · EA · GW

This is all fantastic information to have — thank you so much for explaining it! I'm really glad to have improved my understanding of this.

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T18:56:54.514Z · EA · GW

Yes, that argument for veg*anism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values weigh out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step,” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. I think the main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were:

  1. TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
  2. There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than organizations directly fighting, say, the global malaria burden. [I have updated somewhat against this view on account of Jon’s comments.]
  3. If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it most likely would get filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
  4. There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for achieving aims unrelated to having a positive impact. For instance, I think Peter Singer’s work likely is highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able—if he wished—to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence inside of TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully, it gets the point across.

Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T15:34:23.638Z · EA · GW

I don’t know anything about the norms and expectations in CS, but in my field (a quantitative social science), it is basically impossible to get into PhD programs without research experience of some kind, and you would likely be advised, first and foremost, to seek a master’s as preparation, and if it went well, apply to PhD programs thereafter. The master’s programs that would be recommended would be designed for people interested in transitioning from industry to academia, and someone like you would probably have a good shot at getting in. They can be expensive, though. If you wanted to avoid that, you would need to come up with some other way of demonstrating research acumen. This could mean transitioning into an academic research staff role somewhere, which (in my field, though maybe not yours) would help your odds of admission a ton. It could also mean reconnecting with an old college professor about your interests and aspirations and seeing if they’d be willing to work on a paper with you (I know someone who did this successfully; the professor likely agreed to it because she judged that my friend’s work had a high chance of being published). Finally, you could just try to write a publishable research paper on your own. In my field, this seems to me like it would be very hard to do, especially without prior research experience, but even if it didn’t turn into a publication, if the paper were solid, you could submit it as a supplemental writing sample with your applications, and it would likely help to compensate for weaknesses in your research background (for what it’s worth on this point, a friend-of-a-friend of mine was a music conservatory student who was admitted to a philosophy doctoral program after self-studying philosophy entirely on her own).

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T06:51:48.943Z · EA · GW

I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus. You’re right that under some circumstances, a risk-neutral expected value calculus will favor small donors donating to “step-functional” charities that can’t scale their operations smoothly per marginal dollar donated, but my argument was that in the specific case of multiplier charities, the odds of a small-dollar donation being counterfactually responsible for moving the organization up an impact step are vanishingly small (or at least that this is the most reasonable thing for small-dollar donors without inside information to believe). The fact that impact in this context is step-functional is a part of the explanation for the argument, not the conclusion of the argument.

With respect to the question of “relative step-functionality,” though, it’s also not clear to me why, compared to a multiplier charity, one would think that giving to GiveWell’s Maximum Impact Fund would be any more step-functional on the margin. It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table. Moreover, I find this particularly confusing in the case of the Maximum Impact Fund, which allocates grants to target specific funding gaps, often corresponding to very well-defined organizational initiatives (e.g. expansions into new regions), the individual cost-effectiveness of which GiveWell has modeled. It’s obviously true that regardless of whether one gives to a multiplier charity or to the Maximum Impact Fund, there is some chance that one’s donations either A) languish unused in a bank account, or B) counterfactually cause something hugely impactful to happen, but given that in the case of GiveWell, we know the end recipients have a specific, highly cost-effective use already planned out for this particular chunk of money (and if they have extra, they can just put it toward… more nets), whereas in the multiplier charity case, we don’t have any reason to believe they could use these specific funds at all (not to mention productively), doesn’t it seem like the balance of expected values here favors going with GiveWell?

Finally, while it is obviously true that most nets don’t save lives, I fail to see how that bears on the question at hand. We both agree that this is reflected in GiveWell’s cost-effectiveness analysis, which we (presumably) both agree that we have strong reason to trust. We have no such independent cost-effectiveness analysis of any multiplier charity. And the fact that most nets don’t save lives certainly isn’t a reason why the impact of donations to the AMF would not rise by some smooth function of dollars donated. The only premise on which that argument depends is that if they don’t have anything else good to do with my money (which presumably, they do, having earned a grant from the Maximum Impact Fund), they can always just buy more nets. Given the current scale of global net distribution relative to the total malaria burden, it seems wildly unlikely that a much larger percentage of those nets would fail to save lives than was the case during previous net distributions.
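To make the structure of this comparison concrete, here is a toy expected-value sketch. All of the numbers are hypothetical and purely illustrative of the reasoning above; they are not any real charity's figures, and the function names are my own.

```python
# Toy comparison of a smooth-impact charity vs. a step-functional one.
# Every number below is made up; the point is the shape of the argument,
# not any actual cost-effectiveness estimate.

def ev_smooth(donation: float, value_per_dollar: float) -> float:
    """Impact scales smoothly: every marginal dollar buys more nets."""
    return donation * value_per_dollar

def ev_step(donation: float, p_triggers_step: float, step_value: float) -> float:
    """Impact is step-functional: the donation matters only if it is
    counterfactually responsible for pushing the org up an impact step."""
    return p_triggers_step * step_value

# A $1,000 gift to a direct-delivery charity at a hypothetical $1.50 of
# value per dollar donated:
smooth = ev_smooth(1000, 1.5)

# The same $1,000 gift to a step-functional org, where the chance of being
# the marginal donation that unlocks a hypothetical $500,000 impact step
# is tiny:
step = ev_step(1000, 1e-6, 500_000)

print(smooth, step)  # the smooth case dominates under these assumptions
```

The argument in the thread is precisely about the middle parameter: for multiplier charities, a small donor without inside information has reason to set `p_triggers_step` very low, which is what drives the expected-value comparison toward GiveWell-style giving.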

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T02:58:43.430Z · EA · GW

It sounds like you're doing some awesome work, and these are great questions, but I very seriously doubt you will be able to get good answers to them from anyone without domain expertise in your field, so this may not be the best place to look. I personally have some very cursory exposure to biostatistics and health data science (definitely less than you), but I imagine I have significantly more familiarity with the area, especially in the U.S., than most people on the EA Forum, and I have zero clue about the answers to your questions.

Comment by HStencil on Careers Questions Open Thread · 2020-12-09T02:35:21.453Z · EA · GW

I may be missing or misunderstanding something, but it seems like your worries/roadblocks about your option 1 all pertain specifically to the MBA/MPA component. If that is the case, and you think you really might want to work in tech, I'd encourage you to consider trying to transition directly to a tech company without first getting another degree. Anecdotally, my sense is that MBAs and MPAs are useful mainly for networking and allowing you to command a higher starting salary in many roles, not for what you learn during the degree (though this depends somewhat on your prior academic and professional background, as well as on the specific program you're enrolled in, of course). 

I imagine that the main reason you've been considering getting an MBA or an MPA is because you have a sense that you need it to make a significant career shift. I'm not so sure. I don't know how easy it is to spend two years as a software engineer at a tech company (instead of spending those two years in grad school) and then transition into a product management role, but I imagine that particularly at smaller or medium-sized tech companies, this must be a thing that happens. And even if I'm wrong about that, I know people who went straight from coding roles at professional services firms to product manager-track positions (e.g. product data analyst) at medium-sized tech companies (admittedly, outside of Silicon Valley). I imagine these people will become product managers faster this way than they would have if they'd gotten an MBA in the middle. Finally, regarding going into debt for an MPA, you should consider applying to Princeton's program; it's free to everyone who is admitted!

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-09T01:32:14.552Z · EA · GW

Thank you — please do!

Comment by HStencil on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-08T23:21:47.076Z · EA · GW

I'm glad to hear you found my reasoning useful, and I appreciate your explanation of where you think it may go astray. I'm a fairly marginal actor in the grand scheme of the EA community and don't feel I am anywhere close to having a clear view on whether the returns to adding further vetting or oversight structures would outweigh the costs. Naïvely, it seems to me that some kinds of organizational transparency are pretty cheap. However, it occurs to me that even though I've spent a fair bit of time on the TLYCS website over the past several years and gave to your COVID-19 response fund back in the spring, I honestly have no recollection of the extent of your transparency in the status quo. In a similar vein, to put it more flippantly than you deserve, I don't think most people I know in the community (myself included) really understand what you do. I was even unaware of how high your estimated multiplier is (if you had asked me to guess prior to your comment, there's no way I would've gone higher than 4x), and now, I am quite curious about how you're estimating that and what you think is driving such a high return. I expect this is probably my fault for not seriously investigating "multiplier charities" when deciding where to give and instead presuming that they likely aren't a good fit for small donors like me for the reasons I explained. However, I also think I am exactly the persuadable small donor who you would want to be reaching with whatever outreach or marketing you're doing, so maybe there's room for improvement on your part there, as well.

For what it's worth, if you were going to invest in adding some kind of vetting or oversight structure, here are a few questions that—inspired by your comment—I would most want it to answer before making a determination about whether to give to TLYCS:

1. Why have TLYCS's expenses tripled since 2016? Other than the website overhaul and the book launch, what have you been spending on? Are you aiming to engage in similar (financial) growth again in the near term? If not, would you be if you had more support from small donors?

2. What do you mean by "communicate with more donors"? What does that involve? How costly is it on a per-donor basis? How scalable is it?

3. When you spend more money (beyond your basic operating expenses: salaries, office space if you have it, etc.), and that spending seems to be associated with an increase in donor interest in your recommended charities, what do you think generally explains that relationship, and how do you determine that such an increase in donor interest was counterfactually caused by the increase in spending?

4. More generally, and this may be an extremely dumb question/something you have explained at length elsewhere, how do you arrive at your "money moved" estimates, and how do you ensure that they are counterfactually valid?

5. Do you personally believe that TLYCS will hit diminishing marginal returns on investments in growing its base of donors to its recommended charities sometime in the near or intermediate term?

You obviously do not have to answer these questions here or at all. I wrote them out only to provide a sense of what information I feel I am missing.

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-08T22:47:09.610Z · EA · GW

I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.

Similarly, I disagree with your contention that morality isn't, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do,” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”

Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside. 

If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.

Comment by HStencil on Introduction to the Philosophy of Well-Being · 2020-12-08T18:25:16.461Z · EA · GW

I’m not sure, but it seemed to me that this was the view that you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?

Comment by HStencil on How have you become more (or less) engaged with EA in the last year? · 2020-12-08T15:47:19.577Z · EA · GW

I also think it can be a good approach to ask people why they hold a view you think is wrong, phrased in a way that suggestively indicates your objection (e.g. “But don’t you think...?”).