Posts

Leveraging labor shortages as a pathway to career impact 2022-07-14T13:32:31.040Z
The Effective Institutions Project is hiring 2022-04-15T10:48:42.226Z
A Landscape Analysis of Institutional Improvement Opportunities 2022-03-21T00:15:52.311Z
What will be some of the most impactful applications of advanced AI in the near term? 2022-03-03T15:26:25.716Z
Free/low-cost decision support services for the EA community 2021-09-29T17:04:31.142Z
Improving Institutional Decision-Making: Which Institutions? (A Framework) 2021-08-23T02:26:57.525Z
Vitalik Buterin just donated $54M (in ETH) to GiveWell 2021-05-14T01:30:42.225Z
AMA: Ian David Moss, strategy consultant to foundations and other institutions 2021-03-02T16:55:48.183Z
Improving Institutional Decision-Making: a new working group 2020-12-28T05:47:29.194Z
Recommendations for prioritizing political engagement in the 2020 US elections 2020-10-14T13:52:23.564Z
When does it make sense to support/oppose political candidates on EA grounds? 2020-10-14T13:51:38.090Z
Prioritizing COVID-19 interventions & individual donations 2020-05-06T21:29:12.249Z
All causes are EA causes 2016-09-25T18:44:42.347Z
Reflections on EA Global from a first-time attendee 2016-09-18T13:38:25.752Z

Comments

Comment by IanDavidMoss on A Landscape Analysis of Institutional Improvement Opportunities · 2022-12-03T21:08:19.402Z · EA · GW

Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and the promise of engaging with, different institutions implicated in various existential risk scenarios. There was less attention given to the challenge of nailing the right absolute numbers, and so those should be taken with a super-extra-giant grain of salt.

With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 100 years from a single one-time $100M commitment (perhaps distributed over multiple years) focusing on a single institution. The comment in the summary about $100 million/year was assuming that the funder(s) would focus on multiple institutions. Thus, the 100 basis points per billion figure is the "correct" one provided our per-institution estimates are in the right order of magnitude.

We're about to get started on our second iteration of this work and will have more capacity to devote to the cost-effectiveness estimates this time around, so hopefully that will result in less speculative outputs.

Comment by IanDavidMoss on A socialist's view on liberal progressive criticisms of EA · 2022-11-22T12:56:22.568Z · EA · GW

Dustin & Cari were also among the largest donors in 2020: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valleys

Comment by IanDavidMoss on The FTX Future Fund team has resigned · 2022-11-11T14:41:43.290Z · EA · GW

Wow, I didn't see it at the time but this was really well written and documented. I'm sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.

Comment by IanDavidMoss on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-09T01:01:06.618Z · EA · GW

I think it would have been very easy for Jonas to communicate the same thing in less confrontational language. E.g., "FWIW, a source of mine who seems to have some inside knowledge told me that the picture presented here is too pessimistic." This would have addressed JP's first point and been received very differently, I expect.

Comment by IanDavidMoss on Why does Elon Musk suck so much at calibration? · 2022-11-07T01:45:05.349Z · EA · GW

I understood the heart of the post to be in the first sentence: "what should be of greater importance to effective altruists anyway is how the impacts of all [Musk's] various decisions are, for lack of better terms, high-variance, bordering on volatile." While Evan doesn't provide examples of what decisions he's talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk's and EA's paths seem more likely to collide than diverge as time goes on.

Comment by IanDavidMoss on Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years? · 2022-10-30T20:35:40.932Z · EA · GW

I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K's core added values, so don't want to throw out the baby with the bathwater here.

One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead 80K could list them as "particularly promising pathways" or something like that, emphasizing in the first paragraphs of text that personal fit should be a large part of the decision of choosing a career and that the identification of a top tier of careers is intended to help the reader judge where they might fit.

Another possibility, I don't know if you all have thought of this, would be to offer something that's almost like a wizard interface where a user inputs or checks boxes relating to various strengths/weaknesses they have, where they're authorized to work, core beliefs or moral preferences, etc., and then the program spits back a few options of "you might want to consider careers x, y, and z -- for more, sign up for a session with one of our advisors." Then promote that as the primary draw for the website more than the career guides. Just a thought?

Comment by IanDavidMoss on Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years? · 2022-10-30T20:23:07.176Z · EA · GW

I was also going to say that it's pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?

Comment by IanDavidMoss on Backyard EA: a podcast proposal · 2022-10-22T14:04:58.538Z · EA · GW

I feel like this proposal conflates two ideas that are not necessarily that related:

  1. Lots of people who want to do good in the world aren't easily able to earn-to-give or do direct work at an EA organization.
  2. Starting altruistically-motivated independent projects is plausibly good for the world.

I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might consider instead, or in addition, having people on who work in high(ish)-impact jobs where there are currently labor shortages.

Overall, I think it would be better if you picked which of the two premises you're most excited about and then went all-in on making the best podcast you could focused on that one.

Comment by IanDavidMoss on Brief evaluations of top-10 billionnaires · 2022-10-22T13:56:15.240Z · EA · GW

Hmm, I guess I'm more optimistic about 3 than you are. Billionaires are both very competitive and often care a lot about how they're perceived, and if a scaled-up and properly framed version of this evaluation were to gain sufficient currency (e.g. via the billionaires who score well on it), you might well see at least some incremental movement. I'd put the chances of that around 5%.

Comment by IanDavidMoss on Fantastic impacts through ordinary, plausible, practical acts · 2022-10-16T18:06:16.806Z · EA · GW

I thought this was great! With a good illustrator and some decent connections I think you could totally get it published as a picture book. A couple of feedback notes:

  • The transition from helping people in Johnny's life to helping people far away via the internet felt a bit forced. If Johnny is supposed to be a student in primary school like the intended reader, it wasn't clear where he gets his donation budget from, and I wonder how relatable that would be (a donation of $25 is mentioned, which I guess could come from allowance/gift money, but it's implied that it's only one of many donations). It might be better and more realistic to depict Johnny fundraising for these charities from his parents and other people in his life or community.
  • One great thing about the first part is that you see the impact of Johnny's help on his teacher, the bullied kid, etc., whereas that becomes more obscure once he transitions to the internet. I wonder if you could fix that by temporarily switching the focus of the story to the person who got their eyes fixed because of Johnny, showing how meaningful it was for them. I think it's really critical in a story like this to demonstrate that far-away people are just as real and lead just as worthwhile lives as those close to us.

Comment by IanDavidMoss on Warning Shots Probably Wouldn't Change The Picture Much · 2022-10-07T02:53:40.937Z · EA · GW

I'm not aware of anyone working on it really seriously!

Comment by IanDavidMoss on Warning Shots Probably Wouldn't Change The Picture Much · 2022-10-06T20:51:31.082Z · EA · GW

It's possible there's a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden's Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):

  • I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn't fight effectively for the pandemic preparedness funding because it was not something that representatives' constituents were demanding.
  • During his presentation at EA Global DC a few weeks ago, Gabe Bankman-Fried from Guarding Against Pandemics said that Democratic leaders in Congress had polled Senators and Representatives about their top three issues as Build Back Better was being negotiated in order to get a sense for what could be cut without incurring political backlash. Apparently few to no members named pandemic preparedness as one of their top three. (I'm paraphrasing from memory here, so may have gotten a detail or two wrong.)

The obvious takeaway here is that not enough attention was paid to motivating grassroots support for this funding, but to be clear I don't think that is always the bottleneck -- it just seems to have been in this particular case.

I also think it's true that if the administration had wanted to, it probably could have put a bigger thumb on the scale to pressure Congressional leaders to keep the funding. Which suggests that the pro-preparedness lobby was well-connected enough within the administration to get the funding on the agenda, but not powerful enough to protect it from competing interests.

Comment by IanDavidMoss on Warning Shots Probably Wouldn't Change The Picture Much · 2022-10-06T13:40:43.126Z · EA · GW

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypotheses.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore. ... If you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/advocacy/improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.

Viewed through that lens, my opinion, and one that I think is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing it as a community. By "under-resourcing it" I don't just mean in terms of money, because as the Flynn campaign showed us it's easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, a broad diversity of approaches that complement one another and collectively increase the chances of success, and the patience to see those approaches through. In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen -- just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or recent legislation to allow Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.

Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not confuse this for the claim that we've been able to do "approximately nothing." The overall trend for EA and longtermist ideas being taken seriously at increasingly senior levels over the past couple of years is strongly positive. Some of the diverse factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will's book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural emergence of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy's hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time. All of it is, by itself, not enough to make a difference, and judged on its own terms will look like a failure. But all of it combined is what creates the possibility that more can be accomplished the next time around, and all of the time in between.

Comment by IanDavidMoss on List of donation opportunities (focus: non-US longtermist policy work) · 2022-09-30T16:42:00.552Z · EA · GW

Amazing resource, thanks so much! I'll add that the Effective Institutions Project is in the process of setting up an innovation fund to support initiatives like these, and we are planning to make our first recommendations and disbursements later this year. So if anyone's interested in supporting this work generally but doesn't have the time/interest to do their own vetting, let us know and we can get you set up as a participant in our pooled fund (you can reach me via PM on the Forum or write info@effectiveinstitutionsproject.org).

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T21:20:00.150Z · EA · GW

Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T21:17:29.664Z · EA · GW

It seems worth noting that you can get famous on Twitter for tweeting, or you can happen to be famous on Twitter as a result of becoming famous some other way. The two pathways imply very different promotional strategies and theories of impact. But my sense is that it's pretty hard to grow an audience on Twitter through tweeting alone, no matter how good your content is.

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T21:05:03.968Z · EA · GW

He seems like a natural fit for the American economist-public intellectual cluster (Yglesias/Cowen/WaitButWhy/etc.) that's already pretty sympathetic to EA. The Twitter content is basically "EA in depth," but retaining the normie socially responsible brand they've come to expect and are comfortable with. Max Roser would be another obvious candidate to promote Peter. I'd start there and see where it goes.

Comment by IanDavidMoss on The Onion Test for Personal and Institutional Honesty · 2022-09-27T17:59:40.100Z · EA · GW

I'm curious how this applies to infohazards specifically. Without actually spilling any infohazards, could you comment on how one could do a good job applying this model in such a situation?

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:54:25.561Z · EA · GW

https://askell.io/

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:42:45.431Z · EA · GW

I'm a little surprised that Rob Wiblin doesn't have more followers, but he's already high-profile enough that it wouldn't take that big of a push to get him into another tier. He's also the most logical person to leverage 80K's broader content on social media given his existing profile and activity. (ETA: although Habiba could do this too, per your suggestion.)

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:38:16.832Z · EA · GW

Amanda Askell consistently has thoughtful and underrated takes on Twitter.

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:37:47.539Z · EA · GW

Peter Wildeford is an A+ follow on Twitter IMHO. I think it's realistic to get him a bunch more followers if that's something he wanted.

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:37:15.666Z · EA · GW

I assume you're being modest in not suggesting "Nathan Young," so I'll do it for you.

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:35:40.206Z · EA · GW

Do we know that he doesn't already have a social media manager? He's had a lot of help to promote the book.

Comment by IanDavidMoss on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T17:34:44.113Z · EA · GW

In light of the two-factor voting, I'm unclear what you mean by "upvote." I would suggest using the "agree/disagree" box as the scoring, with "upvote/downvote" meant to refer to your wisdom in suggesting the person and/or the analysis you provided. But I think you should clarify which one you intend to actually pay attention to.

Comment by IanDavidMoss on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T04:34:42.138Z · EA · GW

I think raising one's own kids is often significantly more rewarding than raising adopted kids, just because one's own kids will share so much more of one's cognitive traits, personality traits, quirks, etc, that you can empathize better with them.

I'm extremely skeptical of this claim. Many parents I know with multiple biological children report that they have immensely different personalities, and it seems intuitively obvious that any statistical correlations of such traits between child and parent that are driven by genes will be overwhelmed by statistical noise in a family with an n of, say, 3 or fewer children. As someone with two biological children, IMHO almost all of the rewarding aspects of being a parent come from the experience of watching them grow up on a daily basis and directly contributing to that growth, not from picking out physical or other characteristics that happen to remind me of myself.

Comment by IanDavidMoss on Earn To Give $1M/year or Work Directly? · 2022-08-31T15:31:00.084Z · EA · GW

Haha, well it would depend a lot on the specifics but we'd probably at least be up for having a conversation about it :)

Comment by IanDavidMoss on Earn To Give $1M/year or Work Directly? · 2022-08-31T02:00:40.297Z · EA · GW

Maybe indirectly? Addressing talent gaps within the EA community isn't a primary focus of ours, but it does seem that our outreach is helping to increase the pool of mid-career and senior people out in the world who take EA seriously.

Comment by IanDavidMoss on Earn To Give $1M/year or Work Directly? · 2022-08-30T00:24:46.736Z · EA · GW

Effective Institutions Project here. As of now I'd say our number is more like $150-200K, assuming we're talking about an annual commitment. The number is lower because our networks give us access to a large talent pool and I'm fairly optimistic that we can fill openings easily once we have the budget for them.

Comment by IanDavidMoss on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-15T16:39:57.155Z · EA · GW

Thanks for the response!

I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside

That's fair, and I should also be clear that I'm less familiar with LTFF's grantmaking than some others in the EA universe.

It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in the absence of it, it's not necessarily an optimal strategy to substitute an extreme version of the precautionary principle instead.

Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.

Comment by IanDavidMoss on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-13T20:11:39.549Z · EA · GW

I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.

To back this up a bit, let's take a closer look at the risk factors Asya cited in the comment above. 

  • Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, who are also trying to influence the outcomes of legislation/regulation/etc. in their preferred direction. As a result, making policies actually become reality is inherently quite hard and almost impossible to make happen without achieving buy-in from a diverse range of stakeholders. While that process can be frustrating and often results in watering down really good ideas to something less inspiring, it is actually quite good for mitigating the downside risks from bad policies! It's understandable to think of such a volatile mix of influences as scary and something to be avoided, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, these interests at least in part do reflect the landscape of competing motivations and goals in the public more generally, and thus are often relevant for whether a policy idea will be successful or not.
  • Making key issues partisan. My view is that this is much more likely to happen by way of involvement in electoral politics than traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn's bid for Congress. By the logic of EA grantmakers worried about partisan politicization, my sense is that the Flynn campaign is one of the riskiest things this community has ever taken on (and remember, we only saw the primary -- if he had won and run in the general, many Republican politicians' and campaign strategists' first exposure to EA and longtermism would have been by way of seeing a Democrat supported by two of the largest Democratic donors running on EA themes in a competitive race against one of their own.) And yet as it turned out, it did not result in longtermism being politicized basically at all. So while the jury is still out, perhaps a reasonable working hypothesis based on what we've seen thus far is that "try to do good and help people" is just not a very polarizing POV for most people, and therefore we should stress out about it a little less.
  • Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If you have someone leading a policy initiative who is any of those things, they probably aren't going to make much progress and their work thus won't cause much harm (other than wasting the grantmaker's money). Furthermore, the increasing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate the concern expressed here, as the former factors are much more likely to influence a broad set of policymakers' opinions and actions.
  • “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be more of a general concern about grantmaking to early-stage organizations and doesn't strike me as unique to the policy space at all. If anything, it seems to rest on a questionable premise that there is only one channel for communicating with policymakers and only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already have huge ecosystems of people trying to influence policy outcomes; another entrant into the mix isn't going to take up much space at all. But also, policymakers themselves are part of a huge bureaucratic apparatus and there are many, many potential levers and points of access that can't all possibly be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn't let that in itself be a barrier to policy entrepreneurship, IMHO.

To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers' processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed -- if you aren't trying to get the policymaker's attention tomorrow, who's going to get their ear instead, and how likely might it be that it's someone you'd really prefer they didn't listen to?

While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.

  1. ^

    Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.

Comment by IanDavidMoss on Why EA needs Operations Research: the science of decision making · 2022-07-21T20:05:13.782Z · EA · GW

Re: "Why haven't I heard of OR?", I think your comments on the fragmentation and branding challenges are extremely on point. Last year Effective Institutions Project did a scoping exercise looking at different fields and academic disciplines that intersect with institutional decision-making, and it was amazing to see the variety of names and frames for what is ultimately a collection of pretty similar ideas. With that said, I think the directions that have been explored under the OR banner are particularly interesting and impressive, and am really glad to have someone in the community who knows that field well!

Comment by IanDavidMoss on Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently · 2022-07-11T11:24:12.315Z · EA · GW

One thing that occurs to me is that your post assumes that the only way to address the issues raised here is to hire different people and/or give them different responsibilities. But another possible route is for EA organizations to make more use of management consultancies. That could be a path worth considering for small nonprofits whose leaders mainly do just want to hire someone to take care of all the tasks they don't want to do themselves, and whose opportunity to make use of more strategic and advanced operations expertise is likely to be too sporadic to satisfy an experienced operations professional, especially one whose experience is mostly with larger organizations or companies and who is not strongly aligned with EA values. Said experienced ops pros could in turn perhaps do more of the work they want to do (and be better paid for it) working for a consultancy rather than in-house at a small organization.

I know there have been some efforts to get an EA-branded management consulting agency going since Luke's post last year but am not aware of any of them hitting paydirt quite yet -- happy to connect you or others interested to relevant people as appropriate. The main barrier as I understand it so far has been EA orgs' lack of demonstrated demand for the services, but I wouldn't necessarily take this as a signal that the resources are already there in-house or that there would be no benefit to the organizations from accessing them.

Comment by IanDavidMoss on What is Operations Management? · 2022-07-10T16:30:34.192Z · EA · GW

I think this post is excellent overall, but I do want to register a disagreement with your bid to separate operations work from the work that PAs do in most small nonprofit organizations. You have a keen observation about how the nature of operations work changes with scale: at top levels of a multinational corporation, the notion of a senior operations executive doing PA-style work is ludicrous. But for most EA organizations, that comparison is kind of nonsensical; we're talking about small outfits with 2-6 staff members and a mishmash of interns, contractors, volunteers, and other loosely affiliated workers, not a 100,000-person behemoth with offices around the world. In the context of a small nonprofit, the proportion of operations work that looks like PA work is typically much larger than in a huge company. Similarly, I disagree with statements like "You can make a decent argument that the janitor taking out the garbage is necessary for the core functions of the business to go forward (because nobody could work if the floor was covered with garbage), but I think you would be hard pressed to find somebody who considers the janitor to be part of the operations department." In my experience, organizations (such as schools) that hire janitors consider them maintenance staff; they are situated in the Facilities department, and Facilities is overseen by the COO or equivalent.

Comment by IanDavidMoss on Before There Was Effective Altruism, There Was Effective Philanthropy · 2022-06-26T23:06:33.148Z · EA · GW

Do you think that some of the people who would have been attracted to effective philanthropy in the past now just join effective altruism?

Some, sure. EA seems to be a lot more mainstream now than it was even 3-4 years ago, so that's probably the main reason.

While I think EP has been influential, I just didn't find the work from CEP and similar places as intellectually engaging as what EA puts out (or as important overall).

I think the main thing EA has going for it over EP is that it has a much better track record of taking ideas seriously. EP explored a lot of promising directions and anticipated a number of things that EA organizations ended up doing (e.g., incorporating expected value estimates into grantmaking). But in my view the key players, in trying to optimize for elite credibility at the same time as intellectual rigor, didn't give themselves enough weirdness points to work with. As a result, they both failed to pursue their best ideas to their logical conclusion and didn't do enough to distinguish between transformative ideas and mediocre ones.

Comment by IanDavidMoss on Before There Was Effective Altruism, There Was Effective Philanthropy · 2022-06-26T21:14:05.613Z · EA · GW

I wasn't there at the very beginning, but have followed the effective philanthropy "scene" since 2007 or so. My sense is that most EA community members aren't very knowledgeable about this whole side of institutional philanthropy, so I was pleasantly surprised to see the history recounted pretty accurately here! With that said, one quibble is that the book you cited entitled Effective Philanthropy by Mary Ellen Capek and Molly Mead is not one I'd ever heard of before reading this post; I think this is just a case of a low-profile resource happening to get good Google search results years later.

Here is a bit of additional background on the key players and some of their intersections, as I understand it:

  • The effective philanthropy movement was very much a child of the original dot-com boom in the late 1990s. While CEP is based in Boston, the scene was mostly driven by an earlier generation of West Coast tech magnates who were interested in bringing business concepts like results-based management to philanthropy. Education funding was viewed as a major priority and there were close ties to the charter school movement, which saw a number of influential organizations like KIPP incubated by funders looking to put these ideas into practice. With that said, CEP's Phil Buchanan has consistently pushed back against the idea that nonprofits are analogous to businesses, despite his own MBA from Harvard Business School.
  • The William and Flora Hewlett Foundation has an Effective Philanthropy Program and has been a major financial supporter of CEP for a long time. Hewlett's former president Paul Brest (2000-2012) pioneered the notion of "strategic philanthropy" which is closely related both in spirit and sociologically to this movement. Fun trivia note: Hewlett's Effective Philanthropy program was an early funder of GiveWell at the time when that organization was precariously situated (i.e., pre-Dustin & Cari).
  • Stanford Social Innovation Review was closely associated with this scene as well. With startup funding from Hewlett, I believe it was intended to be a Harvard Business Review for the social sector when it was founded in 2003. (HBR had published the original article on "venture philanthropy" in 1997.)
  • Some other funders that have been influential include Mario Marino's Venture Philanthropy Partners and his Leap of Reason community, the Edna McConnell Clark Foundation, the Robin Hood Foundation, and REDF (which developed the social return on investment methodology, a form of cost-benefit analysis).

Over the past decade, the consensus among US-based staffed foundations has shifted hard against some of the technocratic premises that drove the effective philanthropy movement, in particular its emphasis on measurable outcomes and its tendency to invest lots of funder resources in strategy development. The Whitman Institute's work probably contributed in a minor way to that dynamic, but in my reading a much stronger influence has been the growing emphasis on racial justice in the nonprofit sector since the dawn of the Black Lives Matter movement, which, via a variety of pathways including the widespread socialization of Tema Okun's work, caused so-called "top-down" approaches like effective/strategic philanthropy to feel out of touch with the moment. One of the earliest points of tension was a series the National Committee for Responsive Philanthropy began publishing in 2009, called "Philanthropy at its Best," critiquing then-current foundation practices; Brest wrote a four-part essay responding to it in 2011. A parallel thread of critique comes from complexity science, via the argument that the wicked problems philanthropy is trying to solve are knotty enough that predicting the outcomes of philanthropic investments at any meaningful level of detail is a fool's errand, and that funders should therefore defer to the expertise of grantees wherever possible. On that front, this essay from one of the co-founders of FSG (a philanthropy consultancy closely associated with Harvard Business School and the early days of venture philanthropy) was particularly influential.

I don't believe there was one single event that caused the momentum around effective philanthropy to fall apart, but by 2016 or so it was clear that its peak was in the rear-view mirror. A particularly dramatic turn came when Hal Harvey, Paul Brest's co-author on their 2008 book Money Well Spent (written while Brest was still president of Hewlett), wrote an op-ed apologizing for his role in advancing strategic philanthropy. There's a much longer conversation to have about which of the critiques of effective philanthropy are worth attending to (and to what extent), and how they relate to effective altruism, but I'm happy to see it pointed out that many of the topics EA is most concerned with have been discussed at length in other venues.

Comment by IanDavidMoss on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T19:48:45.536Z · EA · GW

I don't have any inside info here, but based on my work with other organizations I think each of your first three hypotheses are plausible, either alone or in combination.

Another consideration I would mention is that it's just really hard to judge how to interpret advocacy failures over a short time horizon. Given that your first try failed, does that mean the situation is hopeless and you should stop throwing good money after bad? Or does it mean that you meaningfully moved the needle on people's opinions and the next campaign is now likelier to succeed? It's not hard for me to imagine that in 2016-17 or so, having seen some intermediate successes that didn't ultimately result in legislation signed into law, OP staff might have held out genuine hope that victory was still close at hand. Or after the First Step Act was passed in 2018 and signed into law by Trump, maybe they thought they could convert Trump into a more consistent champion on the issue and bring the GOP along with him. Even as late as 2020, when the George Floyd protests broke out, Chloe's grantmaking recommendations ended up being circulated widely and presumably moved a lot of money; I could imagine there was hope at that time for transformative policy potential. Knowing when to walk away from sustained but not-yet-successful efforts at achieving low-probability, high-impact results, especially when previous attempts have unknown correlations with the probability of future success, is intrinsically a very difficult estimation problem. (Indeed, if someone at QURI could develop a general solution to this, I think that would be a very useful contribution to the discourse!)
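To make the difficulty concrete, here's a toy model (the Beta-prior framing and every number are my own assumptions for illustration, not anything from Open Phil's or OP's analysis): treat each campaign as a draw with an unknown per-attempt success probability, update that estimate after each failure, and compare the expected value of one more attempt against its cost.

```python
# Toy model: Bayesian updating on repeated advocacy failures.
# All inputs below are invented for illustration.

def posterior_mean(alpha: float, beta: float, failures: int) -> float:
    """Mean of a Beta(alpha, beta) prior on success probability,
    updated on `failures` observed failed attempts."""
    return alpha / (alpha + beta + failures)

def ev_of_next_attempt(p_success: float, payoff: float, cost: float) -> float:
    """Expected net value of funding one more campaign."""
    return p_success * payoff - cost

# Assumed inputs: a weak prior of ~10% per-attempt success, a $1B-equivalent
# policy win, and a $50M campaign cost.
alpha, beta = 1.0, 9.0
payoff, cost = 1_000_000_000, 50_000_000

for failures in range(5):
    p = posterior_mean(alpha, beta, failures)
    ev = ev_of_next_attempt(p, payoff, cost)
    print(f"{failures} failures: P(success) ~ {p:.3f}, EV of next try ~ ${ev:,.0f}")
```

Under these assumed numbers, several straight failures barely move the decision because the hypothetical payoff dwarfs the per-campaign cost — which is part of why the walk-away point is so hard to time (and why the answer is so sensitive to the correlation structure this toy model ignores).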

Comment by IanDavidMoss on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T18:00:47.391Z · EA · GW

One context note that doesn't seem to be reflected here is that in 2014, there was a lot of optimism for a bipartisan political compromise on criminal justice reform in the US. The Koch network of charities and advocacy groups had, to some people's surprise, begun advocating for it in conservative-libertarian circles, which in turn motivated Republican participation in negotiations on the Hill. My recollection is that Open Phil's bet on criminal justice reform funding was not just a "bet on Chloe," but also a bet on tractability: i.e., that a relatively cheap investment could yield a big win on policy because the political conditions were such that only a small nudge might be needed. This seems to have been an important miscalculation in retrospect, as (unless I missed something) a limited-scope compromise bill took until the end of 2018 to get passed. I'm not aware of any other significant criminal justice legislation that has passed in that time period. [Edit: while this is true at the national level, arguably there has been a lot of progress on CJR at state and local levels since 2014, much of which could probably be traced back to advocacy by groups like those Open Phil funded.]

This information strongly supports the "Leverage Hypothesis," which was cited by Open Phil staff themselves, so I think it ought to be weighted pretty strongly in your updates.

Comment by IanDavidMoss on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-09T14:22:37.158Z · EA · GW

Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside of the Bay.)

Comment by IanDavidMoss on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-09T14:12:32.022Z · EA · GW

I like this comment because it does a great job of illustrating how socioeconomic status influences the risks one can take. Consider the juxtaposition of these two statements:

(from the comment)

Maybe this is mainly targeted at undergraduate students, who are more likely to have a few months of time over the summer with no commitments. But in that case how do they have the money to do what is basically an extended vacation? Most students aren't earning much/any money. 

  • Maybe this is only targeted at students who have wealthy families willing to fund expensive adventures.

(from the OP)

It’s unclear from the outside:

  • How easy it is to start a project and how secure this is relative to starting ambitious things outside of EA. Funding, advisors, a high-trust community, and social prestige are available...Looking at what scale EA projects in the bay operate at disperses false notions of limits and helps shoot for the correct level of ambition

Even once you know these things intellectually, it’s hard to act in accordance with them before knowing them viscerally, e.g., viscerally feel secure in starting an ambitious project. Coming to Berkeley really helps with that.

Let's say that for a typical motivated early-career EA, there's a 60% chance that moving to the Bay will result in desirable full-time employment within one month. (I have no idea if that's the correct number, just taking a wild guess.) From an expected-value standpoint, that seems like a great deal! Of course you would do that! But for someone who's resource-constrained, that 40% combined with the high living costs are really big red flags. What happens if things don't work out? What happens is that you've now blown all your savings and are up shit creek, and if you didn't embed yourself in the community well enough during that time to get a job, you probably don't have enough good friends to help you out of a financial hole either. So do you make the leap? Without a safety net or an upfront commitment, it's so much harder to opt for high-upside but riskier pathways, and that in turn ends up impacting the composition of the community.
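As a rough sketch of that asymmetry (every dollar figure here is invented for illustration), compare the move's expected dollar value, which is identical for everyone, with its expected log utility over remaining savings, which punishes outcomes that leave you near zero:

```python
# Toy comparison: expected dollar value vs. expected log utility of moving
# to the Bay. All numbers are assumptions, not estimates of real costs.
import math

def ev_of_move(p_job: float, salary_gain: float, cost: float) -> float:
    """Expected dollar value of relocating: upside if hired, cost sunk either way."""
    return p_job * salary_gain - cost

def expected_log_utility(p_job: float, savings: float,
                         cost: float, salary_gain: float) -> float:
    """Expected log utility of wealth after the gamble (log utility ~ risk averse)."""
    win = math.log(savings - cost + salary_gain)
    lose = math.log(max(savings - cost, 1))  # floor at $1 to keep log finite
    return p_job * win + (1 - p_job) * lose

# Assumed: $8k relocation/living cost, $20k first-year upside, 60% hire rate.
cost, gain, p = 8_000, 20_000, 0.6

print(ev_of_move(p, gain, cost))  # ~ +$4,000 in expectation, for everyone

for savings in (10_000, 100_000):
    stay = math.log(savings)
    move = expected_log_utility(p, savings, cost, gain)
    print(f"savings ${savings:,}: stay={stay:.2f}, move={move:.2f}")
```

With these assumed numbers the expected dollar value is positive for both people, but the log-utility comparison flips sign with savings: the person with $10k is better off staying while the person with $100k is better off moving — a formal version of "no safety net, no leap."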

Comment by IanDavidMoss on Unflattering reasons why I'm attracted to EA · 2022-06-03T19:10:42.991Z · EA · GW

Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.

The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:

  • Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
  • EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong / I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
  • It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people

The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact--indeed, in some contexts it's essential--and for that reason we shouldn't reject it on principle. And just like I think it's okay to enjoy money, I think it's okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it's easy to mistake social signals of importance for actual importance (aka LARPing at impact).

I grouped the "intellectual puzzle" and "get my hands dirty" items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it's a big reason why I keep coming back to this community, but it mustn't come at the expense of the A in EA. I group this with "get my hands dirty" because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.

The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA's core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).

Comment by IanDavidMoss on Revisiting the karma system · 2022-05-30T17:20:00.459Z · EA · GW

I think the issue is more that different users have very disparate norms about how often to vote, when to use a strong vote, and what to use it on. My sense (from a combination of noticing voting patterns and reading specific users' comments about how they vote) is that most are pretty low-key about voting, but a few high-karma users are much more intense about it and don't hesitate to throw their weight around. These users can then have a wildly disproportionate effect on discourse because if their vote is worth, say, 7 points, their opinion on one piece of content vs. another can be and often is worth a full 14 points.

In addition to scaling down the weight of strong votes as MichaelStJules suggested, another corrective we could think about is giving all users a limited allocation of strong upvotes/downvotes they can use, say, each month. That way high-karma users can still act in a kind of decentralized moderator role on the level of individual posts and comments, but it's more difficult for one person to exert too much influence over the whole site.
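A minimal sketch of how such a budget could work (this is purely hypothetical — it is not how the Forum's karma system is actually implemented, and the weights and monthly allocation are made up): each user gets a fixed number of strong votes per month, and once the budget is spent, further votes fall back to normal weight.

```python
# Hypothetical mechanism: a monthly budget of strong votes per user.
# Weights and budget are invented; this is not the Forum's actual code.
from dataclasses import dataclass

MONTHLY_STRONG_VOTE_BUDGET = 10  # assumed allocation

@dataclass
class Voter:
    normal_weight: int = 1
    strong_weight: int = 7  # assumed weight for a high-karma user
    strong_votes_left: int = MONTHLY_STRONG_VOTE_BUDGET

    def vote(self, strong: bool) -> int:
        """Return the karma delta this vote applies, spending budget if strong."""
        if strong and self.strong_votes_left > 0:
            self.strong_votes_left -= 1
            return self.strong_weight
        return self.normal_weight  # budget exhausted: downgrade to normal vote

user = Voter()
deltas = [user.vote(strong=True) for _ in range(12)]
print(deltas)  # the first 10 votes carry weight 7; the last 2 fall back to 1
```

The design goal is exactly the decentralized-moderator role described above: high-karma users can still strongly boost or bury a handful of posts each month, but no one person can apply a 14-point swing to every piece of content they encounter.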

Comment by IanDavidMoss on Revisiting the karma system · 2022-05-30T17:06:53.438Z · EA · GW

Sorry if I'm being dense, but where is this 4-tuple available?

Comment by IanDavidMoss on Revisiting the karma system · 2022-05-29T17:58:12.922Z · EA · GW

I would be in favor of eliminating strong downvotes entirely. If a post or comment is going to be censored or given less visibility, it should be because a lot of people wanted that to happen rather than just two or three.

Comment by IanDavidMoss on Focus on Civilizational Resilience over Cause Areas · 2022-05-26T21:13:14.971Z · EA · GW

Thought-provoking! I think one organization that illustrates the split you're talking about well is the Centre for Long Term Resilience. This is an EA-aligned advocacy group focused on increasing awareness of and capacity to respond to extreme risks in the UK government. What's interesting with regard to your post is that they've divided their work into three divisions. The first two, AI and biosecurity, are issue-specific. But the third division focuses on generalized "risk management," or as they put it, "the process of both transforming risk governance, and of identifying, assessing and mitigating all extreme risks."

I think I agree with you that the community could benefit from more investment in cross-issue and exploratory work of this type. The rest of the world is already too siloed and EA set itself apart in the first place with its commitment to cause neutrality. It's important to retain that openness and integrative approach even as we get deeper into implementation work on high-impact causes.

Comment by IanDavidMoss on Early spending research and Carrick Flynn · 2022-05-19T12:15:55.294Z · EA · GW

Why isn't it easy for me to look up a beautiful 80,000 Hours article on how to build a campaign in a contested US Congressional primary?

FWIW, there are several US-based candidate training programs out there that aren't EA-specific but would give people the advice and skills they need to run a competitive campaign, while also developing helpful networks in the political sphere. For example, Run for Something is well positioned on the Democratic side and in the midst of a massive expansion this year. New Politics may be of interest to some as well, as they focus on candidates with a service background (e.g., military, Americorps) and work on both sides of the aisle.

Comment by IanDavidMoss on The Many Faces of Effective Altruism · 2022-05-19T00:55:05.262Z · EA · GW

I enjoyed this! I don't quite get the distinction between #1 and #6, though. Is the primary axis around the weirdness of the ideas you're into? So, Wholesome EAs are anti-weirdness, cheerful utilitarians are weirdness-neutral, and contrarians are pro-weird?

Comment by IanDavidMoss on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T12:50:25.730Z · EA · GW

I've been thinking about this too. I was really struck by the contrast between the high level of explicit support for "one of our own" running for office vs. the usual resistance to political activism or campaigning otherwise. Personally, I'm strongly in favor of good-faith political campaigning on EA grounds, but from my perspective explicit ties to the EA community shouldn't matter so much in that calculus -- rather, what matters is our expectations of what the candidates would do to advance or block EA-aligned priorities, whether the candidates are branded as EA or not.

In 2020 I suggested that it might be a good idea to set up an entity to vet and endorse candidates for office on EA grounds. While I'm sure such an entity would have still supported Carrick in retrospect, I think one benefit of having a resource like this is that it would allow us to identify, support, and develop relationships with other politicians around the US and in the rest of the world who would be really helpful to have in office while not facing some of the disadvantages of being a newcomer/outsider that Carrick faced.

Comment by IanDavidMoss on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T12:39:18.861Z · EA · GW

Yes, I strongly agree with this. Almost all money in politics goes to establishing and maintaining narratives about the candidates, but money becomes a problem rather than a help in politics when the supporter and candidate allow the money itself to become the narrative. This is especially true in a Democratic primary.

Comment by IanDavidMoss on Norms and features for the Forum · 2022-05-16T12:31:08.899Z · EA · GW

I expect high karma to cause a post to get read more, if only because of readers' fear of missing out.

I would have phrased this claim a bit more confidently, as there are systems in place that basically ensure this will be the case, at least on average. For example, higher-karma posts stay on the front page longer and are more likely to be selected for curation in the EA Forum Newsletter, the EA Newsletter, and other off-site amplification channels.