Posts

Free/low-cost decision support services for the EA community 2021-09-29T17:04:31.142Z
Improving Institutional Decision-Making: Which Institutions? (A Framework) 2021-08-23T02:26:57.525Z
Vitalik Buterin just donated $54M (in ETH) to GiveWell 2021-05-14T01:30:42.225Z
AMA: Ian David Moss, strategy consultant to foundations and other institutions 2021-03-02T16:55:48.183Z
Improving Institutional Decision-Making: a new working group 2020-12-28T05:47:29.194Z
Recommendations for prioritizing political engagement in the 2020 US elections 2020-10-14T13:52:23.564Z
When does it make sense to support/oppose political candidates on EA grounds? 2020-10-14T13:51:38.090Z
Prioritizing COVID-19 interventions & individual donations 2020-05-06T21:29:12.249Z
All causes are EA causes 2016-09-25T18:44:42.347Z
Reflections on EA Global from a first-time attendee 2016-09-18T13:38:25.752Z

Comments

Comment by IanDavidMoss on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T00:46:38.685Z · EA · GW

Great to see more attention on this topic! I think there is an additional claim embedded in this proposal which you don't call out:

6. Categories of intervention in the wisdom/intelligence space are sufficiently differentiated in long-term impact potential for a prioritization exercise to yield useful insights.

I notice that I'm intuitively skeptical about this point, even though I basically buy your other premises. It strikes me that there is likely to be much more variation in impact potential between specific projects or campaigns, e.g. at the level of a specific grant proposal, than there is between whole categories, which are hard to evaluate in part because they are quite complementary to each other and the success of one will be correlated with the success of others. You write, "We wouldn’t want to invest a lot of resources into one field, to realize 10 years later that we could have spent them better in another." But what's to say that this is the only choice we face? Why not invest across all of these areas and chase optimality by judging opportunities on a project-by-project basis rather than making big bets on one category vs. another?

Comment by IanDavidMoss on The Cost of Rejection · 2021-10-13T12:19:56.018Z · EA · GW

As another option for getting feedback, many college and university career development offices offer counseling to their schools' alumni, and resume review (often in the context of applications to specific jobs) is one of the standard services they provide at no extra charge.

Comment by IanDavidMoss on Noticing the skulls, longtermism edition · 2021-10-06T00:30:29.557Z · EA · GW

But essential to the criticism is that I shouldn't decide for them.

It seems like this is a central point in David's comment, but I don't see it addressed in any of what follows. What exactly makes it morally okay for us to be the deciders?

It's worth noting that in both US philanthropy and the international development field, there is currently a big push toward incorporating affected stakeholders and people with firsthand experience of the issue at hand directly into decision-making for exactly this reason. (See, e.g., participatory grantmaking, the Equitable Evaluation Initiative, and the process that fed into the Sustainable Development Goals.) I recognize that longtermism is premised in part on representing the interests of moral patients who can't represent themselves. But the question remains: what qualifies us to decide on their behalf? I think the resistance to longtermism in many quarters has much more to do with a suspicion that the answer to that question is "not much" than with any explicit valuation of present people over future people.

Comment by IanDavidMoss on Improving Institutional Decision-Making: Which Institutions? (A Framework) · 2021-09-23T23:35:43.848Z · EA · GW

Thanks for the comment!

Do you have any further thoughts since posting this regarding how difficult vs valuable it is to attempt quantification of the values? Approximately how time-consuming is such work in your experience?

With the caveat that I'm someone who's pretty pro-quantification in general and also unusually comfortable with high-uncertainty estimates, I didn't find the quantification process to be all that burdensome. In constructing the FDA case study, far more of my time was spent on qualitative research to understand the potential role the FDA might play in various x-risk scenarios than on coming up with and running the numbers. Hope that helps!

Comment by IanDavidMoss on Does the Forum Prize lead people to write more posts? · 2021-09-21T18:40:21.926Z · EA · GW

I agree with other commenters who have pointed out that using "more posts by previous prize-winning authors" as a proxy for the stated goal of "the creation of more content of the sort we want to see on the Forum" seems like a strange way to evaluate the efficacy of the Forum Prize. In addition to the points already mentioned, I would add two more:

  • It doesn't consider potential variation in quality among posts by the same author. If prize-winning authors feel they have set a standard that's important to keep meeting, and as a result they post less frequent but more thoughtful articles, that's generally a trade I'd be happy to accept as a reader.
  • It ignores the potential impact of the Forum Prize on other people's writing. How many people have been inspired to write something either because of the existence of the prize itself or because of some piece of writing that they learned about because of the prize? I would bet it's not zero.

Indeed, I would argue that the prize adjudication process itself offers a useful infrastructure for evaluating the Forum experience. Since you have a record of the scores that posts received each month as well as the qualitative opinions of longtime judges, you have the tools you need to assess in a semi-rigorous way whether the quality of the top posts has increased or decreased over time.
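
To illustrate, here's a minimal sketch of what that semi-rigorous check might look like, assuming a hypothetical record of mean monthly judge scores (the data format and numbers are invented for illustration):

```python
# Minimal sketch: is there a trend in the mean judge score of top posts?
# Assumes a hypothetical {month_index: mean_score} record; Python 3.10+.
from statistics import linear_regression

monthly_top_scores = {
    0: 7.9, 1: 8.1, 2: 7.6, 3: 8.4, 4: 8.0, 5: 8.6,
    6: 8.2, 7: 8.8, 8: 8.5, 9: 8.9, 10: 8.7, 11: 9.0,
}

months = list(monthly_top_scores)
scores = [monthly_top_scores[m] for m in months]

# A positive slope suggests top-post quality (as judged) rose over the period.
slope, intercept = linear_regression(months, scores)
print(f"Trend: {slope:+.3f} points per month")
```

Pairing a simple trend like this with the judges' qualitative notes would go a lot further toward answering the "is the Forum getting better?" question than counting posts by past winners.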

I also wanted to express that if CEA really is ceasing the Forum Prize as such, that seems like a fairly major decision that should get its own top-level post, as the prize announcements themselves do. As it is, it's buried in an article whose title poses what I think most people would consider to be a pretty esoteric research question, so I expect that a lot of people will miss it.

Comment by IanDavidMoss on Disentangling "Improving Institutional Decision-Making" · 2021-09-14T12:49:23.003Z · EA · GW

Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable?

It does, and I admittedly wrote that part of the comment before fully understanding your argument about classifying the development of general-use decision-making tools as being value-neutral. I agree that there has been a nontrivial focus on developing the science of forecasting and other approaches to probability management within EA circles, for example, and that those would qualify as value-neutral using your definition, so my earlier statement that value-neutral is "not really a thing" in EA was unfair.

If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle.

Yeah, I also thought of suggesting this, but think it's problematic as well. As you say, power/scope is correlated with decision quality, although more on a long-term time horizon than in the short term and more for some kinds of organizations (corporations, media, certain kinds of nonprofits) than others (foundations, local/regional governments). I think it would be more parsimonious to just replace decision quality with institutional capabilities on the graphs and to frame DQ in the text as a mechanism for increasing the latter, IMHO. (Edited to add: another complication is that the line between institutional capabilities that come from DQ and capabilities that come from value shift is often blurry. For example, a nonprofit could decide to change its mission in a way that greatly enlarges the scope of its impact potential, such as by shifting to a wider geographic focus. This would represent a value improvement by EA standards, but it might also open the organization up to greater possibilities for scale through access to new funders, etc.)
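
To make the geometry concrete, here's a toy sketch of the rectangle vs. prism framings (all numbers are invented, and as noted above the dimensions aren't really independent):

```python
# Toy illustration only: impact as area (2 dimensions) vs. volume (3 dimensions).

def impact_area(value_alignment: float, decision_quality: float) -> float:
    # The 2x2-grid framing: intentions x capability.
    return value_alignment * decision_quality

def impact_volume(value_alignment: float, decision_quality: float,
                  power: float) -> float:
    # Adding power/scope as a third axis.
    return value_alignment * decision_quality * power

# Denmark-vs-US style example: better decision quality can be
# swamped by a large difference in institutional power.
print(impact_volume(0.8, 0.9, 1.0))  # high decision quality, modest scope -> 0.72
print(impact_volume(0.8, 0.6, 5.0))  # lower decision quality, far greater scope -> 2.4
```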

Would you mind if I added an excerpt from this or a summary to the post?

No problem, go ahead!

Comment by IanDavidMoss on Disentangling "Improving Institutional Decision-Making" · 2021-09-14T07:08:44.223Z · EA · GW

Wow! It's really great to see such an in-depth response to the definitional and foundational work that's been taking place around IIDM over the past year, plus I love your hand-drawn illustrations! As the author or co-author of several of the pieces you cited, I thought I'd share a few thoughts and reactions to different issues you brought up. First, on the distinctions and delineations between the value-neutral and value-oriented paradigms (I like those labels, by the way):

  • I don't quite agree that Jess Whittlestone's problem profile for 80K falls into what you're calling the "value neutral" category, as she stresses at several points the potential of working with institutions that are working on "important problems" or similar. For example, she writes: "Work on 'improving decision-making' very broadly isn’t all that neglected. There are a lot of people, in both industry and academia, trying out different techniques to improve decision-making....However, there seems to be very little work focused on...putting the best-proven techniques into practice in the most influential institutions." The definition of "important" or "influential" is left unstated in that piece, but from the context and examples provided, I read the intention as one of framing opportunities from the standpoint of broad societal wellbeing rather than organizations' parochial goals.
  • This segues nicely into my second response, which is that I don't think the value-neutral version of IIDM is really much of a thing in the EA community. CES is sort of an awkward example to use because a core tenet of democracy is the idea that one citizen's values and policy preferences shouldn't count more than another's; I'd argue that the impartial welfarist perspective that's core to EA philosophy is rooted in similar ideas. By contrast, I think people in our community are much more willing to say that some organizations' values are better than others, both because organizations don't have the same rights as human beings and also because organizations can agglomerate disproportionate power more easily and scalably than people. I've definitely seen disagreement about how appropriate or effective it is to try to change organizations' values, but not so much about the idea that they're important to take into account in some way.
  • There is a third type of value-oriented approach that you don't really explore but I think is fairly common in the EA community as well as outside of it: looking for opportunities to make a positive impact from an impartial welfarist perspective on a smaller scale within a non-aligned organization (e.g., by working with a single subdivision or team, or on one specific policy decision) without trying to change the organization's values in a broader sense.

I appreciated your thought-provoking exploration of the two indirect pathways to impact you proposed. Regarding the second pathway (selecting which institutions will survive and flourish), I would propose that an additional complicating factor is that non-value-aligned institutions may be less constrained by ethical considerations in their option set, which could give them an advantage over value-aligned institutions from the standpoint of maximizing power and influence.

I did have a few critiques about the section on directly improving the outcomes of institutions' decisions:

  • I think the 2x2 grid you use throughout is a bit misleading. It looks like you're essentially using decision quality as a proxy for institutional power, and then concluding that intentions x capability = outcomes. But decision quality is only one input into institutional capabilities, and in the short term is dominated by institutional resources—e.g., the government of Denmark might have better average decision quality than the government of the United States, but it's hard to argue that Denmark's decisions matter more. For that reason, I think that selecting opportunities on the basis of institutional power/positioning is at least as important as value alignment. The visualization approach you took in the "A few overwhelmingly harmful institutions" graph seems to be on the right track in this respect.
  • One issue you don't really touch on except in a footnote is the distinction between stated values and de facto values for institutions, or internal alignment among institutional stakeholders. For example, consider a typical private health insurer in the US. In theory, its goal is to increase the health and wellbeing of millions of patients—a highly value-aligned goal! Yet in practice, the organization engages in many predatory practices to serve its own growth, enrichment of core stakeholders, etc. So is this an altruistic institution or not? And does bringing its (non-altruistic) actions into greater alignment with its (altruistic) goals count as improving decision quality or increasing value alignment under your paradigm?

While overall I tend to agree with you that a value-oriented approach is better, I don't think you give a fair shake to the argument that "value-aligned institutions will disproportionately benefit from the development of broad decision-making tools." It's important to remember that improving institutional decision-making in the social sector and especially from an EA perspective is a very recent concept. The professional world is incredibly siloed, and it's not hard at all for me to imagine that ostensibly publicly available resources and tools that anyone could use would, in practice, be distributed through networks that ensure disproportionate adoption by well-intentioned individuals and groups. I believe that something like this is happening with Metaculus, for example.

One final technical note: you used "generic-strategy" in a different way than we did in the "Which Institutions?" post—our definition imagines a specific organization that is targeted through a non-specific strategy, whereas yours imagines a specific strategy not targeted to any specific organization. I agree that the latter deserves its own label, but would suggest one other than "generic-strategy" to avoid confusion with the previous post.

I've focused mostly on criticisms here for the sake of efficiency, but I really was very impressed with this article and hope to see more writing from you in the future, on this topic and others!

Comment by IanDavidMoss on Miranda_Zhang's Shortform · 2021-07-26T20:55:44.079Z · EA · GW

(I'm also wondering whether I am being overly concerned with theoretically justifying things!)

I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise and you're letting that constrain your choices a lot. Maybe that will be a good choice if you're definitely going into academia as a political scientist after this, but otherwise, I would structure the approach around how research happens most naturally in the real world, which is that you have a research question that would have concrete practical value if it were answered, and then you set out to answer it using whatever combination of theories and methods makes sense for the question.

Comment by IanDavidMoss on Miranda_Zhang's Shortform · 2021-07-26T14:14:07.737Z · EA · GW

Suggestion: use an expert lens, but make the division you're looking at [experts connected to/with influence in the Biden administration] vs. ["outside" experts].

Rationale: The Biden administration thinks of and presents itself to the public as technocratic and guided by science, but as with any administration politics and access play a role as well. As you noted, the Biden administration did a clear about-face on this despite a lack of a clear consensus from experts in the public sphere. So why did that happen, and what role did expert influence play in driving it? Put another way, which experts was the administration listening to, and what does that suggest for how experts might be able to make change during the Biden administration's tenure?

Comment by IanDavidMoss on Miranda_Zhang's Shortform · 2021-07-25T17:27:13.648Z · EA · GW

These both seem like great options! Of the two, I think the first has more to play with as there is a pretty clear delineation between the epistemic vs. moral elements of the second, whereas I think debates about the first have those all jumbled up and it's thus more interesting/valuable to untangle them. I don't totally understand your hesitation so I'm afraid I can't offer much insight there, but with respect to long-term policymaking/shared beliefs, it does seem like the fault lines mapped onto fairly clear pro-free-market vs. pro-redistributive ideologies that drew the types of advocates one would have predicted given that divide.

Comment by IanDavidMoss on Thoughts on new DAF legislation? · 2021-07-21T21:00:54.555Z · EA · GW

FYI, there is an existing discussion of this question on the forum here.

Comment by IanDavidMoss on The case against “EA cause areas” · 2021-07-21T20:49:40.226Z · EA · GW

Great piece! FYI, I wrote an essay with a similar focus and some of the same arguments about five years ago called All Causes are EA Causes. This article adds some helpful arguments, though, in particular the point about the risk of being over-identified with particular cause areas undermining the principle of cause neutrality itself. I continue to be an advocate for applying EA-style critical thinking within cause areas, not just across them!

Comment by IanDavidMoss on Further thoughts on charter cities and effective altruism · 2021-07-21T15:24:37.664Z · EA · GW

Aside from that, neither Open Phil nor Good Ventures is structured as a private foundation (Open Phil is an LLC), so Moskowitz & Tuna aren't subject to the 5% payout rule anyway.

Comment by IanDavidMoss on New blog: Cold Takes · 2021-07-14T19:36:28.569Z · EA · GW

This comment made me laugh out loud, all the more so because I couldn't tell whether you were joking.

Comment by IanDavidMoss on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-07-03T14:17:35.147Z · EA · GW

Perhaps a small number of people who have thought about IIDM carefully and systematically could share their object-level arguments on which approaches seem the most promising to them.

Hi Jonas, I can share some personal reflections on this. Please note that the following are better described as hunches and impressions based on my experiences rather than strongly held opinions -- I'm hopeful that some of the analysis and knowledge synthesis EIP is doing this year will help both the community and me take more confident positions in the future.

  1. Re: institutional design/governance specifically, I would guess that this scored highly because of its holistic and highly leveraged nature. Many institutions are strongly shaped and highly constrained by rules and norms that are baked into the way they operate from the very beginning or close to it, which in turn can make other kinds of reforms much more difficult or less likely to succeed. The most common problem I see in this area is not so much bad design as lack of design, i.e., silos and practices that may have made sense at one particular moment for one particular set of stakeholders, but weren't implemented with any larger vision in mind for how everything would need to function together. This is a common failure mode when organizations grow opportunistically rather than intentionally. My sense is that opportunities to make interventions into institutional design and governance are few and far between, but can be tremendously impactful when they do appear. It's generally easiest to make changes to institutional design early in the life of an institution, but because the scale of operations is often smaller and the prospects for success unclear at that point, it's not always obvious to the participants how much downstream impact their decisions during that period can have.
  2. One of the biggest bottlenecks to improved decision-making in institutions is simply the level of priority and attention the issue receives. There tends to be much more focus in institutions on specific policies and strategies than on the process by which those priorities are determined. At the same time, institutional cultures tend to reflect their leaders' priorities, especially if the leaders are in place for a while. Thus, I'm optimistic about interventions that target the selection and recruitment of leaders with an eye toward choosing people who understand the importance of decision-making processes and are committed to making high-quality decision-making a priority in the organizations they come into.
  3. I think there's a version of moral circle expansion that is very relevant to institutional contexts. Institutions tend to prioritize first and foremost their direct stakeholders, i.e. the interests of people close to the institution. If more of them took seriously the effects of their decisions on everyone, not just those who are their primary voting constituents or intended beneficiaries or paying customers, that would represent a dramatic cultural shift that would make lots of other improvements more feasible. I see this as more of a long-term strategy that will not be easy to pull off, but the potential benefits from making progress on this dimension are massive.

Comment by IanDavidMoss on EA needs consultancies · 2021-06-29T12:24:38.362Z · EA · GW

If anyone's thinking seriously about doing as Linch suggests and would like to talk about the nuts and bolts of consulting, feel free to get in touch. I've been consulting independently for four years and am happy to share what I know/discuss potential collaborations.

Comment by IanDavidMoss on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-14T11:51:02.915Z · EA · GW

This is a really great post, and I particularly appreciated the visual diagrams laying out the "problem tree." A number of aspects of what you're writing about (particularly choice of research questions, the lack of connection with the end user in designing research questions, challenges around research/evidence use in the real world, and incentives created by funders and organizational culture) strongly resonated with me. You might find it interesting to read a couple of articles I've written along these lines:

  • A short piece called "The Crisis of Evidence Use" gathers some empirical data illuminating just how deep the problem you're describing runs. From my perspective the amount of waste in our collective knowledge-building systems is just, one might say, astronomical.
  • For strengthening the connection between commissioned research and end users, I've proposed a model of adding a decision support "wrapper" around the analytical activities to ensure relevance to stakeholder concerns. I welcome feedback and would love to find more partners to help test this idea in practice, so if you know anyone who's interested, please get in touch.

Finally, I just wanted to note a number of overlaps between this post (as well as the meta-science conversation more generally) and issues we're exploring in the improving institutional decision-making community. If you haven't already, I'd like to invite you to join our discussion spaces on Facebook and Slack, and it may be worth a conversation down the line to explore how we can support each other's efforts.

Comment by IanDavidMoss on How well did EA-funded biorisk organisations do on Covid? · 2021-06-05T00:37:53.916Z · EA · GW

It's actually worse than that. As I discovered when researching COVID giving opportunities for the FRAPPE donor group last year, Johns Hopkins experts explicitly recommended against wearing DIY masks in early March (a position reversed by the end of the month) and were not discouraging people from pressing ahead with travel plans as late as March 6. Sanjay had a phone call with them about a year ago in which he confronted them about these reversals, and they offered a sort of half-hearted defense.

I don't have any inside information about why CHS made the choices it did, but my naive view is that I agree with your comment that mistakes like these should reflect poorly on CHS. CHS's core competency may be more in the area of pandemic preparedness than dealing with the pandemic once it's already here, but their experts were quoted in the media a TON last spring and had significant ability (= responsibility) to shape the public conversation about COVID, particularly in the US. And yet lots and lots of people far less credentialed than CHS epidemiologists had correctly figured out by the first week of March that it was smart to wear a mask and to avoid being around others more than was absolutely necessary. It was left to pop-up initiatives led by non-medical experts like #Masks4All to upend the conventional wisdom about masks that had been propagated by the WHO and CDC. I feel like CHS ought to have been well positioned to challenge the prevailing narrative and was instead getting in the way at a time when it really mattered.

Comment by IanDavidMoss on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-03T19:29:46.454Z · EA · GW

(Disclaimer: speaking for myself here, not the IIDM group.)

My understanding is that Max is concerned about something fairly specific here, which is a situation in which we are successful in capturing a significant share of the EA community's interest, talent, and/or funding, yet failing to either imagine or execute on the best ways of leveraging those resources.

While I could imagine something like this happening, it's only really a big problem if either a) the ways in which we're falling short remain invisible to the relevant stakeholders, or b) our group proves to be difficult to influence. I'm not especially worried about a) given that critical feedback is pretty much the core competency of the EA community and most of our work will have some sort of public-facing component. b) is something we can control and, while it's not always easy to judge how to balance external feedback against our inside-view perspectives, as you've pointed out we've been pretty intentional about trying to work well with other people in the space and cede responsibility/consider changing direction where it seems appropriate to do so.

Comment by IanDavidMoss on RyanCarey's Shortform · 2021-05-22T20:52:55.266Z · EA · GW

I actually think you are an unusually skilled moderator, FWIW.

Comment by IanDavidMoss on Vitalik Buterin just donated $54M (in ETH) to GiveWell · 2021-05-17T11:17:59.440Z · EA · GW

Amazing! It seems not-totally-crazy to think you may have had a hand in this :)

Comment by IanDavidMoss on Vitalik Buterin just donated $54M (in ETH) to GiveWell · 2021-05-16T18:42:12.565Z · EA · GW

What Facebook threads are you referring to?

Comment by IanDavidMoss on Three charitable recommendations for COVID-19 in India · 2021-05-11T16:07:27.642Z · EA · GW

Thanks for adding the rec! It looks like they are working together, actually. From Swasti's updates page: "The campaign is in association with Swasti.org which in-turn is working with the Swasth Alliance & ACT to procure oxygen concentrators for the most in-distress areas in the country." It sounds like you've been in touch with Swasti directly, have you heard differently?

Comment by IanDavidMoss on Three charitable recommendations for COVID-19 in India · 2021-05-06T00:11:33.942Z · EA · GW

Excellent work! Do you know if there's any relationship between Swasti and Swasth, which also has an oxygen campaign?

Comment by IanDavidMoss on On Mike Berkowitz's 80k Podcast · 2021-04-21T14:30:45.028Z · EA · GW

The professional politicians of the Republican party were not close to siding with Trump. Will the Republican speaker (elected by the median house Republican) see higher expected value in supporting a coup or rejecting it? The party loses massive membership if they support, and gains defacto political power if they win. But Republicans just want to veto bills, so why transition to a populist regime. It will never be a good choice for the party.

The Republican House Minority Leader, Kevin McCarthy, was on Fox News November 6 saying, "Donald Trump won this election, so everyone who's listening: do not be quiet. Do not be silent about this. We cannot allow this to happen before our very eyes...Join together and let's stop this." He later signed onto an amicus brief supporting a lawsuit that, if successful, would have overturned the election in four states after the results were already certified. He then voted to reject certification of the election results in Arizona and Pennsylvania after the insurrection, along with most of his caucus.

Comment by IanDavidMoss on Share your views on the scope of improving institutional decision-making as a cause area · 2021-04-15T12:52:05.649Z · EA · GW

Hi Ramiro, that would be fine, although I recommend you caveat with the context that this is all in development/subject to change/etc. Thanks!

Comment by IanDavidMoss on Announcing "Naming What We Can"! · 2021-04-08T18:34:21.910Z · EA · GW

In fairness, David Moss was doing useful things in EA way before me, so I should probably be Ian David NO NOT THAT DAVID Moss!

Comment by IanDavidMoss on Announcing "Naming What We Can"! · 2021-04-08T18:30:34.840Z · EA · GW

David, I hate to remind you that EA interventions are supposed to be tractable...

Comment by IanDavidMoss on EA Funds has appointed new fund managers · 2021-03-23T23:51:09.928Z · EA · GW

I also found that confusing, for what it's worth.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-18T17:34:48.190Z · EA · GW

As part of the working group's activities this year, we're currently in the process of developing a prioritization framework for selecting institutions to engage with. In the course of setting up that framework, we realized that the traditional Importance/Tractability/Neglectedness schematic doesn't really have an explicit consideration for downside risk. So we've added that in the context of what it would look like to engage with an institution. With the caveat that this is still in development, here are some mechanisms we've come up with by which an intervention to improve decision-making could cause more harm than good:

  • The involvement of people from our community in a strategy to improve an institution's decision-making reduces the chances of that strategy succeeding, or its positive impact if it does succeed
    • (This seems most likely to be a reputation/optics effect, e.g. for whatever reason we are not credible messengers for the strategy or bring controversy to the effort where it didn't exist before. It will be most relevant where there is already capacity in place among other stakeholders or players in the system to make a change, whereby there is something to lose by us getting involved.)
  • The strategy selected leads to worse outcomes than the status quo due to poor implementation or an incomplete understanding of its full implications for the organization
    • (One way I've seen this go wrong is with reforms intended to increase the amount of information available to decision-makers at the expense of some ongoing investment of time. Often, there is insufficient attention put toward ensuring use of the additional information, with the result that the benefits of the reform aren't realized but the cost in time is still there.)
  • A failed attempt to execute on a particular strategy at the next available opportunity crowds out what would otherwise be a more successful strategy in the near future
    • (This one could go either way; sometimes it takes several attempts to get something done and previous pushes help to lay the groundwork for future efforts rather than crowding them out. However, there are definitely cases where a particularly bad execution of a strategy can poison critical relationships or feed into a damaging counter-narrative that then makes future efforts more difficult.)
  • The strategy succeeds in improving decision quality at that particular institution, but it doesn't actually improve world outcomes because of insufficient altruistic intent on the part of the institution
    • (We do define this sort of value alignment as a component of decision quality, but since it's only one element it would theoretically be possible to engage in a way that solely focuses on the technical aspects of decision-making, only to see the improved capability directed toward actions that cause global net harm even if they are good for some of the institution's stakeholders. I think that there's a lot our community can do in practice to mitigate this risk, but in some contexts it will loom large.)

I think all of these risks are very real but also ultimately manageable. The most important way to mitigate them is to approach engagement opportunities carefully and, where possible, in collaboration with people who have a strong understanding of the institutions and/or individual decision-makers within them.

Comment by IanDavidMoss on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T15:37:10.126Z · EA · GW

To clarify, when I wrote "without the promise of scale on the other side it's really hard to justify taking risks," I was talking from the perspective of the founder pouring time and career capital into a project, not a funder deciding whether to fund it.

Comment by IanDavidMoss on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T14:27:50.080Z · EA · GW

I generally think that full-time social entrepreneurship (in the sense of being dependent on contributed income) early in one's career is quite risky and a bad idea for most people no matter what context or community you're talking about. I would say that, if anything, EA has made this proposition seem artificially attractive in recent years because of a) the unusual amount of money it's been able to attract to the cause during its first decade of existence and b) the high profile of a few outlier founders in the community who managed to defy the odds and become very successful. But the fundamental underlying reality is that it's really hard to scale anything without a self-sustaining business model, and without the promise of scale on the other side it's really hard to justify taking risks.

With that being said, I do think that risk-taking is really valuable to the community and EA is unusually well positioned to enable it without forcing founders to incur the kinds of costs you're talking about. One option, as tamgent mentioned in another comment, is to encourage entrepreneurship as a side project to be pursued alongside a job, full-time studies, or other major commitment. After all, that's how GiveWell, Giving What We Can, and 80,000 Hours all got started, and the lack of a single founder on the job full-time at the very beginning certainly didn't harm their growth. Another option, as EA Funds is now encouraging, is to make a point of generously funding time-limited experiments or short-term projects that provide R&D value for the community without necessarily setting back a founder or project manager in their career. Finally, EA funders could seek to form stronger relationships with funders outside of the community that are aligned on specific cause areas or other narrow points of interest to be better referral sources and advocates for projects that expect to require significant funds over an extended period.

But coming back to your core point, I would definitely encourage most EAs to pursue full-time employment outside of the EA community, even if they choose to stay within the social sector broadly. It's a vast, vast world out there, and all too easy to draw a misleading line from EA's genuinely impressive growth and reach to a wild overestimate of the share of relevant opportunities it represents for anyone trying to make the world a better place.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-09T02:33:07.814Z · EA · GW

Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?

My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so. Aside from technologies to prevent an asteroid from hitting the earth and similarly super-rare cataclysmic natural events, I'm hard pressed to think of examples of things that are obviously worth working on that don't meet that test. But I'm happy to be further educated on this subject.

How do you feel about longtermist work that specifically aims at one of the following?

Yeah, that sort of "anti-fragile" approach to longtermism strikes me as completely reasonable, and obviously it has clear connections to the IIDM cause area as well.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-09T02:07:27.328Z · EA · GW

A part of it, definitely. At the same time, there are other projects that may not offer much opportunity for innovation but where I still feel I can make a difference because I happen to be good at the thing they want me to do. So a more complete answer to your original question is that I choose and seek out projects based on a matrix of factors including the scale/scope of impact, how likely I am to get the gig, how much of an advantage I think working with me would offer them over whatever the replacement or alternative would be, how much it would pay, the level of intrinsic interest I have in the work, how much I would learn from doing it, and how well it positions me for future opportunities I care about.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T20:09:52.370Z · EA · GW

  1. Be aware of your decisions in the first place! It's really easy to get so caught up in our natural habits of decision-making that we forget that anything out of the ordinary is happening. Try to set up tripwires in your team meetings, Slack chats, and other everyday venues for communication to flag when an important fork in the road is before you and resist the natural pressure people will feel to get to resolution immediately. Then commit to a clear process of framing the choice, gathering information, considering alternatives, and choosing a path forward.
  2. Match the level of information-gathering and analysis you give a decision to its stakes. Often organizations have rote processes set up for analysis that aren't actually connected to anything worth worrying about, while much more consequential decisions are made in a single meeting or in a memo from the CEO. Try to establish a discipline of asking how much of your/your team's time it's worth spending on getting a decision right. Try to ensure that every piece of knowledge your team collects has at least one clear, easily foreseen use case in a decision-making context, and dump any that are just taking up space.
  3. Try to structure decisions for flexibility and option value. Look for ways to run experiments and give yourself an out if they don't work, ways to condition a decision on some other event or decision so that you aren't backed into making a choice before you have to, ways to hedge against multiple scenarios. Obviously, there will be situations when there is one correct choice and you need to go all-in on that choice. But in my experience those are pretty rare, and clients are more likely to make the opposite error of overcommitting to decision sequences that overly narrow the set of reasonable future options and cause problems down the line because of that.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T19:52:22.394Z · EA · GW

Besides theory of change, which tessa mentioned, I've found myself increasingly focusing on the "front end" of decision-making rather than very detailed tools to choose from among defined alternatives, because in my experience leaders and teams generally need help putting more structure around their decision-making process before they can engage productively with such methods.

One innovation I've been working on is a tool called the decision inventory, which is a way for clients to get a sense of the landscape of decisions facing them and prioritize among those decisions. It's a much more intuitive exercise and can be done much more quickly than a formal decision analysis or cost-benefit model, so it lends itself well to introducing the concepts and building buy-in among a team to do this kind of work. It can be especially helpful for teams because different team members have a different view of the decision landscape, and will have different ideas about what decisions are important for which reasons, so activating that collective intelligence can be educational for leaders.
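
As a rough sketch of what an inventory entry might capture (the fields and scoring heuristic here are invented for illustration; the real exercise is more qualitative):

```python
# Hypothetical sketch of a decision inventory with a simple priority heuristic.
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    stakes: int         # 1-5: how much is riding on getting this right?
    urgency: int        # 1-5: how soon must it be made?
    reversibility: int  # 1-5: 5 = easily reversed, 1 = locked in

    def priority(self) -> int:
        # Invented heuristic: high-stakes, urgent, hard-to-reverse decisions first.
        return self.stakes * self.urgency * (6 - self.reversibility)

inventory = [
    Decision("Hire program director", stakes=4, urgency=3, reversibility=2),
    Decision("Choose CRM vendor", stakes=2, urgency=4, reversibility=4),
    Decision("Shift geographic focus", stakes=5, urgency=2, reversibility=1),
]

for d in sorted(inventory, key=Decision.priority, reverse=True):
    print(f"{d.priority():3d}  {d.name}")
```

Different team members will often fill in these fields quite differently, which is exactly the collective-intelligence signal described above.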

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T19:36:16.838Z · EA · GW

  1. You should hope that the transition will be painless, but prepare for it to be really, really hard just in case. I definitely recommend starting out with at least 6-9 months of runway for basic living expenses so that you can manage stress about being able to support yourself. It also helps if you can have one or two client engagements lined up before you actually make the jump. In retrospect, I did this transition in Hard Mode by switching cause area focus at the same time as I went from being an employee to an entrepreneur, which necessitated essentially rebuilding my network from scratch. Don't do this. If you do want to make a career switch, you'll have a much easier time if you get another job in your preferred area first and then go independent after that.
  2. One thing I've learned since I started is that client work is itself the best business development. There's really no comparison between a pitch and a referral -- the latter is dramatically more effective in making the case. Another tip is that you can create a lot of opportunities by doing the legwork of chasing obscure RFPs for projects you want to do but are not really qualified for, and then approaching other consultants (who are qualified for them) to ask if they want to partner with you on a bid. That way you get to know that firm and you gain relevant experience if your team wins the contract.
  3. This really threw me for a loop my first few years. The money is one thing, but being under-utilized for a while can also be really bad for your sense of self-worth -- and scrambling to meet a million deadlines obviously has its downsides as well. I've generally found the valleys to be more challenging to manage than the peaks, as very few of my projects are so time-sensitive that pushing off a deadline here or there is going to cause a catastrophe. I've found it helpful to maintain an active learning and writing practice as part of my portfolio of activities that can expand or contract to meet the moment. These are things I want to do anyway, and so if I find I have extra time to do them it's almost a blessing rather than something to be bummed about.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T19:12:24.052Z · EA · GW

One of the realities of consulting is that, unless you get very lucky, you generally do have to be at least somewhat opportunistic in taking projects early on. I'm now in the fourth year of running my business and I'm able to be a lot pickier than I was when I first started, but if I limited my work to clients that were only focused on typical EA cause areas, I'd run out of clients pretty quickly. So I've cast my net quite a bit more broadly, which not only expands the opportunity set but also hedges against me getting typecast and positions me to be competitive/relevant in a wider range of professional networks, which I think is valuable for all sorts of reasons.

Another thing to keep in mind is that I've found that having clients that look great on paper doesn't always mean that you are able to achieve a lot of impact with them. Some of my most successful projects have been with clients that did smaller-scale work or were less sophisticated in their approach, because they knew they needed guidance from an outside expert and were willing to cede a lot of authority and creative input to me as part of the process. When you're really trying to innovate and move the field forward, it helps a lot to have clients like these because they aren't anchored on the usual ways of doing things, which makes them more open to trying out ideas. A lot of the sales process for consulting comes down to reassurance that someone else has done this thing and it worked out great for them, so getting those first few case studies locked down can be really important.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T18:53:57.329Z · EA · GW

Great questions!

  1. I'm on record as believing that working on EA-style optimization within causes, even ones that don't rise to the top of the most important causes to work on, is EA work that should be recognized as such and welcomed into the community. I got a lot of pushback when I published that post over four years ago, although I've since seen a number of people make similar arguments. I think EA conventional wisdom sometimes sets up a rather unrealistic, black-and-white understanding of why other people engage in altruistic acts: it's either 100% altruistic, in which case it goes into the EA bucket and you should try to optimize it, or it's not altruistic at all, in which case it's out of scope for us and we don't need to talk about it. In reality, I think many people pursue both donations and careers out of a combination of altruistic and selfish factors, and finding ways to engage productively about increasing the impact of the altruism while respecting the boundaries put in place by self-interest is a relatively unexplored frontier for this community that has the potential to be very, very productive.
  2. This depends on whether you center your perspective on the EA community or not. There are lots of folks out there in the wider world trying to improve the functioning of institutions, but most of them aren't making any explicit attempt to prioritize among them beyond whether they are primarily mission- or profit-driven. In this respect, the EA community's drive to prioritize IIDM work based on opportunity to improve the world is quite novel and even a bit radical. On the EA side of things, however, I think there's not enough recognition of the value that comes from engaging with fellow travelers who have been doing this kind of work for a lot longer, just without the prioritization that EA brings to the table. IIDM is an incredibly interdisciplinary field, and one of the failure modes that I see a lot is that good ideas gain traction within a short period of time among some subset of the professional universe, and then get more or less confined to that subset over time. I think EA's version of IIDM is in danger of meeting the same fate if we don't very aggressively try to bridge across sectoral, country, and disciplinary boundaries where people are using different language to talk about/try to do the same kinds of things.
  3. My main discomfort with longtermism has long been that there's something that feels kind of imperialist, or at least foolish, about trying to determine outcomes for a far future that we know almost nothing about. Much of IIDM work involves trying to get explicit about one's uncertainty, but the forecasting literature suggests that we don't have a very good language or tools for precisely estimating very improbable events. To be clear, I have no issue with longtermist work that attacks "known unknowns" -- risks from AI, nuclear war, etc. are all pretty concrete even in the time horizon of our own lives. But if someone's case for the importance of something relies on imagining what life will be like more than a few generations from now, I'm generally going to be pretty skeptical that it's more valuable than bednets.
  4. My own career direction has shifted pretty radically over the past five years, and EA-style thinking has had a lot to do with that. Even though I stand by my position in point #1 that cause neutrality shouldn't be a prerequisite for engaging in EA, I have personally found that embracing cause neutrality was very empowering for me and I now wish I had done it sooner. It's something I hope to write more about in the future.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-08T18:20:03.665Z · EA · GW

I love the way you phrased this question -- in fact, one of the reasons why I'm such a big believer in theories of change (so much so that I wrote an introductory explainer about them) is that they are excellent for revealing strategic mistakes in a client's thinking.

A frequent pitfall I come across is that the originators of an organization or program often fall in love with the solution rather than the problem. By that I mean they see a problem, think immediately of a very detailed solution for that problem -- whether it's a software platform, some other kind of technology or innovation, an adaptation of an existing idea to a new audience or environment, etc. -- and get so invested in executing on that solution that it doesn't even occur to them to think about modifications or alternatives that might have higher potential. Alternatively, the solution can become so embedded in the organization's identity that people who join or lead it later on see the specific manifestation of the solution as the organization's reason to exist rather than the problem it was trying to solve or opportunity it was trying to take advantage of.

This often shows up when doing a theory of change for a program or organization years down the line after reality has caught up to the original vision -- day-to-day activities, carried out by employees or successors and shaped through repeated concessions to convenience or other stakeholders, often imply a very different set of goals than are stated in the mission or vision statement! For that reason, when doing a theory of change, I try to encourage clients to map backwards from their goals or the impact they want to create and forget for a moment about the programs that currently exist, to encourage them to see a whole universe of potential solutions and think critically about why they are anchored on one in particular.

Comment by IanDavidMoss on Improving Institutional Decision-Making: a new working group · 2021-03-08T17:17:57.597Z · EA · GW

Excellent points, Michael! I agree with much of what you wrote here, especially the first three points. I think the most important theme you bring up is the relevance of indirect influence, which you're absolutely right isn't reflected well enough in the definition as currently written. We are working on an operationalization of this definition now for the purposes of prioritizing key institutions, and I believe the way we've structured it will allow us to take considerations like these into account. Would love to have your feedback on it and will PM you with more info.

Comment by IanDavidMoss on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-05T14:30:15.692Z · EA · GW

Hi Michael, there are some sample project descriptions over at my website, but I'll paste a couple here for convenience: 

For more than 18 months, I worked with Democracy Fund’s Strategy, Impact and Learning team to bolster organizational capacity for strategic decision-making and develop a framework for risk assessment and mitigation across the organization. Deliverables included a training for 35+ senior and program staff covering forecasting skills and decision analysis, a concept paper, and recommendations to strengthen the approval process for nearly $40M in annual grantmaking. (2018-20)

I advised the Omidyar Network on the development of its Learning & Impact plan, with particular attention to designing team and organization-wide accountability systems that incentivize smart decision-making habits and practices as an alternative to traditional outcomes-based accountability. In addition, this engagement helped support the creation of a framework to help philanthropic institutions respond to the uncertainty created by the COVID crisis. (2020)

In partnership with BYP Group, I developed theories of change for two grant programs administered by Melbourne, Australia-based Creative Victoria (a state government agency). In addition, I worked directly with nine Creative Victoria grantees to create evaluation frameworks for their funded projects, which sought to use creative industry assets to accelerate progress on longstanding social issues such as mental health, social cohesion, and gender equality. (2018)

Those should give you a high-level sense of what I do, but I'm happy to answer more specific questions as bandwidth allows.

Comment by IanDavidMoss on Introducing High Impact Athletes · 2021-02-24T20:49:58.161Z · EA · GW

I think the core of our disagreement here stems from the fact that you are treating diversity considerations and meeting the objectives of the organization as separate, e.g.:

My sense is that any time spent on doing this would pretty directly trade off against the time used to drive the core organizational objective forward. 

Throughout this thread, I've been trying to make the point that time spent on doing this sometimes IS time to drive the core organizational objective forward, and in such cases a statement like yours makes no more sense than saying that spending time finding investors for your project or developing technical infrastructure for it is time that detracts from organizational objectives. I'll give you an example from my own past to illustrate the point. Many years ago, I embarked on a project to expand what had been a successful personal blog into a more formal think tank. I recruited the initial team from a set of trusted colleagues I'd previously worked with, all of whom happened to be white. This subsequently became a significant liability for our work when the field we were working in became increasingly focused on racial justice. This was because:

  • My own life experiences and biases led me to underestimate the degree to which my colleagues cared about racial justice and to fail to see certain ways in which it was relevant to our core mission, which in turn led me to make different choices about what we covered and prioritized than I would have if the team had been more diverse from the beginning.
  • It was difficult to publish content that would carry credibility on that topic without including perspectives from people of color, and simultaneously difficult to recruit highly qualified candidates of color to our team because we were seen (correctly) as a predominantly white institution. Overcoming those barriers required a ton of time and emotional energy on the part of me and everyone on the team, but without it, we wouldn't have been able to create what ended up being probably the most impactful and enduring content we ever produced.

It's not just me: I've seen the story above unfold over and over again at dozens of other organizations that I've had nothing to do with. E.g., social sector consulting firms whose entire business models are now upside down because for a long time they were a collection of white people getting paid to be the "brains" behind strategies to address issues in communities of color, and now foundations and other clients would prefer to pay people who have directly experienced those issues to develop those strategies. In both their case and mine, it would have been so much easier to attend to this from the beginning than to let it play out and clean up the mess later on, and more than likely it would have improved the quality of the products and services that we were able to offer our constituencies.

I'm going to make this my last comment because I find this format rather difficult to engage in, so in closing I'll just say that my only agenda here is to help people in this community, which I care about a lot, avoid the mistakes that I made. Your argument is that founders don't have spare time to think about diversity. But most founders will have to spend time thinking about diversity at some point in an organization's life if it gets big and high-profile enough, and if the intention from the beginning is for an organization to get big and high-profile, then it makes sense to think about diversity from the beginning too.

Comment by IanDavidMoss on Introducing High Impact Athletes · 2021-02-24T16:24:57.152Z · EA · GW

It's been a while, but I wanted to come back to this since it seemed like the nuance I was trying to convey with my last comment wasn't coming through.

You're correct, of course, that any time you elevate a consideration up the priority chain, it will necessarily result in deprioritizing something else. What I was trying to say is that team effectiveness, whether we define it as the ease with which the team works together or as the overall impact its members are able to create collectively, does not have to be the victim of that tradeoff. Specifically, I can think of two ways in which diversity could be prioritized by trading off against considerations other than effectiveness:

  1. Time. For a founder or founders without a very diverse network, assembling a more diverse team that's as good or better than a non-diverse team will likely require additional calendar time and effort. Perhaps the project is a very time-sensitive one that hinges on getting up and running quickly, and putting in that effort doesn't make sense. But in a lot of cases, it might -- plus then they have a more diverse network for anything else they might need it for later.
  2. Personal involvement. In some cases, founders might be so excited about an idea that they overestimate their personal fit for bringing it to fruition. Depending on the specifics of the project, it might be better for it to be led forward by someone who does have a diverse network they can bring to bear right away. This is potentially quite relevant, for example, in situations where most of the intended beneficiaries of the product or service have backgrounds that are very unlike the founder's.

It's easy (for me, at least) to imagine situations in which making either of these tradeoffs would be net neutral or positive for team effectiveness in expectation. As I mentioned, though, this will depend a lot on the goals and target audience of the initiative.

Comment by IanDavidMoss on Any updates to high-impact COVID-19 charities? · 2021-02-08T16:39:10.082Z · EA · GW

Hi Warren, if you haven't seen it, I recommend checking out the more recent analysis of COVID giving opportunities that Catherine Olsson and I wrote up for the FRAPPE donor circle last spring and summer. We reconvened the group last month and have identified the following top recommendations:

  • Fast Grants: a top choice in our analysis last year, Fast Grants is now supporting efforts to sequence new virus variants in the United States, Europe, and India, and to accelerate deployment of COVID-19 treatments in countries that are not likely to complete vaccination this year (which includes almost all poorer countries). Our group has committed $283,000 to Fast Grants in this round.
  • COVID-19 Vaccine Equity Project: CVEP is a partnership of the Sabin Vaccine Institute, Dalberg (which is leading the Country Readiness plank of the COVAX initiative), and JSI Research & Training, offering technical assistance on vaccine rollout and distribution across multiple low- and middle-income countries. With the main vaccine distribution effort led by Gavi, WHO, and country-level ministries of health, CVEP's role is to fill in gaps and try to solve bottlenecks on the ground. The Skoll Foundation supported a pilot phase that unfolded in late 2020, and CVEP is now trying to raise $19 million to scale up over the course of 2021.

Feel free to get in touch with any questions.

Comment by IanDavidMoss on 2018-19 Donor Lottery Report, pt. 2 · 2021-01-11T16:28:41.116Z · EA · GW

I would love for the new Improving Institutional Decision-Making working group to be considered for funding. You can find a description of our planned activities for 2021 at the link, and this comment includes a more detailed explanation of why funding would be helpful. Happy to provide additional information on request.

Comment by IanDavidMoss on Improving Institutional Decision-Making: a new working group · 2021-01-02T07:23:58.181Z · EA · GW

Glad you think so! :) Here are some brief answers to your questions:

How did this working group come about?
It's been a very gradual and organic process. An abbreviated version of the story is as follows: In 2019, I pitched the EA Global organizers on organizing an IIDM meetup at the London conference. They agreed, and I ended up co-hosting the event with Tamara Borine and Sam Hilton (who also commented in this thread). Tam and I continued to meet regularly following the conference, and eventually she recruited Vicky and Laura to join us in mid-2020 based on ongoing conversations she'd had with each of them. Tam recently had to take a step back due to other commitments, so the core organizing team is now the three of us.

What was the research process for writing this post?
We haven't really thought of this post as a research post, so I'm not quite sure how to answer the question. I have a personal learning agenda for IIDM-related topics that I've been pursuing for the past several years, and drafted the original version of the IIDM definition based on that. The definition was subsequently refined considerably as a result of feedback from our advance readers. The list of initiatives came about as a result of a three-stage voting process our team undertook as an experiment in adapting more formal decision-making methodologies to our internal work; we expect to do more of that as the year goes on and hope to write about it as bandwidth allows.

How have you engaged with stakeholders so far?
The feedback process around this post and our stewardship of the IIDM Facebook and Slack communities have been the main activities so far. We also had a round of 1-on-1 conversations with around half a dozen EA leaders in early 2020 to gauge general interest in the cause area. We still have work to do to understand the EA community, but from my perspective the bigger gap is with stakeholders doing IIDM-related work outside of the EA context, whom we have yet to really engage in a formal way.

What kind of structures have you considered for this working group (e.g. purely professional, or a mix of staff & volunteers?) and which is the ideal one?
Right now, we are volunteer-driven out of necessity: we think the work is important to move forward, but we don't have any funding. I'm not sure if our group/organization needs to be fully professionalized on a sustained basis, but I think the more resources we have, the more possibilities it opens up. For example, even if we ran out of ways to put the money toward operations, we could set up a regranting fund for IIDM-related initiatives doing valuable work.

How are you planning to manage volunteers?
We'll be hashing out the details of that over the next few weeks, but as a general model, I imagine we'll determine what work needs to be done for each project/initiative and which tasks we feel comfortable delegating, set up whatever training or documentation is needed to position volunteers for success on those tasks, and then match the people who have expressed interest with the relevant tasks. As we continue to work with folks, we'll identify those who seem ready and interested in taking on more responsibility, and elevate them to management roles where possible.

If you haven't achieved your 2021 goals, what do you think would be the most likely reason?
Our goals are pretty ambitious for a group of part-time volunteers, so it wouldn't be all that surprising if we fell short! Lack of funding is a big constraint for us. Since I operate an independent consulting practice, I could redirect a large portion of my time to this project if we were funded, but as a volunteer I face much tighter limits on the hours I can spare and a bigger risk of missing deadlines because I have to prioritize paying clients. So that would be the most likely reason.

Comment by IanDavidMoss on Improving Institutional Decision-Making: a new working group · 2021-01-02T06:28:35.761Z · EA · GW

Thanks, Sam -- your feedback during the draft phase was extremely helpful and I'm happy for these open questions to be aired publicly as well. 

Re: the name
We've had a number of conversations about this, and at this point I'd say it looks like the name isn't going anywhere for the time being. There is definitely a contingent of folks who aren't crazy about IIDM as a label, but it has its fans as well, and all of the alternatives that have been suggested have shortcomings of their own. Ultimately, I think that once some of the work we're describing here has been undertaken, there will be more concrete outputs for people to associate with our community and the name won't have to carry as much weight on its own.

Re: scope
This is definitely a work in progress for us, and even the process of drafting this post was helpful for sharpening our sense of what our scope is and isn't. 

  • Regarding careers, I do want to clarify that we don't consider career guidance to be inherently out of scope for us. In fact, we are working informally with 80K to funnel mentees into IIDM community spaces so that they can have a way to learn about relevant opportunities and resources. However, we feel it's premature for us to try to offer individualized career advice before we have a better sense of how the priorities stack up, and before we've had a chance to broaden our networks to include well-placed people in key institutions. The activities we've laid out for this year should help us make progress on both fronts.
  • Regarding individual decision-making: indeed, I see this as more CFAR's domain, although there are certainly important individual decisions that take place within professional or institutional contexts. So it's kind of on the edge of our scope, but more in than out.
  • On ways to improve institutions that are not directly related to decision-making: this is related to your third point, so I'll address it below.
  • Your last suggestion is covered in our post -- we mention new institutions in our list of levers to improve IIDM, so we do consider it in scope for us.

With all of these, it's important to emphasize that because our ambition is primarily to provide a connecting and coordination function, it's possible for things to be "in scope" for us even where we would expect other parties to be the primary drivers. Individual decision-making is a good example of this: we wouldn't try to replicate or compete with what CFAR is doing, but we can still consider them part of our community, broadly speaking, because of the relevance of their work to ours.

Re: whether to emphasize "institutions" or "decision-making" more
I think the questions you bring up here are quite profound. I will say that I was initially drawn to this cause area and Jess's framing of it in her 80K profile in no small part because of its explicit emphasis on decision-making. As an employee or consultant, I've seen the inside of dozens of mission-driven organizations over the course of two decades, so I'm reasonably well positioned to pick up on patterns of institutional structure and routines. From what I've seen, there are relatively mature infrastructures (by which I mean formal roles, career tracks, training programs, etc.) for organizational functions such as program strategy, operations, research and evaluation, and executive leadership. Not so for decision-making, even though it cuts across all of the aforementioned areas and is absolutely central to what an organization actually accomplishes. In all of my time in the workforce, I have never seen a phenomenon as well-studied and obviously relevant as decision-making receive so little support from the organizations on behalf of which those decisions will be made. It's just assumed that everyone already knows how to make decisions well, even though the research clearly demonstrates that's not the case. It's really quite a puzzle!

Getting back to the question of whether the goal is to improve institutions or improve decision-making at institutions, I see this as something of a false dichotomy. Institutions make their mark on the world via the sum total of the decisions they make, so by improving institutions you're necessarily improving their decisions and vice versa. I agree that some things institutions do are not as easily recognizable as decisions as others--you mentioned working to improve their reputations or communications as examples. Even in those cases, however, there are still decisions to be made: about how to prioritize staff time, budget, and executive attention in service of those priorities; about which audiences are most important and what messages are most desirable; and so forth. We are making decisions all the time; right now, I am choosing which words best express my opinions to you; I am choosing to stay up a bit past my bedtime to respond to this comment; I am choosing to prioritize my engagement with IIDM over other volunteer opportunities; and the sum total of those choices helps to outline the shape of the impact I create in the world, or don't. And that last principle applies to organizations just as well as to individuals. At least that's the way I see it.

Comment by IanDavidMoss on Effective charities for improving institutional decision making and improving global coordination · 2020-12-08T14:35:26.271Z · EA · GW

This is far from a comprehensive or fully vetted list, but here are some ideas off the top of my head on the improving institutional decision-making front:

  • The Alliance for Useful Evidence
  • The UK What Works Centres/Evidence Quarter, particularly the What Works Centre for Wellbeing
  • Society for Judgment and Decision-Making
  • Society of Decision Professionals
  • Behavioural Insights Team and ideas42 (mentioned below by Mati)
  • Campbell Collaboration (research syntheses of social science literature)
  • Cochrane (research syntheses of medical literature)
  • CEDIL (Centre of Excellence for Development Impact and Learning)
  • Project Evident
  • Hubbard Decision Research
  • Strategic Decisions Group

Of these, most operate on a fee-for-service model and wouldn't necessarily be able to make good use of individual donations in the, say, $5k-and-under range. However, I believe that Alliance for Useful Evidence and Campbell Collaboration specifically operate on shoestring budgets and are mostly funded by contributed income, so I'd check into those first if you're considering a donation.

FYI, the improving institutional decision-making (IIDM) coordinating group within EA is working on a resource directory that will eventually be able to answer questions like these in greater detail. We'll be posting more about that on the EA Forum later this month.

Comment by IanDavidMoss on Introducing High Impact Athletes · 2020-12-04T14:55:01.105Z · EA · GW

I don't think anyone's suggesting optimizing for demographic diversity. I'm advocating for satisficing, which is a much weaker constraint. And while I understand the mathematical argument that you're making, in practice I reject the premise that including demographic diversity in one's recruitment calculus will always harm team effectiveness (even if only a little bit). If it spurs a founder to widen their search, broaden their network, and consider more options than they would have otherwise, in some cases it could result in increased effectiveness vs. the counterfactual.
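
To make the distinction concrete, here's a minimal toy sketch of optimizing versus satisficing in team selection. The candidate names, scores, and group labels are entirely made up for illustration; this isn't drawn from any real hiring data or from anything earlier in this thread.

```python
# Toy sketch: picking a team of size k by pure score optimization
# vs. satisficing on a diversity constraint. All data is made up.
import itertools

# Each candidate: (name, effectiveness_score, demographic_group)
candidates = [
    ("A", 9.1, "g1"), ("B", 8.9, "g1"), ("C", 8.8, "g2"),
    ("D", 8.7, "g1"), ("E", 8.6, "g2"), ("F", 7.0, "g1"),
]

def optimize(pool, k):
    """Pick the k highest-scoring candidates, ignoring everything else."""
    return sorted(pool, key=lambda c: c[1], reverse=True)[:k]

def satisfice(pool, k, min_groups=2):
    """Pick the highest-scoring team of size k that includes at least
    min_groups demographic groups (the satisficing constraint)."""
    eligible = [
        team for team in itertools.combinations(pool, k)
        if len({c[2] for c in team}) >= min_groups
    ]
    return max(eligible, key=lambda team: sum(c[1] for c in team))

print(optimize(candidates, 3))   # A, B, C: top three by score alone
print(satisfice(candidates, 3))  # A, B, C again: the constraint costs nothing here
```

In this made-up pool, the constrained and unconstrained picks coincide, which is the point: a satisficing constraint only binds when an unconstrained search would have produced a homogeneous team, and widening the search pool can shrink or eliminate even that cost.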