Gordon Irlam: an effective altruist ahead of his time
post by Louis_Francini · 2020-06-10
When people think about the history of effective altruism as a memeplex, they generally assume that it developed into a coherent philosophy around the time of Giving What We Can’s founding in 2009. Of course, you could point to moral philosophers like Bentham, Singer, or Unger whose ideas naturally implied EA, but they never fully laid the conceptual groundwork for EA methodology. You could also point to altruists who were highly effective, but who didn’t develop a detailed EA worldview or methodology. To be sure, the idea of altruism was not new, nor was the idea of effectiveness, but the unique combination—emphasizing principles of cause neutrality, epistemic and instrumental rationality, quantitative analysis, and the importance of considering counterfactuals—had never been seen before. Or had it?
Enter the work of Gordon Irlam. Irlam has a varied résumé that includes software engineering at Google in 2004, a stint as a grad student in a malaria research lab in 2005, and, most recently, self-study in artificial intelligence. He also runs a small charitable foundation that has donated over $1.7 million to charities, mostly working on developing-world health and poverty and on global catastrophic risks. But what I would like to highlight is his essay “Making a difference,” which does not list a creation date but was last edited in January 2004.
The similarities between this 2004 essay and modern EA philosophy are uncanny. The article begins by discussing the difficulty of attributing counterfactual impact, and then goes into a very detailed discussion of replaceability, similar to what would later be seen in William MacAskill’s 2014 paper “Replaceability, Career Choice, and Making a Difference”. Here is one quote:
We each seek to exercise our free will in such a way as to maximize our preferred utility function of the world. What makes this difficult is the interlinking of any action we might take with the action of others. For instance, if somebody accepts a job working as a youth counsellor, offsetting the good that might be done, is the loss of good the next best candidate would have contributed. Taking the job causes things to ripple down the line, as they in turn, displace somebody else from some other job, and so on.
While there are earlier predecessors to the idea of “earning to give”, Irlam provides the clearest pre-EA argument I have seen:
Suppose you have a skill that is highly valued by employers, but you lack skills highly valued with respect to your utility function. Then, one option that makes a lot of sense is to take a high paying job that is neutral with respect to your utility function, and to donate much of what you earn to an organization that works on what you care about. This allows you to translate the skill you don't value into being effectively highly skilled at what you care about. You will undoubtedly be able to achieve more by working in this fashion than working on the issues you care about directly.
The article concludes by discussing the pivotal role one person, Viktor Zhdanov, played in the eradication of smallpox. This is eerily similar to the way EAs often talk about Stanislav Petrov or Vasili Arkhipov. William MacAskill would later write an article praising Zhdanov in 2015.
Irlam did more than theorize about the economics of doing good. He also tried to put these principles into action by developing his “Back of the Envelope Guide to Philanthropy,” which compares various philanthropic causes by their “leverage factor,” a measure of an intervention’s cost-effectiveness rather than an organization’s overhead ratio. According to the copyright notice, this project was started in 2005, but the earliest archive on the Wayback Machine is from 2008. For comparison, GiveWell was launched in 2007. In other words, it appears that Irlam independently discovered cause prioritization. At some point between 2011 and 2013, AI safety was added to the top of the list of causes.
I’m not saying Gordon Irlam is the earliest person to come up with these EA ideas, or even the earliest to write them down. For all I know, there’s an obscure economics paper or Usenet post from decades earlier that is even more uncannily similar to modern EA. Regardless of this possibility, I think Irlam deserves some recognition for his accomplishments.
H/T to Issa Rice for pointing me to Gordon Irlam, and to Matthew Barnett for proofreading and editing this post.